SubstantialNothingness

  • 0 Posts
  • 21 Comments
Joined 11 months ago
Cake day: November 20th, 2023

  • I wouldn’t do it lol. But you could build a workflow that produces similar results using only prompts and some custom tooling.

    The creator of this piece said it can’t be made with a prompt. What they’re insinuating is that some kinds of input make the output superior to others. But that’s an arbitrary and ignorant argument when you come at it from the perspective of LLM product design.

    It used to be that parameters were input at the command line. Obviously that becomes impractical for mature use cases, so UI frontends were created to give us sliders and checkboxes. That matches the kind of environment this creator is already working in. But those users are also typically power users (by interest, not necessarily competency), and we’ve seen a new iteration of UI design for consumer LLM products like DALL-E and Midjourney. Just because the model has a user-friendly skin does not mean the functionality is any less capable - you can pass parameters inside prompts, for example.
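
    To make that concrete, here’s a minimal sketch in Python of parameters riding along inside a prompt. The --flag syntax is modeled loosely on Midjourney-style parameters; the flag names and the parser itself are purely illustrative:

    ```python
    # A toy parser for Midjourney-style inline parameters. The flag
    # names are made up for illustration, not any real product's syntax.
    import re

    def parse_prompt(raw: str) -> tuple[str, dict[str, str]]:
        """Split a prompt into its text and any trailing --key value flags."""
        params = dict(re.findall(r"--(\w+)\s+([\w:.]+)", raw))
        text = re.sub(r"--\w+\s+[\w:.]+", "", raw).strip()
        return text, params

    text, params = parse_prompt("a castle at dusk --ar 16:9 --steps 30")
    print(text)    # a castle at dusk
    print(params)  # {'ar': '16:9', 'steps': '30'}
    ```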

    But if you had a use case for a product that drives ControlNet through prompts alone, like I said, it would benefit from extra tooling: presets, libraries, defaults, etc. With those in place, the more powerful functionality becomes quickly accessible through the prompt format. I’m not Nostradamus, but one area where simple inputs are far more desirable than power-user dashboards is products meant to be used by drivers (not all of which relate to the act of driving itself - voice-to-text messaging, for example).
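
    As a sketch of what that tooling could look like, here’s a hypothetical preset/defaults layer. Every name in it is made up for illustration:

    ```python
    # A hypothetical preset/defaults layer for a prompt-only workflow.
    # Stored presets let a bare prompt carry power-user settings.
    DEFAULTS = {"steps": 30, "cfg_scale": 7.0, "sampler": "euler_a"}

    PRESETS = {
        "architecture": {"controlnet": "canny", "cfg_scale": 9.0},
        "portrait": {"controlnet": "openpose", "steps": 40},
    }

    def build_request(prompt: str, preset: str | None = None, **overrides):
        """Merge defaults, an optional preset, and per-call overrides."""
        settings = dict(DEFAULTS)
        if preset:
            settings.update(PRESETS[preset])
        settings.update(overrides)
        return {"prompt": prompt, **settings}

    # One extra word pulls a whole power-user configuration into the prompt:
    print(build_request("crystal palace reskin", preset="architecture"))
    ```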

    I guess my point is that the argument “my sliders are better than your prompt” is like saying the back door gets you into the house better than the front door. It’s really nothing more than a schoolyard pissing contest that shows a limited perspective on the matter.



  • I’m pretty sure they are saying that this piece of… well, let’s just say this “piece”, wasn’t created by prompting. They are trying to place themselves above the most proficient prompters.

    Which implies that they used methods that give the creator more control, such as ControlNet. ControlNet provides a structure for the generative AI to build on, which means you can guarantee certain features appear in the final output. So you could, for example, reskin a real building with fantastic imagery while retaining the building’s original form.
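
    To illustrate the kind of workflow being described, here’s a sketch using the Hugging Face diffusers library with a Canny edge ControlNet. The model IDs are common public checkpoints; the file names are placeholders:

    ```python
    # Sketch: reskin a real building while ControlNet preserves its form.
    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Extract edges from a photo; this edge map is the "structure"
    # the generative model has to build on.
    photo = np.array(Image.open("building.jpg").convert("RGB"))
    gray = cv2.cvtColor(photo, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # The edge map guarantees the building's silhouette survives;
    # the prompt supplies the fantastical reskin.
    result = pipe(
        "an overgrown crystal palace, fantasy concept art",
        image=control_image,
        num_inference_steps=30,
    ).images[0]
    result.save("reskinned_building.png")
    ```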

    But unless someone is capable of tailoring their own ControlNet-style software, they’re still really just a script kiddie. (In this case, a script kiddie with no taste but a lot of problems.)

    Of course, you could also train an LLM to drive ControlNet settings solely through prompts, since LLMs let us use our own personal vocabulary as a high-level natural-language programming language. However, script kiddies are fundamentally reactionary in their own right. They aren’t good at thinking for themselves (or perhaps at all). They will eat what is put on their plate, but if we place them in the kitchen then we will surely all starve.
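
    As a sketch of that idea: have the LLM translate a plain-language request into structured ControlNet settings. `call_llm` below is a stand-in for whatever chat-completion client you’d actually use, and the schema is made up for illustration:

    ```python
    # Sketch: an LLM maps personal vocabulary to ControlNet settings.
    import json

    SYSTEM = (
        "Translate the user's request into JSON with keys: "
        "'prompt' (str), 'controlnet' (one of: canny, depth, openpose), "
        "'conditioning_scale' (float, 0-2). Output JSON only."
    )

    def call_llm(system: str, user: str) -> str:
        # Placeholder: wire up your chat-completion client of choice here.
        raise NotImplementedError

    def settings_from_prompt(user_request: str) -> dict:
        """Ask the LLM for settings, then validate what comes back."""
        settings = json.loads(call_llm(SYSTEM, user_request))
        assert settings["controlnet"] in {"canny", "depth", "openpose"}
        return settings

    # e.g. settings_from_prompt("keep the building's outline, make it
    # a crystal palace") might yield:
    # {"prompt": "a crystal palace ...", "controlnet": "canny",
    #  "conditioning_scale": 1.2}
    ```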





  • I think it’s probably because recall is often a demonstration of proficiency - think of how consuming/reading a language is easier than producing/writing it. It’s not the only sign of proficiency, but it’s one of them.

    On the other hand, we benefit more from current technology by being proficient with references, and mastery of an entire field is now inefficient and/or unattainable. Even in languages - native languages at that - most of us only become proficient at producing contemporary styles, whereas it often takes specialists to decipher old texts with the appropriate linguistic and historical context. But now chatbots can fill in for the specialist by acting as a more widely available and in-depth reference, I guess.