Apparently, stealing other people's work to create products for money is now "fair use" according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?

"Because copyright today covers virtually every sort of human expression - including blogposts, photographs, forum posts, scraps of software code, and government documents - it would be impossible to train today's leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

  • Pup Biru
    3 • 4 months ago

    you know how the neurons in our brain work, right?

    because if not, well, it's pretty similar… unless you say there's a soul (in which case we can't really have a conversation based on fact alone), we're just big ol' probability machines with tuned weights based on past experiences too
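
    a minimal sketch of what i mean by "tuned weights" - a single artificial neuron in python, with every number invented purely for illustration:

    # a single artificial neuron: a weighted sum of inputs pushed through a
    # squashing function. the weights are the "tuned" part - training nudges
    # them based on past examples. all numbers here are made up.
    import math

    def neuron(inputs, weights, bias):
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))  # sigmoid: squash to (0, 1)

    print(neuron([0.5, 0.1, 0.9], [0.8, -0.3, 0.4], bias=-0.2))  # ~0.63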

    • Phanatik
      5 • 4 months ago

      You are spitting out basic points and attempting to draw similarities because our brains are capable of something similar. The difference between what you've said and what LLMs do is that we have experiences from which we can glean a variety of information. An LLM sees text, and all it's designed to do is say "x is more likely to appear before y than z". If you fed it nonsense, it would regurgitate nonsense. If you feed it text from racist sites, it will regurgitate that same language, because that's all it has seen.
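
      To make that concrete, here is a toy sketch of that "x is more likely to appear before y" idea - a bigram model in Python, with a made-up corpus. It can only ever echo the statistics of whatever it was fed:

      # Toy bigram "language model": count which word follows which, then
      # predict the most frequent follower. It has no notion of truth -
      # feed it nonsense and nonsense is all it can give back.
      from collections import Counter, defaultdict

      def train(text):
          counts = defaultdict(Counter)
          words = text.split()
          for a, b in zip(words, words[1:]):
              counts[a][b] += 1
          return counts

      def next_word(counts, word):
          following = counts[word]
          return following.most_common(1)[0][0] if following else None

      model = train("the cat sat on the mat and the cat ate")
      print(next_word(model, "the"))  # 'cat', purely because it was most frequent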

      You'll read this and think "that's what humans do too, right?" Wrong. A human can be fed these things and still reject them. Someone else in this thread has made some good points regarding this, but I'll state them here as well. An LLM will tell you information, but it has no cognition of what it's telling you. It has no idea whether it's right or wrong; its job is to convince you that it's right, because that's the success state. If you tell it it's wrong, that's a failure state.

      The more you speak with it, the more failure states it accumulates and the more likely it is to cut off communication, because it's not reaching a success; it's not giving you what you want. The longer the conversation goes on, the crazier LLMs get as well, because it's too much to process at once, holding all that context in memory while trying to predict the next word. Our brains do this easily, and so much more. To claim an LLM is intelligent is incredibly misguided; it is merely the imitation of intelligence.
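
      As a crude sketch of that context limit (the window size here is arbitrary and far smaller than any real model's): the model can only attend to a fixed number of recent tokens, so earlier parts of a long conversation silently fall out of its "memory":

      # Crude sliding context window: keep only the most recent max_tokens
      # words of the conversation. Everything earlier is simply gone.
      def build_prompt(history, max_tokens=8):
          tokens = " ".join(history).split()
          return " ".join(tokens[-max_tokens:])

      history = ["you are wrong", "no i am right", "explain why", "the data says so"]
      print(build_prompt(history))  # the opening turns have already been dropped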

      • Pup Biru
        1 • edited • 4 months ago

        but that's just a matter of complexity, not a fundamental difference. the way our brains work and the way an artificial neural network works aren't that different; it's just that our brains are many orders of magnitude bigger

        there's no particular reason why we can't feed artificial neural networks an enormous amount of… let's say tangentially related experiential information… as well, but in order to be efficient and make them specialise in the things we want, we only feed them information that's directly related to the specialty we want them to perform
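
        as a tiny illustration of that "tune the weights on specialised data" idea (a single made-up weight and invented data, nothing like a real network's scale):

        # one weight, nudged by gradient descent until the output matches the
        # examples it is shown. the "specialised" data only ever demonstrates
        # doubling, so doubling is all the network can learn.
        def train_weight(examples, lr=0.1, steps=100):
            w = 0.0
            for _ in range(steps):
                for x, target in examples:
                    w -= lr * (w * x - target) * x  # gradient of squared error
            return w

        print(train_weight([(1, 2), (2, 4), (3, 6)]))  # converges to ~2.0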

        there's some… "pre-training" or "pre-existing state" that exists with humans too that comes from genetics, but i'd argue that's as relevant to the actual task of learning, comprehension, and creating as a BIOS is to running an operating system (that is, a necessary precondition to ensure the correct functioning of our body with our brain, but not actually what you'd call the main function)

        i'm also not claiming that an LLM is intelligent (or rather i'd prefer the term self-aware, because intelligent is pretty nebulous); just that its structure isn't that much different to our brains, only on a level so much smaller and so much more generic that you can't expect it to perform as well as a human - you wouldn't expect to cut out 99% of a human's brain and have them continue to function at the same level either

        i guess the core of what i'm getting at is that the self-awareness that humans have is definitely not present in an LLM; however, i don't think self-awareness is necessarily a prerequisite for most things that we call creativity. i think it's entirely possible for an artificial neural net that's fundamentally the same technology we use today to ingest the same data that a human would from birth, and to have very similar outcomes…

        given that belief (and i'm very aware that it certainly is just a belief - we aren't close to understanding our brains, but i don't fundamentally think there's anything other than neurons firing that results in the human condition), just because you simplify and specialise the input data doesn't mean that the process is different. you could argue that it's lesser, for sure, but to rule out that it can create a legitimately new work is definitely premature

    • @[email protected]
      2 • 4 months ago

      ā€œSoulā€ is the word we use for something we donā€™t scientifically understand yet. Unless you did discover how human brains work, in that case I congratulate you on your Nobel prize.

      You can abstract a complex concept so much that it becomes wrong. And abstracting how the brain works down to "it's a probability machine" is definitely a wrong description, especially when you want to use it as an argument for similarity to other probability machines.

      • Pup Biru
        1 • edited • 4 months ago

        "Soul" is the word we use for something we don't scientifically understand yet

        that's far from definitive. another definition is

        A part of humans regarded as immaterial, immortal, separable from the body at death

        but since we aren't arguing semantics, it doesn't really matter exactly; what's important to remember is that just because you have an experience, belief, or view doesn't make it the only truth

        of course i didn't categorically discover how the human brain works in its entirety; however, i'm sure most scientists would agree that the brain performs its functions by neurons firing. if you disagree with that statement, the burden of proof is on you. the part we don't understand is how it all connects up - the emergent behaviour. we understand the basics; that's not in question, yet that seems to be what you're questioning

        You can abstract a complex concept so much that it becomes wrong

        it's not abstracted; it's simplified… if what you're saying were true, then simplifying complex organisms down to a petri dish for research would be "abstracted" so much it "becomes wrong", which is categorically untrue… it's an incomplete picture, but that doesn't make it either wrong or abstract

        *edit: sorry, it was another comment where i specifically said belief; the comment you replied to didn't state that, however most of this still applies regardless

        i laid out an a-leads-to-b-leads-to-c argument and stated that it's simply a belief; however, it's a belief that's based in logic and simplified concepts. if you want to disagree, that's fine, but don't act like you have some "evidence" or "proof" to back up your claims… all we're talking about here is belief, because we simply don't know - neither you nor i

        and given that all of this is based on belief rather than proof, the only thing that matters is what we as individuals believe about the input and output data (because the bit in the middle has no definitive proof either way)

        if a human consumes media and writes something and it looks different, that's not a violation

        if a machine consumes media and writes something and it looks different, you're arguing that it is a violation

        the only difference here is your belief that a human brain somehow has something "more" than a probabilistic model going on… but again, that's far from certain