It seems, though, that in the long run the line between a human reading Shakespeare and coming up with their own version and a computer doing the same will grow thinner and thinner. After all, we are really just biological computers. One could imagine a computer “thinking” of things the same “way” that we do. What then?
One could imagine a computer “thinking” of things the same “way” that we do.
One can imagine it, but that’s been the impossible nut to crack ever since the first computers. People have been saying that artificial intelligence (what we now prefer to call AGI) is 5 years away since the 1970s, if not earlier.
The new generative systems seem intelligent, but they’re just really good at predicting the next word. There’s no consciousness there. As good as LLMs are, they can’t plan for the future. They don’t have goals.
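(To make “predicting the next word” concrete, here’s a toy sketch in Python. It uses a hypothetical bigram table, nothing like a real transformer, but the final step is the same idea: turn the context into a probability for every possible next token and pick one.)

```python
# A minimal sketch of what "predicting the next word" means.
# The bigram counts below are made up for illustration; a real LLM
# learns billions of parameters over subword tokens, but the output
# step is the same kind of thing: a probability for each next token.

bigram_counts = {
    "to": {"be": 5, "the": 3, "go": 1},
    "be": {"or": 4, "not": 2},
    "or": {"not": 6},
    "not": {"to": 5},
}

def predict_next(word: str) -> str:
    """Return the most likely next word given the previous one."""
    followers = bigram_counts.get(word, {})
    if not followers:
        return "<end>"
    total = sum(followers.values())
    # Turn counts into probabilities, then take the argmax (greedy decoding).
    probs = {w: c / total for w, c in followers.items()}
    return max(probs, key=probs.get)

word = "to"
out = [word]
for _ in range(5):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))  # -> to be or not to be
```

A real LLM does this over tens of thousands of tokens of context instead of one word, but there’s still no goal in the loop beyond “emit a likely next token”.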
The only interesting twist here is that consciousness / free will might not really exist, at least not in the form most people think of it. So, maybe LLMs are closer to being “thinking” computers not because they’re getting closer to consciousness / free will, but because we’re starting to realize free will was an illusion all along.
That’s what I mean. We elevate the human thought process as if what we come up with is more valid than what a (future) computer could think up. But is it?
So if a computer synthesizing Shakespeare is stealing, maybe so is a human doing it. But then maybe we could never create anything at all. And if we must not be blocked from creating, must a machine be?
So if a computer synthesizing Shakespeare is stealing
Copyright infringement is never stealing. But, as to whether it’s infringing copyright, the difference is that current laws were designed around human capabilities. If memorizing hundreds of books word for word were a typical human ability, copyright would probably look very different. Instead, normal humans are only capable of memorizing short passages, but they’re capable of spotting patterns, understanding rhythms, and so on.
The human brain contains something like 100 billion neurons, and many of them are dedicated to things like hearing, seeing, eating, walking, sex, etc. Only a tiny fraction are available for a task like learning to write like Shakespeare or Stephen King. GPT-4 reportedly contains about 2 trillion parameters (the figure is unconfirmed), and every one of them is dedicated to “writing”. So, we have to think differently about whether what it’s storing is “fair” when it comes to infringing someone’s copyright.
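(Taking those figures at face value — and keeping in mind the GPT-4 number is only a rumor, and neurons aren’t really comparable units to parameters — the back-of-envelope ratio looks like this:)

```python
# Back-of-envelope only: neuron counts and parameter counts are not
# equivalent units, and the GPT-4 figure is an unconfirmed rumor.
human_neurons = 100e9   # ~100 billion neurons, most busy with seeing, walking, etc.
gpt4_params = 2e12      # rumored GPT-4 parameter count

print(gpt4_params / human_neurons)  # -> 20.0
# And that's before discounting the brain down to the tiny fraction
# actually free for a task like "write like Stephen King".
```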
Personally, I think copyright is currently more harmful than helpful, so I like that LLMs are challenging the system. OTOH, I can understand how it’s upsetting for an artist or a writer to see that SALAMI can reproduce their stuff almost exactly, or produce something in their style so well that it effectively makes them obsolete.
You, sir, are a Turing machine.