deleted by creator
That’s specifically LLMs. Image recognition like OP’s example has nothing to do with language processing. Then there’s generative AI, which needs some kind of mapping between prompts and learned weights, but that’s also a completely different type of “AI”.
That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs with AI as a whole.
deleted by creator
Neural networks aren’t going anywhere, because they can be genuinely useful, just not for solving every problem.
deleted by creator
You should watch actual AI safety researchers’ thoughts on this. Here’s the link. It’s partially overhyped, but huge strides have been made in this area, and it shouldn’t be taken lightly. It’s better to be overly careful than ignorant.
Removed by mod
Are you always this angry? Doesn’t it get exhausting?
Removed by mod
And that somehow means we shouldn’t do OCR anymore, or image classification, or text to speech, or speech to text, or anomaly detection, or…?
Neural networks are really good at pattern recognition, e.g. finding manufacturing defects in expensive products. Why throw all of this away?
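To make that concrete, here’s a rough sketch of the kind of defect detector I mean: a tiny neural net classifying parts as good or defective from two measurements. Everything here is invented for illustration (synthetic data, made-up operating points); a real system would train on labeled images or sensor logs.

```python
# Toy sketch of neural-net pattern recognition: classify parts as good
# or defective from two made-up measurements. All numbers are synthetic
# and chosen purely for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Good parts cluster around a nominal operating point (1.0, 5.0);
# defective parts drift away from it with more variance.
good = rng.normal(loc=[1.0, 5.0], scale=0.1, size=(500, 2))
bad = rng.normal(loc=[1.3, 4.5], scale=0.2, size=(500, 2))
X = np.vstack([good, bad])
y = np.array([0] * 500 + [1] * 500)  # 0 = good, 1 = defective

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multilayer perceptron is enough to learn the boundary.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The network just learns a decision boundary over the measurements. That’s pattern recognition, and it’s useful regardless of whether anything “thinks.”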
Exactly. LLMs are just a Chinese room
Your brain is also “just a Chinese room”. It’s just physics, chemistry, and biology. There is no magic inside your brain. If a “Chinese room” is fast enough and can fool everyone into “believing” that it’s fluent in Chinese, then the room speaks Chinese.
The problem here is that intelligence is a beetle, in the sense of Wittgenstein’s beetle in a box: we each point to something private when we use the word, and nobody can look inside anyone else’s box to compare.
This fails to engage with the thought experiment. The question isn’t whether “the room is fluent in Chinese.” It’s whether the machine learning model is actually comparable to the person in the room, executing program instructions to turn input into output without ever understanding anything about the input or output.
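You can caricature the room in a few lines of code, which makes the structural point clear. The rulebook below is a made-up toy, obviously nothing like a real model:

```python
# A minimal caricature of the Chinese room: match input symbols to
# output symbols by rule. No step below involves meaning; the program
# only manipulates the shapes of the strings.
RULEBOOK = {
    "你好": "你好！",             # "hello" -> "hello!"
    "你会说中文吗？": "当然会。",  # "do you speak Chinese?" -> "of course"
}

def room(symbols: str) -> str:
    """Follow the instructions: look up the shape of the input,
    copy out the shape of the output."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again"

print(room("你会说中文吗？"))  # -> 当然会。
```

Nothing in that program knows what the strings mean, yet from outside it “answers” in Chinese. The open question is whether a trained model is relevantly like that lookup, or relevantly unlike it.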
The same is true for your brain. Show me the neurons that are fluent in Chinese. Of course the LLM is just executing code. And if we ever build AGI, it will also just be “executing code”, but so does your brain. It’s not exactly code (and maybe AGI will run on analog computers, so not exactly code either), but the laws of physics dictate what your brain does. The laws of physics don’t understand Chinese, the atoms and molecules don’t understand Chinese. “Understanding Chinese” is an emergent property.
Think about it this way: assume every person you know (except you) is just some form of Chinese room. First of all, you couldn’t prove that, and second, it wouldn’t matter at all.
We aren’t trying to establish that neurons are conscious. The thought experiment presupposes that there is a consciousness, something capable of understanding, in the room. But there is no understanding because of the circumstances of the room. This demonstrates that the appearance of understanding cannot confirm the presence of understanding. The thought experiment can’t be formulated without a prior concept of what it means for a human consciousness to understand something, so I’m not sure it makes sense to say a human mind “is a Chinese room.” Anyway, the fact that a human mind can understand anything is established by completely different lines of thought.
How can you know the system has no cognitive capability? We haven’t solved that problem for our own minds; we have no definition of what consciousness is. For all we know, we might be multimodal LLMs ourselves.
deleted by creator
Language processing is a cognitive capability. You’re just saying it’s not AI because it isn’t as smart as HAL 9000 and Cortana. You’re getting your understanding of computer science from movies and video games.
deleted by creator