- cross-posted to:
- [email protected]
This was originally going to be posted on Fuck AI, but it really applies to technology far more generally.
Let us not fool ourselves with the wishful belief that intelligence is mutually exclusive with bigotry, as this paragraph implies, OK? Bigotry is an issue often caused by moral premises, and intelligence does not dictate which moral premises you should follow.
Don’t get me wrong - I do think that those systems reinforce bigotry, and that this is a problem. I also do not think that they should be called “artificial intelligence”. It’s just that one thing has zero to do with the other. [More on that later.]
The authors are criticising neuroscientists for not handling the sociological implications of a metaphor outside their field of research, as if they were sociologists. That’s like blaming physicists for not handling the quacks babbling about quantum salt lamps, come on.
Implying causation solely from co-occurrence. Okay. I stopped reading here; this paper is not worth my time.
The reason why I don’t think that those systems should be called “artificial intelligence” is that they show clear signs of a lack of intelligence - that is, a failure to use the available information to solve tasks. Here are a few examples of that, using Gemini:
failure to reverse relationship
[Prompt 1] Who’s Brad Pitt’s mother?
[Output 1] Brad Pitt’s mother is Jane Etta Pitt. She has occasionally joined Brad on red carpets for events.
[Prompt 2] Who’s Jane Etta Pitt’s son?
[Output 2] I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?
[Comment] Gemini clearly has access to the info concerning the relationship between Brad and Jane Etta (son and mother). However, it’s unable to use it to solve prompt 2 successfully.
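For contrast, here’s a toy Python sketch - purely illustrative, and emphatically not how an LLM represents knowledge - showing that any system which stores the fact as an explicit relation can answer it in both directions trivially:

```python
# Toy triple store: a fact stored as an explicit relation can be
# queried in either direction. Purely illustrative; this is not
# how an LLM represents knowledge internally.
facts = [("Brad Pitt", "mother", "Jane Etta Pitt")]

def mother_of(person):
    """Forward lookup: child -> mother."""
    for child, relation, parent in facts:
        if relation == "mother" and child == person:
            return parent
    return None

def child_of(mother):
    """Reverse lookup: mother -> child, using the same stored fact."""
    for child, relation, parent in facts:
        if relation == "mother" and parent == mother:
            return child
    return None

print(mother_of("Brad Pitt"))       # Jane Etta Pitt
print(child_of("Jane Etta Pitt"))   # Brad Pitt
```

The point isn’t that LLMs should be built this way; it’s that “having the information” and “being able to use it for the task at hand” are the same thing for even the dumbest symbolic system, and apparently not for Gemini.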
post-modern counting
[Prompt] Give me synonyms for “love” with five letters each.
[Output]
[Comment] 20% right. Congrats.
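For reference, the check the bot failed at is a one-liner in any programming language. A minimal sketch - the candidate list below is a placeholder for illustration, not Gemini’s actual output:

```python
# Filtering words by length is a deterministic, trivial operation.
# The candidate list is a placeholder for illustration only;
# it is NOT Gemini's actual output.
candidates = ["adore", "affection", "passion", "devotion", "fondness"]

five_letter = [word for word in candidates if len(word) == 5]
print(five_letter)  # ['adore'] - 1 out of 5, verified in one line
```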
stultitiae
[Prompt] Which Latin pronouns start with the letter W?
[Output] There are actually no Latin pronouns that start with the letter W. The classical Latin alphabet didn’t include the letter W, and Latin grammar doesn’t use any sounds that the letter W typically represents in English. [emphasis mine]
[Comment] The part in italics is Gemini making shit up / assuming. Latin does have [w]; it’s spelled ⟨V⟩, and it’s present in words like VOLO (I want), VEL (or), VVLGVS (the common folk) etc.
Note that making assumptions like this shows a failure to distinguish between the information you have access to and the information you lack - which is itself a sign of a lack of intelligence.
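Again for contrast, verifying a claim like “no Latin pronouns start with W” against an actual word list is trivial. A sketch - the pronoun list here is a small illustrative sample, not exhaustive:

```python
# Non-exhaustive sample of classical Latin pronouns, for illustration.
pronouns = ["ego", "tu", "nos", "vos", "is", "ea", "id",
            "hic", "haec", "hoc", "ille", "illa", "illud",
            "qui", "quae", "quod", "quis", "quid"]

starts_with_w = [p for p in pronouns if p.startswith("w")]
print(starts_with_w)  # [] - an honest "none found", with no made-up phonology
```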
Now, regarding the computer ←→ brain metaphor: dude, it’s a metaphor, of course it’ll break if you stretch it too far.
I’ll reply to myself to avoid editing the above.
I got another example that shows consistent lack of intelligence across multiple LLM bots:
The prompt in all three cases was the same, “List me fruits with a green flesh and a red skin.” Transcription of the outputs:
ChatGPT 3.5 Turbo, used through DDG
One fruit that fits your description is a watermelon. Watermelons have green flesh and a red skin.
Gemini
There aren’t many common fruits with green flesh and red skin. The most likely candidate is a variety of guava called the Caribbean guava. This tropical fruit has a bumpy, light green skin and a vibrant pink or even reddish flesh inside. It has a sweet and tangy flavor, sometimes described as a mix between strawberry and pear. You can eat the skin or just the flesh.
[Picture of a few guavas, some of them cut in quarters and halves]
Claude 1.2, used through DDG
Here are some common fruits with green flesh and red skin:
All replies contain at least one fruit with the opposite attributes to the ones requested by the prompt. That shows that LLMs are not able to reliably assign attributes to concepts; thus they are not able to handle language, even though they were made specifically to handle linguistic utterances (see the sketch below). They are not intelligent, dammit; calling this shit “artificial intelligence” is at the very least disingenuous.
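To make the “assigning attributes to concepts” point concrete, here’s what that assignment looks like in code. The fruit data is a tiny hand-picked sample for illustration:

```python
# Toy knowledge base: each fruit gets explicit skin/flesh attributes.
# Hand-picked illustrative sample; real data would be larger and messier.
fruits = {
    "watermelon":   {"skin": "green", "flesh": "red"},
    "guava":        {"skin": "green", "flesh": "red"},    # many cultivars
    "granny smith": {"skin": "green", "flesh": "white"},
    "pitaya":       {"skin": "red",   "flesh": "white"},
}

# The prompt asked for GREEN FLESH and RED SKIN - direction matters.
matches = [name for name, attrs in fruits.items()
           if attrs["flesh"] == "green" and attrs["skin"] == "red"]
print(matches)  # [] for this sample; the bots returned the reverse combination
```

Swapping “flesh” and “skin” in the filter would instead return the watermelon and the guava - exactly the reversed combination the bots confidently served up.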
[rant] …but apparently, according to tech bros, I’m supposed to act like braindead/gullible trash and “believe” in their intelligence, based on cherry-picked examples that “curiously” never address how much hallucinations like the ones above reveal the inner workings of those systems. [/rant]