- cross-posted to:
- [email protected]
- [email protected]
Given that Alex Jones has “interviewed” ChatGPT on air twice now, I’m going to say no.
I mean, Alex Jones has more skin in the grift than most conspiracy theorists, so he’s not likely to do a 180 quickly, if at all. Also, it seems like he’s been drunk more often on the latest episodes, so maybe he’s having an existential crisis started by being fact-checked in real time by a robot.
We can’t know what his internal state is, but I do agree that it does not seem to have slowed his pace at all on the surface.
Most of the conspiracy theories I’ve heard in the past year or so involve AI in some way.
Yesterday a friend and I were talking and he said the government was using AI to hack his brain.
I don’t think a chat bot is going to help that situation.
If the AI wanted to talk me out of conspiracy theories, why wouldn’t it just use the brain signals to control us into thinking that way? Did the microwaves from the circuits behind the walls all go out of service all of a sudden?
This is just classic silicon valley trying to “innovate”, when their real plan was to muscle out CIA and FBI work to non-union contractors.
No, AI can’t, because no one believes a word they say. There are so many guardrails in place that speaking to an AI chatbot feels like talking to corporate HR.
Yeah, I feel like trusting AI is going to lead people down dangerously convincing rabbit holes.
Pretty funny to posit that an LLM chatbot ought to talk us out of conspiratorial thinking while running on a corporate GPU farm absolutely BLASTING through electricity and copyright and IP violations because it’s legally convenient for the powerful. Please post more thought-provoking unreasonable propaganda.
Huh, that’s funny, because I run a local LLM even on my laptop.
And fuck yes, I love IP violations. Makes me want to go pirate some media and draw fan art.
Please post some more ignorant rage.
It’s wild how some people’s blind hate of gen AI has got them thinking “corporate control of culture is good actually”.
Have you trained that LLM?
Why would I want to?
Because if you did not, then it doesn’t matter if you run it locally.
Uh yes it does.
I’ve let the corporations spend the time, money, and resources to train a model.
They get zero benefit when I run it locally. I get all the benefit.
The point I’m trying to make, going back to your first response to CondensedPossum, is that you’re still running a corporate LLM with its biases.
I guess this is all part of the social-science side of chatbots and something to keep an eye on, and folks have to start somewhere… but I feel the technology isn’t really at the point where teaching people in general with a chatbot is an ideal solution.
AI is a conspiracy theory—companies are just hiring people in lower-income countries to impersonate machines!
(/s, of course, but with just enough truth to it that there’s probably someone somewhere out there who thinks the above statement is plausible.)
Probably not, given our loved ones often can’t.
deleted by creator
deleted by creator
This is the first time in a long time I’ve heard of a use case for AI that is genuinely useful
It’s a job very few people will want to do, and it can do it as well as, if not better than, a human.
I wish them luck.