• millie@beehaw.org · 60 points · 5 months ago (edited)

    I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.

    But what seems much more likely, given what we’ve seen already, is corporations pushing AI that they know isn’t really capable of what they say it is and everyone going along with it because of money and technological ignorance.

    You can already see the warning signs. Cars that run pedestrians over, search engines that tell people to eat glue, customer support AIs that have no idea what they're talking about, endless fake reviews and articles. It's already hurt people, but so far only on a small scale.

    But the profitability of pushing AI early, especially if you're just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.

    That’s what’s scary about it. It isn’t AI itself, it’s AI as a vector for corporate recklessness.

    • Melody Fwygon@beehaw.org · 12 points · 5 months ago

      It isn’t AI itself, it’s AI as a vector for corporate recklessness.

      This. 1000% this. Many of Isaac Asimov's novels warned about this sort of thing too, as did any number of novels inspired by him.

      It's not that we didn't provide the AI with rules. It's not that the AI isn't trying not to harm people. It's that humans, being the clever little things we are, are far more adept at deceiving and tricking AI into saying things, then using that output to justify actions that benefit us.

      …Understandably, this is how it's being done: by selling AI that isn't as intelligent as it's trumpeted to be. As long as these corporate shysters can organize a team to crap out a "Minimum Viable Product", they're hailed as miracle workers and get paid fucking millions.

      Ideally all of this would violate the many, many laws of many, many civilized nations… but they've done some black magic with that too: misusing their influence to attack and weaken the laws and institutions that could hold them liable, and even to rip out or neuter entirely the laws that could hold them accountable.

    • 0x815@feddit.de · 7 points · 5 months ago

      Yes. We need human responsibility for everything AI does. It's not the technology that harms, but the human beings behind it and those who profit from it.

    • localhost@beehaw.org · 7 points · 5 months ago

      I don’t think your assumption holds. Corporations are not, as a rule, incompetent - in fact, they tend to be really competent at squeezing profit out of anything. They are misaligned, which is much more dangerous.

      I think the more likely scenario is also more grim:

      AI actually does continue to advance and gets better and better, displacing more and more jobs. It doesn't happen instantly, so barely anything gets done about it. Some half-assed regulations are attempted but predictably end up either not doing anything, postponing the inevitable by a small amount of time, or causing more damage than doing nothing would. Corporations grow in power, build their own autonomous armies, and exert pressure on governments to leave them unregulated. Eventually all resources are managed by and for a few rich assholes, while the rest of the world tries to survive without angering them.
      If we’re unlucky, some of those corporations end up being managed by a maximizer AGI with no human supervision and then the Earth pretty much becomes an abstract game with a scoreboard, where money (or whatever is the equivalent) is the score.

      The limitations of the human body act as an important balancing factor in keeping democracies from collapsing. No human can rule a nation alone - they need armies and workers. Intellectual work is especially important (unless you have some other source of income to outsource it), but it requires good living conditions to develop and sustain. Once intellectual work is automated, infrastructure like schools, roads, hospitals, and housing ceases to be important to the rulers - they can give those to the army as a reward and make the rest of the population do manual work. Then, if manual work and policing through force become automated, there is no need even for those slivers of decency.
      Once a single human can rule a nation, there are enough rich psychopaths for one of them to attempt it.

      There are also other AI-related pitfalls that humanity may fall into in the meantime - automated terrorism (e.g. swarms of autonomous small drones with explosive charges using face recognition to target entire ideologies by tracking social media), misaligned AGI going rogue (e.g. the famous paperclip maximizer, although probably not exactly this scenario), collapse of the internet due to propaganda bots using next-gen generative AI… I’m sure there’s more.

      • Juice@midwest.social · 4 points, 1 downvote · 5 months ago

        AI doesn't get better. It's completely dependent on computing power. They are dumping all the power into it they can, and it sucks ass. The larger the dataset, the more power it takes to search it all. Your imagination is infinite; computing power is not. You can't keep throwing electricity at a problem.

        It was pushed out because there was a bunch of excess computing power after crypto crashed, or semi-stabilized. It's an excuse to lay off a bunch of workers after covid who were gonna get laid off anyway. Managers were like, sweet, I'll trim some excess employees and replace them with AI! Wrong. It's a grift. It might hang on for a while, but policy experts are already looking at the amount of resources being thrown at it and getting wary.

        The technological ignorance you are responding to, that's you. You don't know how the economy works and you don't know how AI works, so you're just believing all this Roko's basilisk nonsense out of an overactive imagination. It's not an insult; lots of people are falling for it. AI companies are straight up lying, and the media is stretching the truth to the point of breaking. But I'm telling you, don't be a sucker. Until there's a breakthrough that fixes the resource consumption issue by, like, orders of magnitude, I wouldn't worry too much about Ellison's AM becoming a reality.

        • localhost@beehaw.org · 3 points · 5 months ago

          Your opening sentence is demonstrably false. GPT-2 was a shitpost generator, while GPT-4's output is hard to distinguish from a genuine human's. DALL-E 3 is better than its predecessors at pretty much everything. Yes, generative AI right now is getting better mostly by feeding it more training data and making it bigger. But it keeps getting better, and there's no cutoff in sight.
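          For a rough sense of what "keeps getting better with scale" looks like, here's a minimal sketch of the empirical power-law fit from Kaplan et al. (2020), loss ≈ (N_c/N)^α. The constants are the paper's fitted values, used loosely for illustration rather than as a forecast about any particular model:

          ```python
          # Empirical scaling law: test loss as a power law in parameter
          # count N, L(N) = (N_c / N) ** alpha (Kaplan et al., 2020).
          # Constants are the paper's fits; the loop is illustrative.
          N_C = 8.8e13    # fitted constant, in parameters
          ALPHA = 0.076   # fitted exponent

          def predicted_loss(n_params: float) -> float:
              return (N_C / n_params) ** ALPHA

          for n in (1.5e9, 175e9, 1.75e12):  # GPT-2-ish, GPT-3-ish, 10x that
              print(f"{n:.1e} params -> predicted loss ~{predicted_loss(n):.2f}")
          ```

          The curve flattens but never hits a wall, which is what "no cutoff in sight" means in practice.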

          That you can straight-up comment "AI doesn't get better" in a tech-literate sub and not be called out is honestly staggering.

          • Ilandar · 3 points · 5 months ago

            That you can straight-up comment "AI doesn't get better" in a tech-literate sub and not be called out is honestly staggering.

            I actually don’t think it is because, as I alluded to in another comment in this thread, so many people are still completely in the dark on generative AI - even in general technology-themed areas of the internet. Their only understanding of it comes from reading the comments of morons (because none of these people ever actually read the linked article) who regurgitate the same old “big tech is only about hype, techbros are all charlatans from the capitalist elite” lines for karma/retweets/likes without ever actually taking the time to hear what people working within the field (i.e. experts) are saying. People underestimate the capabilities of AI because it fits their political world view, and in doing so are sitting ducks when it comes to the very real threats it poses.

          • Juice@midwest.social · 1 point · 5 months ago

            The difference between GPT-3 and GPT-4 is the number of parameters, i.e. processing power. I don't know what the difference between 2 and 4 is; maybe there were some algorithmic improvements. At this point, I don't know what algorithmic improvements are going to net efficiencies in the "orders of magnitude" that would be necessary to yield the kind of results needed to see noticeable improvement in the technology. Like, the difference between 3 and 4 is millions of parameters vs billions of parameters. Is a GPT-5 going to have trillions of parameters? No.

            Tech literate people are apparently just as susceptible to this grift, maybe more susceptible from what little I understand about behavioral economics. You can poke holes in my argument all you want, this isn’t a research paper.
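            For what it's worth, the parameters-to-compute relationship above can be roughed out with the widely cited C ≈ 6·N·D approximation for transformer training FLOPs (N parameters, D training tokens). The token counts below are assumptions for illustration, not disclosed figures:

            ```python
            # Back-of-the-envelope training compute via C ~ 6 * N * D
            # (common approximation; all inputs here are illustrative).
            def train_flops(n_params: float, n_tokens: float) -> float:
                return 6 * n_params * n_tokens

            configs = [
                ("GPT-2-scale: ~1.5B params, ~10B tokens (assumed)", 1.5e9, 1e10),
                ("GPT-3-scale: ~175B params, ~300B tokens", 175e9, 3e11),
                ("hypothetical: ~1.75T params, ~3T tokens", 1.75e12, 3e12),
            ]
            for label, n, d in configs:
                print(f"{label}: ~{train_flops(n, d):.1e} FLOPs")
            ```

            Scaling both parameters and data by 10x costs roughly 100x the compute, which is the resource problem in a nutshell.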

        • verdare [he/him]@beehaw.org · 3 points · 5 months ago

          I find it rather disingenuous to summarize the previous poster's comment as a "Roko's basilisk" scenario, intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about actual threats (some more plausible than others, IMO).

          I also find it interesting that you so confidently state that “AI doesn’t get better,” under the assumption that our current deep learning architectures are the only way to build AI systems.

          I’m going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn’t halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is to me equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren’t magic; they’re incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate “general intelligence,” whether it’s through new algorithms, new computing architectures, or even synthetic biology.

          • Juice@midwest.social · 3 points · 5 months ago (edited)

            I wasn't debating you. I have debates all day with people who actually know what they're talking about; I don't come to the internet for that. I was just looking out for you, and anyone else who might fall for this. There is a hard physical limit. I'm not saying the things you're describing are technically impossible, I'm saying they are technically impossible with this version of the tech. Slapping a predictive text generator on a giant database is too expensive, and it doesn't work. It's not a debate, it's science. And not the fake shit run by corporate interests, the real thing based on math.

            There’s gonna be a heatwave this week in the Western US, and there are almost constant deadly heatwaves in many parts of the world from burning fossil fuels. But we can’t stop producing electricity to run these scam machines because someone might lose money.

    • Ilandar · 5 points · 5 months ago

      Yes, it’s very concerning and frustrating that more people don’t understand the risks posed by AI. It’s not about AI becoming sentient and destroying humanity, it’s about humanity using AI to destroy itself. I think this fundamental misunderstanding of the problem is the reason why you get so many of these dismissive “AI is just techbro hype” comments. So many people are genuinely clueless about a) how manipulative this technology already is and b) the rate at which it is advancing.

    • coffeetest@beehaw.org · 5 points · 5 months ago

      Calling LLMs "AI" is one of the most genius marketing moves I have ever seen. It's also the reason for the problems you mention.

      I am guessing that a lot of people are just thinking, "Well, AI is just not that smart… yet! It will learn more and get smarter and then, ah ha! Skynet!" It is a fundamental misunderstanding of what LLMs are doing. It may be a partial emulation of intelligence. Like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it doesn't have any idea what it is saying actually means.
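      The "guessing what an answer would look like" part can be shown with a toy example: a bigram model that picks the next word purely from counts in its training text. Real LLMs are vastly larger neural networks, but the objective is the same flavor of next-token prediction, with no model of meaning (the corpus and code here are made up for illustration):

      ```python
      # Toy next-word predictor: chooses a word based only on how often
      # it followed the previous word in the training text. It produces
      # plausible-looking output with zero notion of what the words mean.
      import random
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat ate the fish".split()

      # Count how often each word follows each other word.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def next_word(prev: str) -> str:
          counts = following[prev]
          if not counts:                 # dead end: restart the chain
              return corpus[0]
          words = list(counts)
          return random.choices(words, weights=[counts[w] for w in words])[0]

      word = "the"
      print(word, end=" ")
      for _ in range(6):
          word = next_word(word)
          print(word, end=" ")
      ```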