Well, this escalated quickly. So is this the end, or will the mods create an OpenAI megathread? ;)

  • cwagner@beehaw.org (OP) · 15 points · 1 year ago

    I’d say this is an amazing result for MS. Not only is their investment mostly Azure credits, which keeps OpenAI dependent on MS, but now they’ve also got Altman and his followers to themselves for more research.

    • blaine@kbin.social · 10 points · 1 year ago

      Good for MS, bad for humanity. I believe one of the worst possible timelines for the average human is one where for-profit capitalist entities control access to AGI and hoard all the benefits that accrue from it. OpenAI was founded specifically to avoid this, with a complicated governance structure designed to ensure true AGI would end up owned by humanity, with the benefits shared by all.

      The OpenAI board and top researchers made a desperate bid to prioritize safety over profits, and even with that elaborate governance structure behind them capitalism still seems to have found a way to fuck us.

      Today we saw Satya Nadella and Sam Altman steer humanity further from a possible utopia and closer to… Cyberpunk 2077.

      Good luck everyone!

      • u_tamtam@programming.dev · 1 point · 1 year ago

        Don’t be too worried about AGI being a thing in the short term. The only thing I find to suck with respect to consolidation is that contemporary AI requires a lot of hardware thrown at it, while the cloud services providing that hardware on demand are practically a triopoly. That sucks if you want to be the next AI startup. But academia is mostly unaffected, and far from lagging behind (multiple open-source LLMs are compelling alternatives to ChatGPT, and not benefitting from OpenAI’s millions in marketing and hype doesn’t make them less valuable).

        • blaine@kbin.social · 1 point · edited · 1 year ago

          Fair enough, but the short-term track we’re on still leads to a hellish dystopia. For the societal damage I’m worried about to happen, we don’t really need AGI as you are probably defining it. If we use OpenAI’s definition of AGI, “systems that surpass human capabilities in a majority of economically valuable tasks”, I’d argue that the technology we have today is practically there already. The only thing holding back the dystopia is that corporate America hasn’t fully adapted to the new paradigm.

          • Imagine a future where most fast food jobs have been replaced by AI-powered kiosks and drive-thrus.

          • Imagine a future where most customer service jobs have been replaced by AI-powered video chat kiosks.

          • Imagine a future where most artistic commission work is completed by algorithms.

          • Imagine a future where all the news and advertising you read or watch is generated specifically to appeal to you by algorithms.

          In this future, are the benefits of this technology shared equitably, so that the people who used to do these jobs can enjoy their newfound leisure time? Or will those folks live in poverty while the bulk of their former incomes is siphoned off to the small fraction of the populace who are MS investors?

          I think we all know the answer to that one.

            • u_tamtam@programming.dev · 2 points · 1 year ago

            To help you out with the monopolistic/capitalist concern: https://simonwillison.net/2023/May/4/no-moat/
            tl;dr: OpenAI’s edge with ChatGPT is essentially minor (according to people on the inside), and the approach of building ever larger, inflexible models is being challenged by smaller, more agile models that are technologically more accessible and available.

            Imagine a future where most fast food jobs have been replaced by AI-powered kiosks and drive-thrus.

            Funny you bring this one up :)
            https://marshallbrain.com/manna1

            Imagine a future where most customer service jobs have been replaced by AI-powered video chat kiosks. Imagine a future where most artistic commission work is completed by algorithms.

            To a large extent, we have been there for a long time:
            https://www.youtube.com/watch?v=7Pq-S557XQU

            This, and the theory of bullshit jobs (https://strikemag.org/bullshit-jobs/), were formative reads for me.

            The end-game is pretty clear: we have reached the limits of the model on which our current society is built (working jobs to earn money to spend money to live). We now have an excess supply of the essential goods needed to sustain life and a scarcity of jobs at the same time. We will soon have to either accept that working isn’t a means to an end anymore (accept universal basic income and state interventionism), or enter a neofeudal era where resources are so consolidated that the illusion of scarcity can be maintained to justify the current system (which is essentially what bullshit jobs are all about).

            It’s perhaps the most important societal reform our species will know, and nobody’s preparing for it :)

            Imagine a future where all the news and advertising you read or watch is generated specifically to appeal to you by algorithms.

            This is already the case today:
            https://en.wikipedia.org/wiki/Filter_bubble

            And this is already weaponized (e.g. TikTok’s algorithm trying to steer the youth towards education and science in China and towards … something completely different in the rest of the world).

              • blaine@kbin.social · 1 point · edited · 1 year ago

              @u_tamtam

              It doesn’t really matter whether Microsoft/OpenAI are the only ones with the underlying technology, as long as the only economically feasible way to deploy the tech at scale is to rely on one of the big 3 cloud providers (Amazon, Google, Microsoft). The profits still accrue to them whether we use a larger, inflexible model or a smaller, flexible one to power the AI: the most effective/common/economical way for businesses to leverage it will be as an AWS service or something similar.

              Are you saying you’re cool with neofeudalism? Or just agreeing that this is yet another inevitable (albeit lamentable) step towards it?

                • u_tamtam@programming.dev · 1 point · 1 year ago

                It doesn’t really matter whether Microsoft/OpenAI are the only ones with the underlying technology, as long as the only economically feasible way to deploy the tech at scale is to rely on one of the big 3 cloud providers (Amazon, Google, Microsoft).

                Yup, but as the “no moat” link I posted implied, at least for LLMs you might not need to spend very much on hardware to get almost as good as ChatGPT, so that’s some good news.

                Are you saying you’re cool with neofeudalism? Or just agreeing that this is yet another inevitable (albeit lamentable) step towards it?

                Oh, crap, no, sorry if I wasn’t clear. I believe we are at a crossroads, with not much middle ground between our society evolving into extensive interventionism, taxation, and wealth redistribution (to support UBI and other schemes for the increasingly large unemployable population) and neofeudalism. I don’t want billionaires and warlords to run the place, obviously. And I’m wary about how that redistribution would go with our current politicians and the dominant mindset that associates individual merit with wealth and individualistic entrepreneurship.