1. Post in [email protected] attacks the entire concept of AI safety as a made-up boogeyman
  2. I disagree and am attacked from all sides for “posting like an evangelist”
  3. I give citations for things I thought would be obvious, such as that AI technology in general has been improving in capability compared to several years ago
  4. Instance ban, “promptfondling evangelist”

This one I’m not as aggrieved about; it’s just weird. It’s reminiscent of the lemmy.ml type of echo chamber where everyone’s convinced it’s one way because, in a self-fulfilling prophecy, anyone who isn’t convinced gets yelled at and banned.

Full context: https://ponder.cat/post/1030285 (Some of my replies were after the ban because I didn’t PT Barnum carefully enough, so didn’t realize.)

  • PhilipTheBucket@ponder.cat (OP) · 3 days ago

    I’m not saying that any of what you just said is not true. I’m saying that all of that can be true, and AI can still be dangerous.

    • Skiluros@sh.itjust.works · 3 days ago

      That’s not what we are discussing, though. We are discussing whether awful.systems was right or wrong in banning you. Below is the title of your post:

      Instance banned from awful.systems for debating the groupthink

      I will note that I don’t think they should be this casual with handing out bans. A warning to start with would have been fine.

      An argument can be made that you went into awful.systems with your own brand of groupthink: specifically, a complete rejection of even the possibility that we are dealing with bad-faith actors. Whether you like it or not, this is relevant to any discussion of “AI safety” more broadly, and to that thread specifically (since the focus of the linked article was on Apollo Research and Anthropic, and on AI doomerism as a grifting strategy).

      You then go on to cite a YT video by “Robert Miles AI Safety”; this is a red flag. You also claim that you can’t (or don’t want to) provide a brief explanation of your argument, and defer to the YT video instead. This is another red flag. It is reasonable to provide a 2-3 sentence overview if you actually have some knowledge of the issue. This is not some sort of bad-faith request.

      Further on, you start talking about the “Dunning-Kruger effect” and the “deeper understanding [that YT fellow has]”. If you know the YT fellow has a deeper understanding of the issue, why can’t you explain in layman’s terms why this is the case?

      I did watch the video, and it has nothing to do with the grifting approaches used by AI companies. The video is focused on explaining a relatively technical concept to non-specialists (not AI safety more broadly in the context of real-world use).

      Further on, you talk about non-LLM ML/AI safety issues without any sort of explanation of what you are referring to. Can you please let us know what you mean (I am genuinely curious)?

      You cite a paper; can you provide a brief summary of its findings and why they are relevant to a skeptical interpretation of “AI safety” messaging from organizations like Apollo Research and Anthropic?

      • PhilipTheBucket@ponder.cat (OP) · 3 days ago

        complete rejection of even the possibility that we are dealing with bad faith actors

        Incorrect. I definitely think we are dealing with bad-faith actors; I talk about that at the end of my very first message. I actually agree that the study they looked at, based on asking a chatbot things and then inferring judgements from the answers, is more or less useless. I’m just saying that doesn’t imply that the entire field of AI safety is made up of bad actors.

        You also claim that you can’t (or don’t want to) provide a brief explanation of your argument and you defer to the YT video. This is another red flag. It is reasonable for one to provide a 2-3 sentence overview if you actually have some knowledge of the issue.

        No. I said, “AI chat bots that do bizarre and pointless things, but are clearly capable of some kind of sophistication, are exactly the warning sign that as it gains new capabilities this is a danger we need to be aware of.” That’s a brief explanation of my argument. People deploying AI systems which then do unexpected or unwanted things, but can get some types of tasks done effectively, and then the companies not worrying about it, is exactly the problem. I just cited someone talking at more length about it, that’s all.

        I did watch the video and it has nothing to do with grifting approaches used by AI companies.

        Yes. Because they’re two different things. There is real AI safety, and then there is AI safety grift. I was talking about the former, so it makes sense that it wouldn’t overlap at all with the grift.

        Further on you talk about non-LLM ML/AI safety issues without any sort of explanation what you are referring to. Can you please let us know what you are referring to (I am genuinely curious)?

        Sure. Say you train a capable AI system to accomplish a goal. Take “maximize profit for my company” as an example. Then, years from now when the technology is more powerful than it is now, it might be able to pursue that goal so effectively that it’s going to destroy the earth. It might decide that enslaving all of humanity, and causing them to work full-time in the mines and donate all their income to the company’s balance sheet, is the way to get that done. If you try to disable it, it might prevent you, because if it’s disabled, then some other process might come in that won’t maximize the profit.

        It’s hard to realize how serious a threat that is when I explain it briefly like that, partly because current AI systems are so wimpy that they could never accomplish it. But if they keep moving forward, they will at some point become capable of doing that kind of thing, and of fighting us effectively if we try to make them stop, and once that bridge is crossed there’s no going back. We need to have AI safety firmly in mind as we devote such incredible resources and effort to making these things more powerful, and currently, we do not.

        I think it’s highly unlikely that whatever that system will be, will be an LLM. The absolutely constant confusion of “AI” with “LLM” in the people who are trying to dunk on me is probably the clearest sign, to me, that they’re just babbling in the wilderness instead of trying to even bother to understand what I’m saying and why AI safety might be a real thing.

        You cite a paper; can you provide a brief summary of what the findings are and why they are relevant to a skeptical interpretation of “AI safety” messaging from organization like Apollo Research and Anthropic?

        The only relevance the paper has is that I was challenged to show that LLMs are gaining capabilities over time. That’s obviously true, but also, sure, it’s been studied objectively. They set out a series of tasks, things like adding numbers together or basic reasoning problems, and then measured the performance of successive iterations of LLM technology on those tasks. Lo and behold, the newer ones can do things the old ones can’t.

        The paper isn’t itself directly relevant to the broader question, just the detail of “is AI technology getting any better.” I do think, as I said, that the current type of LLM technology has gone about as far as it’s going to go, and it will take some new type of breakthrough similar to the original LLM breakthroughs like “attention” for the overall technology to move forward. That kind of thing happens sometimes, though.

        • Skiluros@sh.itjust.works · 2 days ago

          I originally stated that I did not find your arguments convincing. I wasn’t talking about AI safety as a general concept, but about the overall discussion related to the article titled “Anthropic, Apollo astounded to find a chatbot will lie to you if you tell it to lie to you”.

          I didn’t find your initial post (or any of your posts in that thread) to be explicit in recognizing the potential for bad-faith actions from the likes of Anthropic and Apollo. On the contrary, you largely deny the concept of “criti-hype”. One can, in good faith, interpret this as de facto corporate PR promotion (whether that was the intention or not).

          You didn’t mention the hypothetical profit-maximization example in the thread, and your phrasing implied a current tool/service/framework, not a hypothetical.

          I don’t see how the YT video or the article summary (I did not read the paper) is honestly relevant to what was being discussed.

          I am honestly trying not to take sides (but perhaps I am failing at this?); it’s more that I’m suggesting that how people interpret “groupthink” can take many forms, and that “counter-contrarian” arguments are not, in and of themselves, some magical silver bullet.

          • PhilipTheBucket@ponder.cat (OP) · 2 days ago

            I wasn’t talking about AI safety as a general concept

            Okay, cool. I was. That was my whole point, that even if some is grift, AI safety itself is a real and important thing, and that’s an important thing to keep in mind.

            I think I’ve explained myself enough at this point. If you don’t know that the paperclips reference from the linked article is indicative of the exact profit maximization situation that I explained in more detail for you when you asked, or you can’t see how the paper I linked might be a reasonable response if someone complains that I haven’t given proof that AI technology has ever gained abilities over time, then I think I’ll leave you with those conclusions, if those are the conclusions you’ve reached.