• Ilandar · 9 days ago

    I won’t pretend to know what the solution is but I am very grateful that he is raising the issue publicly. We need to start working on this stuff now, before it gets completely ingrained and normalised within society like algorithmic social media did.

  • ⸻ Ban DHMO 🇦🇺 ⸻OPM · 9 days ago

    Whilst I agree that AI shouldn’t be used in elections, would a ban on such use lull people into a false sense of security?

    • MHLoppy@fedia.io · 9 days ago

      As the article already states, even with a ban it would still be a reactive process of taking it down after the fact, which we already know Isn’t Great™ based on other times when the juicy-but-false headlines that come out first get more eyeballs than the later corrections.

      Still, on balance it’s probably better than a complete free-for-all? I guess there’d need to be clear lines about what’s okay and what’s not, as it seems very easy to overdo it if the language is too vague (e.g., in an extreme case, accidentally banning all manipulation of images or video in political contexts).

    • spiffmeister · 9 days ago

      Fines that were actually enforced properly might work. In general though I think the feds probably need to run an education campaign about AI.

      • No1 · 9 days ago

        need to run an education campaign about AI

        Well that could backfire. It might require them to teach critical thinking!

    • No1 · 9 days ago

      It’s like they think if they legislate to ban it, it couldn’t possibly happen…

      • Gorgritch_Umie_Killa · 8 days ago (edited)

        I’m not necessarily advocating it. I put the link up because it’s a useful addition for a post like this.

        That said, the idea that bans don’t work takes the ‘war on drugs’/Prohibition examples out of context.

        (I’m writing this from memory, of reading rather than experience :p, because I don’t have time to go and reread it all, so apologies if the details are wrong; the essence should be there though.)

        Prohibition was enforced on the population by ideological puritans in power at the time. There seems to have been no clear popular support for Prohibition’s rationale, which is a driving reason it was so hard to maintain and was eventually dropped.

        ‘War on drugs’ ideas should be dropped because the evidence shows the American public has not benefited from the policy position; in fact, the ‘War on Drugs’ has likely increased the costs and harms associated with the drug trade rather than diminished them. So, while we can say the ‘War on Drugs’ enjoyed popular support, in contrast to Prohibition, the health, economic, violence, and consumption trends have all moved against the policy over the period, meaning it has failed in its stated objective and needs changing.

        The point of referring to these two examples when considering other bans isn’t to sit on the ideological plane of libertarians and shout “All bans are bad, you won’t tread on me”, but to consider the negative implications of a proposed ban and how its reality could differ from the vision, and adjust accordingly.

        1. There are enforced bans throughout society: think driving without a seatbelt, driving on the wrong side of the road, electrician sign-offs, work with and manufacture of radioactive materials. Essentially anything enforceable by the police and courts can be argued to be a ‘banned practice’.

        2. A ban targeting political party practices is far more enforceable than population-wide bans; it’s a smaller ‘market’, with known players, to regulate. I believe lobby groups in Australia also have to identify themselves when they put out attack ads.

        All that said, if a ban were implemented it wouldn’t stop AI use in political advertising, but it would set the tone, and that means a lot. We as a society can’t stop murders, but we can build up barriers against their use as a legitimate tool for pursuing one’s goals.

        • Norah - She/They@lemmy.blahaj.zone · 7 days ago

          Okay, so I have a few points in rebuttal, but I think we’re generally on the same page. I would absolutely support a ban on political parties using deepfakes of opponents in attack ads or otherwise broadcasting them. In fact, I would support more than just fines. If it was discovered after the polls, I would support a full recall of the election. I’d suggest deregistration from that election as well, but I’d hope the voting public can show its distaste for that behaviour. I’d also support some level of required due diligence for news media in ensuring what they’re publishing is real, though there has to be consideration of the suppression of important information. Can you imagine if the Watergate tapes were never released because it couldn’t be proved beyond a shadow of a doubt that they were real?

          So I guess that brings me to my problem with this petition. It seems to be asking for a general ban on the entire population, and that’s just not something I can support. There are, and should be, higher standards of ethics expected of both of those groups. However, I don’t think they should be enforced on the average citizen, who just doesn’t have the ability to get that stuff in front of eyeballs without help.

          The other side of this (I know we’d basically be pissing into the wind with how small we are) is that regulations targeting the companies themselves need to be a part of this. You should never be able to type the name of a notable figure (or anyone, really) into a generative AI and get it to spit out an image or video of that person. It’s being used to make porn of celebrities, which is incredibly damaging, but there have now been cases of students creating it of other students by feeding it pictures. If AI companies won’t create safeguards, we need to make them. As it stands, it requires an immense amount of power to train picture- and video-generating AIs that can fool people, so targeting larger actors makes the most sense.

          • Gorgritch_Umie_Killa · 7 days ago

            Your first paragraph, about the politics and news media, yeah, I generally agree with.

            The point about ethics in the news media I see as part of the problem surrounding the Australian Press Council and its principal funders being the organisations it supposedly investigates. It’s an ethics system set up to generate conflicts of interest. The media have a watchdog that’s more like a chew toy.

            “We are calling on Minister Farrell to legislate a ban on the use of deepfakes in elections before the next federal election” (from the change.org petition).

            Seems that Pocock is referring to AI deepfakes’ use in elections. Maybe I haven’t read the parts you’re referring to?

            While I’ve seen no actual wording, because I’ve not seen an example of the proposed legislation, I would assume its wording would be vague enough to potentially catch many people, likely including individual citizens. Australian courts would, as they always try to, sort out the chaff by testing the intent of parties accused of using deepfakes. But that’s me speculating, and I see nothing other than the quote above to suggest how far they’d go beyond “in elections”.

            Your last paragraph is the part I disagree with you on. It’s not pissing in the wind to regulate a large company; in fact, it’s a necessity for smaller countries like Australia.

            Like you say, targeting these large tech companies makes sense, and as the news media bargaining code showed, it can be done. Whatever we think about that particular issue, the tech companies played the government’s game. Large countries can find all sorts of excuses not to reach consensus, and sometimes need a leading example; in telecommunications cases it can’t really be a US state that takes the lead, as it can on other issues, but has to be a separate country. I think that’s due to telecommunications law being federal jurisdiction.