Not everyone needs to have an opinion on AI


National Novel Writing Month is an American initiative that has become a worldwide pastime, in which participants attempt to write a 50,000-word manuscript in the month of November. Some of these first drafts eventually become novels — the initial version of what became Erin Morgenstern’s The Night Circus started life as a NaNoWriMo effort — but most don’t. And many participants cheerfully admit they are writing for the pleasure of creation rather than out of any expectation that they will gain either money or prestige from the activity.

In recent years, NaNoWriMo has been plagued by controversies. This year, the organisation has been hit by a row entirely of its own making, after declaring that while it does not have an explicit position on the use of generative artificial intelligence in writing, it believes that to “categorically condemn the use of AI writing tools” is both “ableist and classist”. (The implication that working-class people and people with disabilities can only write fiction with the help of generative AI, however, is apparently A-OK.)

The resulting blowback saw one of its board members, the writer Daniel José Older, resign in disgust. (NaNoWriMo has since apologised for, and retracted, its initial statement.)

There is very little at stake when you participate in NaNoWriMo, other, perhaps, than the goodwill of the friends and relations you might ask to read your work afterwards. Sign-ups on the website can talk to other participants on their discussion forums and are rewarded for hitting certain milestones with little graphics marking their achievement. If you want to write an experimental novel called A Mid-Career Academic’s Reflections Upon His Divorce that is simply the same four-letter expletive repeated over and over again, nothing is stopping you from doing so. If you want to type the words “write the first 50,000 words of a coming-of-age novel in the style of Paul Beatty” into ChatGPT and submit the rest, you can do so. In both cases, it is your own time you are wasting.

The whole argument is exceptionally silly but does hold two useful lessons.

One is that organisations and companies should have fewer opinions. Quite why NaNoWriMo needs to have an opinion about the use of generative AI is beyond me. Organisations should have a social conscience, but that should be limited to things they actually directly control. They should care about fairness when hiring, about the effects that their supply chains have on the world, just as NaNoWriMo should care about whether its discussion forums are well moderated (the subject of a previous controversy). But they should have little or no interest in issues they have no meaningful way to influence, such as what participants do with AI.

A good rule of thumb for an organisation considering whether to make a statement about a topic is to ask itself what material changes within its control it proposes to make as a result of doing so — and why. Those changes might range from donating money to changing how it hires. For example, the cosmetics retailer Lush has given large amounts of money to police reform charities, while Julian Richer, the founder of the home entertainment chain Richer Sounds, went so far as to turn his business into an employee-owned trust in 2019.

But if an organisation is either unwilling or unable to make real changes to how it operates or spends money, then nine times out of ten that is a sign that it will gain very little, and add very little, by speaking out.

The second lesson concerns how organisations should respond to the widespread adoption of generative AI. Just as NaNoWriMo can’t stop me asking Google Gemini to write a roman-à-clef about a dashingly handsome columnist who solves crimes, employers can’t reliably stop someone from writing their cover letter by the same method. That doesn’t mean they should necessarily embrace it, but it does mean that some forms of assessment have, inevitably, become a test of your ability to work well with generative AI as much as of your ability to write or research independently. Hiring, already one of the most difficult things any organisation does, is becoming more difficult still, and probation periods will become more important as a result.

Both lessons have something in common: they are a reminder that organisations shouldn’t sweat the stuff outside of their control. Part of writing a good novel is choosing the right words in the right places at the right time. So too is knowing when it is time for an organisation to speak — and when it should stay silent.


Posting != Endorsing the writer’s views.

  • Voroxpete@sh.itjust.works · 2 months ago

    Quite why NaNoWriMo needs to have an opinion about the use of generative AI is beyond me.

    A major sponsor of NaNoWriMo is ProWritingAid, who are pushing their new AI Sparks genAI writing tool.

    So, at a guess, I’d say that’s why.

  • Hammerheart@programming.dev · 2 months ago

    it believes that to “categorically condemn the use of AI writing tools” is both “ableist and classist”. (The implication that working-class people and people with disabilities can only write fiction with the help of generative AI, however, is apparently A-OK.)

    Nothing about their initial statement implies that the poor and disabled need to, or can only, use AI. This sort of bad-faith discourse irritates me. It’s a deliberate attempt to discredit those espousing an opposing opinion. It’s manipulative and intellectually dishonest.

  • Etterra@lemmy.world · 2 months ago

    People are absolutely entitled to an opinion, but others are likewise absolutely entitled to disagree with it.

    Anyway people having an opinion is how regulations and rules get started.

    Don’t like regulations or controls? Tough shit. We live in an organized society consisting of large numbers of people. We’ve had thousands of years to figure this shit out, and one constant in all that time is that when you have a lot of people, you need a lot of rules.

    Tldr: People always have opinions and you can’t change that, so build a bridge and get over yourself.

  • Technus@lemmy.zip · 2 months ago

    I feel like you either fear and/or despise generative AI, or you think it’s the best thing since sliced bread.

    There seems to be very little in-between.

    • Humanius@lemmy.world · 2 months ago

      People who have a more in-the-middle opinion generally don’t talk about AI a lot. People with the most extreme opinions on something tend to be the most vocal about them.

      Personally I think it’s a neat technology, and there probably exist use-cases where it will work decently well. I don’t think it’ll be able to do everything and anything that the AI companies are promising right now, but there are certainly some tasks where an AI tool could help increase efficiency.
      There are also issues with the way the companies behind the large language models source their training data, but that is not an inherent issue of the technology; it’s more an issue of the material not being properly licensed.

      I’m just curious to see where it all goes.

      • Kusimulkku@lemm.ee · 2 months ago

        It can do some neat stuff, but to me it has been pretty disappointing. Some things it has explained with real clarity and made really simple to understand, even from viewpoints I hadn’t considered before. But when I asked precise questions about things I know a lot about, it confidently lied to me outright and told me I was wrong, even though moments earlier it had shared the very facts it was now contradicting. That kinda broke the spell and made me question everything the prompt returns, to the point that it’s hard to use it for anything serious.

        It does good summaries, though, and can concisely explain simple stuff that I don’t need verified. It shows promise, but as a serious research tool, wrangling it into revealing when it’s lying and getting it to see that it’s contradicting itself is just more work than doing the research myself.

        • kennebel@lemmy.world · 2 months ago

          I tried Bing Chat (part of the work license), asked it some random questions, asked for more accurate information, and pointed out the flawed answers it gave. It told me that I was being rude and ended the session. (smh)

    • Todd Bonzalez@lemm.ee · 2 months ago

      I think a nuanced opinion is possible.

      I think that AI is a technological step forward with a lot of future applications that might be successful, but I also think it’s currently over-hyped and getting shoehorned into everything for dubious reasons.

      I think it’s problematic how AI companies are enriching themselves with other people’s content, but I also have serious disagreements with intellectual property law, and half-agree with those companies on the free use of information. I’m more forgiving of training AIs for research purposes than of immediately monetizing models trained on other people’s content; likewise, I’m more supportive of openly licensed models you can download than of proprietary models like ChatGPT.

      I think that AI-generated writing and pictures are boring compared to the things human beings create, but I still find generative AI software intriguing and have found entertainment in playing with various text and image models.

      I find AI evangelists and AI Luddites equally annoying, because neither has a rational opinion, usually because neither group actually knows anything technical about AI. The former will tell you that AI is already experiencing basic consciousness; the latter will tell you that AI is merely a buzzword and that AIs are nothing but stupid token-guessing machines. The truth is a moving target somewhere in between.

    • TimeSquirrel@kbin.melroy.org · 2 months ago

      It’s a tool that has to be used in a specific way. I use it to help me program (it’s very, very good at pattern recognition and matching, and programming is, at bottom, algorithmic patterns). Some idiots use it to do actual research on real-world stuff and think it’s a replacement for a search engine. I don’t see it any differently than a screwdriver. Some people are just going to stab themselves in the ear with it. Can’t help that.

    • technocrit@lemmy.dbzer0.com · 2 months ago

      I feel like there are a few extremely privileged people heavily “invested” in “AI”, and the rest of us are stuck listening to their grifting BS.

    • irotsoma@lemmy.world · 2 months ago

      Or you realize it’s not “intelligent” like the marketing suggests, and that it’s eating up tons of resources at almost all companies while being incapable of accurately doing the things it’s being used for (mostly replacing employees). So you are waiting, impatiently, for the buzz to fade so that the time and money executives are wasting on it can be spent on more substantial needs, like hiring people.

    • Voroxpete@sh.itjust.works · 2 months ago

      The reasonable in-between is despising without presently fearing.

      GenAI is a plagiarism engine. That’s really not something that can be defended. But as a means of automating away the jobs of writers it has proven itself to be so deeply deficient that there’s very little to fear at this time.

      The arrival of these tools has, however, served as a wake-up call to groups like the screenwriters’ guild, and I’m very glad that they’re getting proper rules in place now, before these tools become “good enough” to start producing the kind of low-grade verbal slurry that Hollywood will happily accept.

      • ContrarianTrail@lemm.ee · 2 months ago

        GenAI is a plagiarism engine. That’s really not something that can be defended.

        Human artists and writers take influence from others as well. Nobody creates art in a vacuum, and I don’t see generative AI as much different from the way humans operate. I’d argue it’s virtually impossible to write a sentence that has not been written before, and every new human-created piece of art probably has a really close equivalent that has already been done.

        • Voroxpete@sh.itjust.works · 2 months ago

          These things sound analogous if you know very little about how both generative AI and the human creative process actually work.

        • petrol_sniff_king@lemmy.blahaj.zone · 2 months ago

          I’d argue it’s virtually impossible to write a sentence that has not been written before

          I mean this sincerely: why bother getting excited about anything, then?

          A new Marvel movie, a new game, a new book, a new song. If none of them are unique in any way, what is the point of it all? Why have generative AI go through this song and dance? Why have people do it? Why waste everyone’s time?

          If the plagiarism engine is acceptable because it’s not possible to be unique anyway… I just, I don’t know how you go on living. It all sounds so unbelievably boring.

          • ContrarianTrail@lemm.ee · 2 months ago

            Just because you’re using standard materials doesn’t mean you can’t combine them in unique ways, and even if every possible sentence has been said before, that doesn’t mean everyone has heard it before.

            The point is that not being allowed to pull from existing content is an impossible standard. Nobody is as original as they may think they are, and even when you truly come up with an idea independently, it’s highly likely that you’re not the first to think of it.

            Why do you find it such a depressing idea? I face this attitude often when discussing free will as well, which I genuinely don’t believe in either but that has zero effect on my ability to get excited or motivated about things.

            • petrol_sniff_king@lemmy.blahaj.zone · 2 months ago

              doesn’t mean you can’t combine them in unique ways

              Okay, so you don’t believe new things can’t be unique. You just think that plagiarism is when one person uses the word ‘the’ and then a second person uses the word ‘the’.

              Why do you find it such a depressing idea?

              That art is dead? Through sheer saturation alone, no one has anything left to say? That watching the new Cinderella is line-by-line the same as watching the old Cinderella, and the money machine keeps this corpse moving along only because people are too stupid to realize they’re being sold books from a library? I really don’t know how you couldn’t.

              This is like asking me why a polluted lake is sad.

              • ContrarianTrail@lemm.ee · 2 months ago

                I’ll ignore the first part as it doesn’t represent my view.

                I don’t think art is dead, and I disagree with the implication that AI simply hands you a copy of something somebody else did before. That’s not how generative AI works; there would be nothing generative about that. Instead it studies the prior work of humans, finds patterns and combines these in unique and novel ways. If I ask for a hybrid of Harry Potter and The Lord of the Rings then obviously it’s using existing building blocks, but the outcome is still something that has not been written before.

                I’m an artist myself. I take photographs. I’m under no illusion that all my photos are completely unique; they’re not. I’m well aware that if I had a database of every single picture ever taken, there would be hundreds if not thousands of photos that are near identical to the ones I’ve taken and am so proud of. That takes zero joy out of my creative process or out of the enjoyment other people find in my work. Nobody has seen every photo in the world. My art is not meaningless just because someone did it before me.

                • petrol_sniff_king@lemmy.blahaj.zone · 2 months ago

                  I was equating singular words and entire sentences on purpose.

                  If you can recombine sentences in interesting ways, into paragraphs that are your own ideas, that isn’t plagiarism. Why would “people can’t construct unique sentences either” be a rebuttal if that’s not what plagiarism is?

                  Instead it studies the prior work of humans, finds patterns and combines these in unique and novel ways.

                  You’re anthropomorphising.

                  LLMs are little clink-clink machines that produce the most typical output. That’s how they’re trained. Ten thousand inputs say this image is of a streetlight? That’s how it knows.
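
                  To make the “most typical output” point concrete, here is a toy Kotlin sketch of greedy decoding. The vocabulary and the scores are entirely made up for illustration; in a real LLM they come out of billions of learned weights, not a hand-written map.

                      // A toy model of greedy decoding: score every candidate token and
                      // pick the highest-scoring one. All numbers are invented; nothing
                      // here is a real model.
                      fun pickMostTypicalToken(scores: Map<String, Double>): String? =
                          scores.entries.maxByOrNull { it.value }?.key

                      fun main() {
                          // Pretend next-token scores after the prompt "Frodo carried the ..."
                          // (purely illustrative numbers, not taken from any real model).
                          val scores = mapOf(
                              "ring" to 7.9,
                              "sword" to 3.1,
                              "wizard" to 2.4,
                              "banana" to 0.1
                          )
                          println(pickMostTypicalToken(scores)) // prints "ring"
                      }

                  Real systems add sampling and temperature on top, but the core move is the same: the statistically most expected continuation wins.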

                  The fact that an LLM knows what The Lord of the Rings is at all means that Tolkien’s words, the images, the sounds, are all encoded in its weights somewhere. You can’t see them (it’s a black box), but they live there.

                  Could you say the same of the human brain? Sure. I know what a neuron is.

                  But, LLMs are not people.

                  All of that is beside the point, though. I was just floored by how cynical you could be about your own supposed craft.

                  A photograph of, say, a pretty flower is fantastic. As an enjoyer of art myself, I love it when people communicate things. People can share in the beauty that you saw. They can talk about it, about how the colors and the framing make them feel. But if your view is that you’re not actually adding anything, that you’re just doing more of what already exists, I really don’t know why you bother.

                  Nobody has seen every photo in the world.

                  Okay, assume someone has. Is your art meaningless, then? All of photography is just spectacle, and all the spectacles have been seen?

      • huginn@feddit.it · 2 months ago

        The plagiarism engine effect is exactly what you need for a good programming tool. Most problems you’re ever going to encounter are solved and GenAI becomes a very complex code autocomplete.

        An LLM constructed only out of open source data could do an excellent job as a tool in this capacity. No theft required.

        For writing prose it’s absolutely trash, and everyone using it for that purpose should feel ashamed.

        • Voroxpete@sh.itjust.works · 2 months ago

          The plagiarism engine effect is exactly what you need for a good programming tool. Most problems you’re ever going to encounter are solved and GenAI becomes a very complex code autocomplete.

          In theory, yes, although having actually tried using genAI as a programming tool, the actual results are deeply lacklustre. It sort of works, under the right circumstances, but only if you already know enough to confidently do the job yourself, at which point the value of having an AI do it for you, and then having to check the AI’s work for any of a million possible fuck-ups, seems limited at best.

          • huginn@feddit.it · 2 months ago

            Yeah, my usage of it is similarly limited. But in my experience the plagiarism engine is more useful than it is annoying, especially for writing KDoc or unit test variations. Write one test, write the name of the next, and have autocomplete fill it out with the expected conditional variation, roughly like the sketch below.
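
            For example (the class, the function under test, and the assertions are all invented for illustration), a minimal Kotlin/JUnit sketch of that workflow might look like this:

                import org.junit.jupiter.api.Assertions.assertEquals
                import org.junit.jupiter.api.Test

                // Hypothetical function under test, defined here so the sketch is self-contained.
                fun calculatePrice(base: Double, isMember: Boolean): Double =
                    if (isMember) base * 0.9 else base

                class DiscountCalculatorTest {
                    // Written by hand.
                    @Test
                    fun `applies ten percent discount for members`() {
                        assertEquals(90.0, calculatePrice(100.0, isMember = true), 0.001)
                    }

                    // Only the test name was typed; the completion tends to propose this
                    // mirrored body with the conditional flipped.
                    @Test
                    fun `applies no discount for non-members`() {
                        assertEquals(100.0, calculatePrice(100.0, isMember = false), 0.001)
                    }
                }

            The suggested body still needs a human review before it goes in, but that review is usually quicker than typing the variation out.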

              • huginn@feddit.it · 2 months ago

                Ish.

                You’ll have assertions that are entirely new or different, other pieces of setup or teardown. It really is one of the best use cases for GH’s Copilot that I’ve run across.

                In my day-to-day work, the IntelliJ autocomplete is what I prefer.

                • Voroxpete@sh.itjust.works · 2 months ago

                  Noted. I’ll have to play around with that sometime.

                  Despite my obvious stance as an AI skeptic, I have no problem with putting it to use in places where it can be used effectively (and ethically). I’ve just found that in practice, those uses are vanishingly few. I’m not on some noble quest to rid the world of computers; I just don’t like being sold overhyped crap.

                  I’m also hesitant to try to rebuild any part of my workflow around the current generation of these tools, when they obviously aren’t going to exist in a few years, or will exist but at an exorbitant price. The cost to run genAI is far, far higher than any entity (even Microsoft) is willing to sustain long term. We’re in the “give it away or make it super cheap to get everyone bought in” phase right now, but the enshittification will come hard and fast on this one, much sooner than anyone thinks. OpenAI are literally burning billions just on compute right now. It’s unsustainable. Short of some kind of magical innovation that brings those compute costs down a hundred- or thousand-fold, this isn’t going to stick around.

    • Jamyang@lemmy.world (OP) · 2 months ago

      Me? I am more of a centrist. I realize that AI has its benefits and drawbacks.

      • William@lemmy.world · 2 months ago

        I don’t think they meant “you” you. They meant “you” in the general sense. They’re saying that people either love it or hate it, with not very many centrists.

        I’m not sure that’s true, though. I think, like you, most people are either centrist, or have no opinion at all. The vocal people go all one way or the other, though… Except you for some reason. :D

    • Voroxpete@sh.itjust.works · 2 months ago

      No words can express how little interest I have in Bill Gates’s opinion on how I should write.

    • VeganCheesecake@lemmy.blahaj.zone · 2 months ago

      At the moment, LLMs just aren’t very good at writing anything that is interesting. I experimented with them a bit for shits and giggles, and tried out several different local models and online services.

      I’m not saying it’s impossible they’ll improve, but for me, as someone who enjoys writing, having your writing done by a tool just misses the point. I like to write because it allows me to express myself, and offloading parts of that process to a tool makes it less personal, less me.

      I won’t judge anyone with a different opinion, but for me part of the enjoyment of reading also comes from seeing how the author and their experiences colour their writing, which using such a tool, in a way, also diminishes. At the moment, I just can’t see how the prevalence of LLMs would make creative writing better.