cross-posted from: https://lemmy.world/post/19416727

Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved evaluating several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations and references to more regulation, and to include page references and context.
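The trial’s exact prompt was not published, but the task described above can be sketched as a simple prompt-builder. Everything below — the wording, the function name, the structure — is an illustrative assumption, not the prompt ASIC or Amazon actually used:

```python
# Hypothetical prompt construction for the summarisation task described
# above. The wording is assumed; the real trial prompt was not published.

def build_summary_prompt(submission_text: str) -> str:
    """Assemble a focused summarisation prompt for a model like Llama2-70B."""
    instructions = (
        "Summarise the following inquiry submission. Focus on:\n"
        "- any mentions of ASIC\n"
        "- recommendations made by the author\n"
        "- references to more regulation\n"
        "For each point, include the page reference and surrounding context.\n\n"
        "Submission:\n"
    )
    return instructions + submission_text


print(build_summary_prompt("Example submission text."))
```

The point of the focus list is to constrain the model: generic “summarise this” prompts drift, so the trial reportedly specified exactly which elements the summary had to capture.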

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, with the humans scoring 81% on an internal rubric against the machine’s 47%.
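As a rough sketch of how per-criterion rubric scores could roll up into percentages like those reported above — the criteria names and the 0–5 point scale here are assumptions; only the 81% and 47% figures come from the trial:

```python
# Illustrative aggregation of blind-review rubric scores into a single
# percentage. The criteria and the 0-5 scale are assumed for illustration;
# ASIC's actual rubric was not published.

def percentage_score(scores: dict, max_per_criterion: int = 5) -> float:
    """Average rubric scores across criteria, expressed as a percentage."""
    total = sum(scores.values())
    possible = max_per_criterion * len(scores)
    return round(100 * total / possible, 1)

# Hypothetical scores for one summary across the five reported criteria.
human_scores = {
    "coherency": 4,
    "length": 4,
    "asic_references": 4,
    "regulation_references": 4,
    "recommendations": 4,
}
print(percentage_score(human_scores))  # 80.0 on this made-up example
```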

  • Doombot1@lemmy.one · +20/-1 · 2 months ago

    Every way, except for speed… which is all most people give a crap about

  • Linktank@lemmy.today · +5/-21 · 2 months ago

    AI is basically still in its infancy. The fact that this is in question goes to show that it’s already close to surpassing us.

    • don@lemm.ee · +12/-3 · 2 months ago

      How does “worse than humans at summarizing” = “gonna reach singularity here rfq”?

      • sazey@lemmy.world · +13/-3 · 2 months ago

        it makes complete sense if you huff copious quantities of hopium first

      • pixxelkick@lemmy.world · +4/-11 · edited · 2 months ago

It scored 47% vs the humans’ 81%

        That’s already pretty compelling that it’s scoring so well considering how this tech is still so new.

        It’s very rapidly catching up.

        Also keep in mind they were using a more generic ML model and not one specifically tuned to this task.

      • Linktank@lemmy.today · +3/-12 · edited · 2 months ago

        Oh did I say that somewhere in my comment? Please point out to me the part where I mentioned singularity.

        Fucking Dons man, every one I have ever met has been some kind of dick.

        • don@lemm.ee · +7/-3 · edited · 2 months ago

          You still didn’t answer my question. I asked how being shit at summarizing means AI’s going to surpass us.

          Also, read your own comment. You never mentioned singularity, I did. It’s a common term used to refer to AI passing us, which you did mention as happening soon, IYO.

          • Linktank@lemmy.today · +1/-11 · 2 months ago

            Did you ever watch the videos of the robots learning to walk? We’re at that stage right now with summarizing. Pretty soon they’ll be dancing and jumping at that too.

            If you fail to understand how progress works then I don’t think there’s an explanation I can offer you.

            • don@lemm.ee · +8/-1 · 2 months ago

Are you deliberately ignoring the point of the article? If AI is worse at summarizing than a human, then it hasn’t gotten to the point of summarizing better than a human. It’s gotten to the point of being able to fail at it worse than humans. It will have passed summarizing when it’s at least on par with the average human.

              I have seen many videos of robots trying to walk and often failing, while the average human baby consistently learned to do it faster than the robots, who still failed. Your position seems to be “Any progress AI programmers make means AI’s gonna overtake us really soon!”

              There’s such a thing as negative progress, and if these are your best examples of progress, then I don’t think you’re capable of giving an effective explanation of the concept to begin with.

              • Linktank@lemmy.today · +1/-11 · 2 months ago

                I don’t need to convince you of anything. I also don’t really care if you think it will happen soon or at all.

                I’m impressed with the progress so far and I believe that models will become available that will be able to do a far better job than an average human in a relatively short period of time. That being the next decade or two.

                Your defeatist whiney argumentative comments aside.

    • MTK@lemmy.world · +1 · edited · 2 months ago

AI products are in their infancy; AI models are not, and as far as we can tell, LLMs might be stuck already.