While I’m never excited about these general uses, it seems like they did a reasonably good job with this experiment. Hopefully other departments don’t just loosely ‘throw it in’…

Some tidbits:

The AI operated on a fixed dataset. It did not collect information, nor did it tap into the main client record systems, so privacy risks were low.

It did not learn from the queries staff made or the information they used with it, and did not add that information to its learning banks, the reports said.

The two tests - first with 25 staff, then with 300 - found that along with boosts to service came gains in employee wellbeing, such as helping people with ADHD or poor hearing focus better in meetings, or helping those with dyslexia revise content.
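Purely as an illustration (not the department’s actual system - all names and data here are hypothetical), a minimal sketch of what “fixed dataset, no learning from queries” could look like in practice:

```python
# Hypothetical sketch only: a toy assistant constrained the way the trial
# describes. The dataset and names below are made up for illustration.

FIXED_DATASET = {
    "policy_faq": "Staff travel claims must be filed within 30 days.",
    "style_guide": "Use plain language in all client correspondence.",
}

def answer(query: str) -> str:
    """Answer from the fixed dataset only.

    Deliberately absent, matching the trial's stated constraints:
    - no calls out to the main client record systems
    - no storage of the query or answer (no "learning banks")
    """
    words = query.lower().split()
    matches = [text for text in FIXED_DATASET.values()
               if any(word in text.lower() for word in words)]
    return " ".join(matches) if matches else "No answer in the fixed dataset."

print(answer("When are travel claims due?"))
```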

  • makingStuffForFun@lemmy.ml · 2 months ago

    So what happens to the privacy aspect once they fully go live? They’re saying the trial wasn’t invasive because the dataset was restricted, but once it goes live, the private data will no longer be restricted.

    • TagMeInSkipIGotThis@lemmy.nz · 2 months ago

      Note that the privacy risks were “low”, not “none”. So even this limited, safe trial carried some privacy risk, and that data was exposed to an AI that processes and stores it where, exactly?

  • MadMonkey@lemmy.world · 2 months ago

    Further info found on LinkedIn:

    Extensive foundation setting had occurred to ensure appropriate and safe use.

    With permission, I use Copilot/GPT-4 for my master’s degree, and my goodness it helps significantly with getting started on an assignment. Yes, it requires polishing, fact-checking, etc. But I really resonate with the inertia of starting from scratch.

    Would love to see a cost-benefit analysis though - obviously Microsoft is out to earn a profit, so I wonder if this is a win-win situation or not. A rough sketch of that calculation is below.
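    For illustration only, here’s the back-of-envelope shape such an analysis might take. Every figure below is a made-up placeholder, not real licence pricing or measured savings:

    ```python
    # Hypothetical back-of-envelope only: all numbers are placeholders
    # chosen to show the shape of the calculation, nothing more.

    LICENCE_COST_PER_USER_MONTH = 30.0  # assumed, not Microsoft's real price
    HOURS_SAVED_PER_USER_MONTH = 2.0    # assumed productivity gain
    HOURLY_STAFF_COST = 50.0            # assumed fully loaded hourly rate

    benefit = HOURS_SAVED_PER_USER_MONTH * HOURLY_STAFF_COST
    net = benefit - LICENCE_COST_PER_USER_MONTH
    break_even_hours = LICENCE_COST_PER_USER_MONTH / HOURLY_STAFF_COST

    print(f"Net per user per month: {net:+.2f}")
    print(f"Break-even at {break_even_hours:.1f} hours saved per month")
    ```

    The interesting question is whether the real hours saved (after polishing and fact-checking the output) actually clear that break-even line.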