Researchers conducted experimental surveys with more than 1,000 U.S. adults to evaluate the relationship between AI disclosure and consumer behavior.

The findings consistently showed that products described as using artificial intelligence were less popular.

“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions.”

  • magic_lobster_party@kbin.run · 3 months ago

    Using AI in your marketing is a sign you don’t have much else to show for it. People see through this. Your product should be strong even without having to mention AI.

    • marcos@lemmy.world · 3 months ago

      Using AI in your marketing is a sign you don’t have much else to show for it.

      To me it’s a sign that the product includes spyware, depends on flawless network infrastructure, and will stop working in two years.

      But my guess is that for most people it’s a sign the product is made by some psychopathic company like Facebook.

    • MonkderVierte@lemmy.ml · 3 months ago

      That, and LLMs confidently making up “facts”. And since LLMs are the form of AI with the most direct exposure to users, that’s the impression that sticks.

      • magic_lobster_party@kbin.run · 3 months ago

        I believe there are uses for LLMs beyond being “fact bots”. I see them more as a “universal text processor”. You already have a text, and you want it rewritten in a different style or language. Or you want to extract pieces of information from a text into something machine-readable. Or convert instructions in natural language into machine instructions.

        All the facts are at hand. It just converts the given information into something else.
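
        For the “extract into something machine-readable” case, here’s a minimal Python sketch of the pattern. `call_llm` is a hypothetical stand-in for whatever model or API you use (stubbed here so the snippet runs); the point is to ask for JSON only, then parse and validate before trusting the output.

        ```python
        import json

        # Hypothetical stand-in for whatever LLM you use; stubbed so the sketch runs.
        def call_llm(prompt: str) -> str:
            return '{"product": "standing desk", "quantity": 2, "deadline": "2024-07-01"}'

        PROMPT = """Extract the order details from the text below.
        Reply with ONLY a JSON object with the keys: product, quantity, deadline.

        Text:
        {text}
        """

        def extract_order(text: str) -> dict:
            raw = call_llm(PROMPT.format(text=text))
            data = json.loads(raw)  # fails loudly if the model drifted from pure JSON
            # The model should transform the given information, not invent fields.
            if set(data) != {"product", "quantity", "deadline"}:
                raise ValueError(f"unexpected keys in model output: {sorted(data)}")
            return data

        print(extract_order("Hi, we’d like two standing desks delivered by July 1st."))
        ```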

        • Kissaki@programming.dev (OP) · 3 months ago

          At work, we recently talked about AI. One use case that came up (mentioned by an AI consulting firm, not by us, and not actually suggested for us) was meeting summaries and extracting TODOs from them.

          My stance is that AI could be useful for summaries, so you can see what topics were being talked about. But I would never trust it to extract all the significant points, TODOs, or agreements. You still need humans to do that, with the list explicitly agreed on and confirmed during or after the meeting.

          It can also help to transcribe meetings. It could even translate them. Those things can be useful. But summarization should never be considered factual extraction of the significant points. Especially in a business context, or anything else where you actually care about being able to trust information.
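
          For the transcription part, a minimal sketch, assuming the open-source openai-whisper package and a local recording of the meeting (both assumptions on my part):

          ```python
          # pip install openai-whisper  (also needs ffmpeg installed)
          import whisper

          model = whisper.load_model("base")        # small and fast; larger models are more accurate
          result = model.transcribe("meeting.wav")  # pass task="translate" to translate into English

          print(result["text"])                     # the full transcript
          for seg in result["segments"]:            # timestamped segments, handy for skimming
              print(f'[{seg["start"]:6.1f}s] {seg["text"]}')
          ```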

          I wouldn’t [fully] trust it with transforming facts either. It can work where you can spot inaccuracies (long text, lots of context), or where you don’t care about them.

          Natural language instructions to machine instructions? I’d certainly be careful with that, and want to both contextualize and test-confirm it works well enough for the use case and context.

          • magic_lobster_party@kbin.run · 3 months ago

            Natural language instructions to machine instructions? I’d certainly be careful with that, and want to both contextualize and test-confirm it works well enough for the use case and context.

            I’m imagining it to be quite limited. Mostly talking with appliances in a way that’s more advanced than today. Instructions like “gradually dim down the lights in the living room until bedtime”, or “dim the lights in the living room when we watch a movie on TV”.
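
            As a sketch of how that could stay safely limited, everything below is hypothetical (the action names, the schema, the `call_llm` stub): the model is only allowed to pick from a fixed set of commands, and its output is validated before it ever reaches the lights.

            ```python
            import json

            # Allowed appliance commands and their argument types (hypothetical schema).
            ALLOWED = {
                "set_brightness":  {"room": str, "level": int},                  # level 0-100
                "fade_brightness": {"room": str, "level": int, "minutes": int},
            }

            PROMPT = """Convert the instruction into ONE JSON object of the form
            {{"action": <one of {actions}>, "args": {{...}}}} and nothing else.
            Instruction: {text}
            """

            # Hypothetical model call, stubbed so the sketch runs.
            def call_llm(prompt: str) -> str:
                return ('{"action": "fade_brightness",'
                        ' "args": {"room": "living room", "level": 10, "minutes": 30}}')

            def to_command(text: str) -> dict:
                cmd = json.loads(call_llm(PROMPT.format(actions=list(ALLOWED), text=text)))
                spec = ALLOWED[cmd["action"]]               # KeyError on any unknown action
                for name, typ in spec.items():
                    if not isinstance(cmd["args"].get(name), typ):
                        raise TypeError(f"bad or missing argument: {name}")
                return cmd                                  # now safe to hand to the light controller

            print(to_command("gradually dim down the lights in the living room until bedtime"))
            ```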