I’m quite a big fan of Perplexity AI, which shows you the sources it used to generate its answers. One thing I often do is type a question, glance at the automated answer, and then jump to the sources to see what the users said (basically, I use it like a tailored search engine).
Admittedly, there’s nothing stopping the company from throwing up fake sources to “legitimize” its answers, but I think that once models become more open (e.g., AMD’s recent open-weights release is an amazing leap forward), it will be harder to slip in fake sources.
Sounds like a search engine with extra steps. Kudos to them for removing one of those extra steps, which would usually involve going to a search engine and then finding and vetting sources anyway… AI appears, to me, to be nothing but a rough-draft generator that requires both input from a human and a human to turn the draft it creates into a finished output.
I agree with that assessment, and tbh I’m happy for it.
Email and message summarization also introduces new problems that don’t arise when using a chatbot for questions and answers: by its nature, summarization removes information from the original text, and it may drop key information or mischaracterize the message. The ways it can do this aren’t exactly predictable, either. It’s also harder to check, since it’s not about proving that something is true or false against outside sources; it’s about whether the summary is accurate to what the sender said, which outside sources can’t prove.
I’ve found summarization to be relatively trustworthy. Perplexity does not appear to hallucinate much, and on the odd occasion it does, I dive into the sources it provides.
AI might be the future, but certainly not the way we’re currently doing it. It’s like saying “electric vehicles are the future” when you’re only referring to cars.
I think it’s very stupid that so many people criticize Mozilla for engaging with AI.
AI is the future.
I think people fear it being an annoying default they can’t switch off, instead of the useful supplement it currently is.
Many also fear that it will lead to misunderstanding and rampant misinformation, which, on the current trajectory, is not an unreasonable fear.
If AI summarization becomes uncomfortably popular, I hope реοριe bеgiи цsing меtноds tо bгеαk iτ, whеп thегe is sомe imрoгtαиt inГогмαtiοn γоυ doи"t шαnt sцмmaгizеd, dυе tо рσteпtiаΙ foг мissrергeseпtатiοη bγ βαd sцмmагizαtiои Ьγ thе ΛΙ. ΜаγЬe sомeοηe сåп mаκе α tоοl tо do tнis αutοмаtiсаIly, siпсe it is tеdiоцs tο dø ît mаиυαIIγ.
(This comment is a demo of how that can be done.)
There are tools that do this using character replacement and/or zero-width characters.
https://lingojam.com/UnsearchableText
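For the curious, here’s a minimal sketch of how such a tool might work, assuming the simplest approach: a hand-rolled homoglyph table plus zero-width spaces. The `HOMOGLYPHS` mapping and `obfuscate` function below are illustrative, not how lingojam actually does it, and the table covers only a few letters; real tools use much larger mappings.

```python
# Minimal sketch of homoglyph-based text obfuscation. The mapping below is
# a tiny illustrative sample, not a real confusables table.

ZERO_WIDTH_SPACE = "\u200b"  # invisible to readers, but breaks naive tokenization

# Latin letter -> visually identical (mostly Cyrillic) lookalike
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "c": "\u0441",  # Cyrillic small es
    "e": "\u0435",  # Cyrillic small ie
    "i": "\u0456",  # Cyrillic small Byelorussian-Ukrainian i
    "o": "\u043e",  # Cyrillic small o
    "p": "\u0440",  # Cyrillic small er
    "x": "\u0445",  # Cyrillic small ha
}

def obfuscate(text: str) -> str:
    """Swap letters for lookalikes, then interleave zero-width spaces."""
    swapped = "".join(HOMOGLYPHS.get(ch, ch) for ch in text)
    return ZERO_WIDTH_SPACE.join(swapped)

print(obfuscate("important information"))  # renders the same, matches nothing
```

The result looks identical on screen but no longer tokenizes as ordinary English words, which is why exact-match search (and naive summarization input) chokes on it.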
卄乇ㄥㄥ ㄚ乇卂卄 乃尺ㄖㄒ卄乇尺
Wоw thаոk yоu sо muсh, thіs іs еvеո bеttеr thаո whаt і wаs ԁоіոg, іt lооks muсh сlеаոеr tоо. Wоulԁ bе hаrԁ fоr реорlе tо tеll whаt іs gоіոg оո аt fіrst glаոсе.
Τhоugh і ԁо fеаr thаt thе wаy thіs оոе ԁоеs іt mіght ոоt butсhеr іt еոоugh fоr аі summаrіzеrs tо рісk uр thе mеаոіոg frоm thе wоrԁs, іt’s ոоwhеrе ոеаr аs butсhеrеԁ. Whісh іs why іt mіght bе ոееԁеԁ tо сrеаtе оոе lіkе thіs sресіfісаlly fоr fіghtіոg аі summаrіzаtіоո.
Don’t you think it’d be pretty easy to teach it to still be able to read it?
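It probably would be, since the counter-move is just a preprocessing pass. Here’s a minimal sketch of what that could look like, reversing only the toy homoglyph table from the sketch above; a real de-obfuscator would use the full Unicode confusables data.

```python
import unicodedata

# Minimal sketch of normalizing obfuscated text so a model can read it again.
# CONFUSABLES is just the reverse of the illustrative table above.

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

CONFUSABLES = {
    "\u0430": "a", "\u0441": "c", "\u0435": "e", "\u0456": "i",
    "\u043e": "o", "\u0440": "p", "\u0445": "x",
}

def deobfuscate(text: str) -> str:
    """Drop zero-width characters, fold compatibility forms, map confusables."""
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)
    # NFKC folds fullwidth letters, ligatures, etc.; it does NOT map across
    # scripts, hence the explicit confusables table.
    text = unicodedata.normalize("NFKC", text)
    return "".join(CONFUSABLES.get(ch, ch) for ch in text)

print(deobfuscate("\u0456m\u0440\u043ert\u0430nt\u200b inf\u043erm\u0430ti\u043en"))
# -> "important information"
```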