OpenAI CPO Kevin Weil says their o1 model can now write legal briefs that previously were the domain of $1000/hour associates: "what does it mean when you can suddenly do $8000 of work in 5 minutes for $3 of API credits?" — Tsarathustra (@tsarnick), October 19, 2024
This headline certainly seems sensational, but I’ve also started seeing some really nice uses of LLMs cropping up. Some of the newer API features make them a lot more practical for building things other than simple chatbots. It remains to be seen whether the value delivered is worth the energy and data costs long term, but LLMs in general seem to be finding their feet in some ways.
Sure. I’m mainly basing my opinion on some more recent research (which I can’t find right now) that had some disheartening numbers on AI use in programming. As far as I remember, it found that AI assistance saves some time at the end of the day, but not a lot, while the code produced by programmers with AI help contains significantly more bugs. That makes me doubt it’s a good fit to replace professionals (at this time).
And secondly, the stock prices of companies like Nvidia tell us that some of the hot air is escaping from the AI bubble. I’d say things are calming down a bit, not accelerating.
And regarding law, there’s this funny story from a while back: https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/
Well, maybe funny for everyone except that lawyer and his client. And research hasn’t made fundamental progress on hallucinations since then. I’d say AI will start replacing professionals once that’s solved, and that’s when it will become massively useful.
And of course it’s already very useful within some narrower use cases.
Oh yeah, I’m talking about calling the LLM with code, not using the LLM to help write the code. They still suck at providing anything reliant on factual accuracy. What they are very good at is extracting meaning from text, e.g. taking a user’s natural language request and deciding what to do with it from a set of options.
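For instance, here’s a minimal sketch of that pattern, assuming the OpenAI Python SDK; the model name, option labels, and routing logic are just placeholders for illustration:

```python
# Minimal sketch: using an LLM as an intent router rather than a fact source.
# Assumes the official OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; model and options are placeholders.
from openai import OpenAI

client = OpenAI()

OPTIONS = ["check_order_status", "cancel_order", "talk_to_human"]

def route_request(user_message: str) -> str:
    """Ask the model to map a free-form request onto one known action."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the user's request as exactly one of: "
                    + ", ".join(OPTIONS)
                    + ". Reply with that single label and nothing else."
                ),
            },
            {"role": "user", "content": user_message},
        ],
        temperature=0,  # keep the classification output stable
    )
    label = response.choices[0].message.content.strip()
    # Fall back to a safe default if the model goes off-script.
    return label if label in OPTIONS else "talk_to_human"

print(route_request("where's my package, it's been two weeks?"))
# -> "check_order_status"
```

The point being: the model never has to state a fact here, it only has to map messy human phrasing onto one of a handful of actions your code already knows how to handle.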
Sure. I believe that’s called “intent classification” and has been around in natural language processing for quite some time.
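For comparison, a classical version of the same idea, as a minimal sketch with scikit-learn; the tiny training set here is invented purely for illustration:

```python
# Classical intent classification: TF-IDF features + a linear classifier.
# Requires scikit-learn; the toy training data below is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "where is my order", "has my package shipped",  # check_order_status
    "cancel my order", "I want a refund",           # cancel_order
    "let me speak to a person", "get me an agent",  # talk_to_human
]
labels = [
    "check_order_status", "check_order_status",
    "cancel_order", "cancel_order",
    "talk_to_human", "talk_to_human",
]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["my package still hasn't arrived"])[0])
# -> "check_order_status" (given enough real training data, at least)
```

The trade-off is that this needs labeled examples per intent and handles unseen phrasings less gracefully, which is exactly where the LLM approach earns its keep.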