or is it just bean counters optimizing enshittification and monetization of a previously free product? oh, it's certainly the former, bazinga

Unproven hypothesis seeks to explain ChatGPT’s seemingly new reluctance to do hard work.

In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more "lazy," reportedly refusing to do some tasks or returning simplified results. Since then, OpenAI has admitted that it's an issue, but the company isn't sure why. The answer may be what some are calling the "winter break hypothesis." While unproven, the fact that AI researchers are taking it seriously shows how weird the world of AI language models has become.

On Monday, a developer named Rob Lynch announced on X that he had tested GPT-4 Turbo through the API over the weekend and found shorter completions when the model was fed a December date (4,086 characters) than when it was fed a May date (4,298 characters). Lynch claimed the results were statistically significant.

  • GinAndJuche [comrade/them]@hexbear.net · 11 months ago
    found shorter completions when the model is fed a December date (4,086 characters) than when fed a May date (4,298 characters).

    Duh, the longer you let it run the more data it has. Why wouldn’t the newer version be better? /s