Much greater than now, given the open discoverability of the original post here, versus the walled-off content we have today, locked away in discord servers and the like.
Furthermore, the act of replying to that post will have bumped it right back to the top for everyone to see.
I agree with this. We sorely miss these forums with civil replies, now overshadowed by "influencer" culture, which is optimized for engagement incentives. Pure discussions like this example are such a stalwart of the open web.
On the other hand, small websites and forums can disappear, but that openness allows platforms like archive.org to capture and "fossilize" them.
These forums still exist. Typically with much older and mature discussions, as the users have aged alongside the forums. Nothing is stopping you from joining them now.
My Something Awful forums account is over 25 years old at this point. The software, standards, and moderation style are approximately unchanged, complete with the 10 dollar sign-up fee to keep out the spam.
A model or new model version X is released, everyone is really impressed.
3 months later, "Did they nerf X?"
It's been this way since the original ChatGPT release.
The answer is typically no; it's just that your expectations have risen. What was previously a mind-blowing improvement is now expected, and any missteps feel amplified.
This is not always true. LLMs do get nerfed, and quite regularly, usually because the provider discovers that users are using them more than expected, because of user abuse, or simply because the service attracts a larger user base. One of the recent nerfs is the Gemini context window, which was drastically reduced.
What we need is an open and independent way of testing LLMs, and stricter regulation requiring disclosure of product changes when the product is paid for under a subscription or prepaid plan.
Unfortunately, it has paywalled most of the historical data since I last looked at it, but it's interesting that Opus has dipped below Sonnet on overall performance.
Interesting! I was just thinking about pinging the creator of simple-bench.com and asking them if they intend to re-benchmark models after 3 months. I've noticed, in particular, Gemini models dramatically declining in quality after the initial hype cycle. Gemini 3 Pro _was_ my top performer and has slowly slipped to "is it even worth asking", complete with gpt-4o style glazing. It's been frustrating. I had been working on a very custom benchmark, and over the course of it Gemini 3 Pro and Flash both started underperforming by 20% or more. I wondered if I had subtly broken my benchmark but ultimately started seeing the same behavior in general online queries (Google AI Studio).
> What we need is an open and independent way of testing LLMs
I mean, that's part of the problem: as far as I know, no claim of "this model has gotten worse since release!" has ever been validated by benchmarks. Obviously benchmarking models is an extremely hard problem, and you can try and make the case that the regressions aren't being captured by the benchmarks somehow, but until we have a repeatable benchmark which shows the regression, none of these companies are going to give you a refund based on your vibes.
We've got a lot of available benchmarks & modifying at least some of those benchmarks doesn't seem particularly difficult: https://arc.markbarney.net/re-arc
To reduce cost & maintain credibility, we could have the benchmarks run through a public CI system.
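Something along these lines could run on a schedule in a public CI job, with the results file committed to the repo so every run is diffable. This is only a sketch: the endpoint, model name, task file, and pass/fail scoring are placeholders, and it assumes an OpenAI-compatible chat completions API.

```python
# Hypothetical sketch of a scheduled, publicly auditable benchmark run.
# The API URL, model name, tasks.json format, and substring scoring are all
# placeholders, not any provider's actual setup.
import json, os, datetime
import urllib.request

API_URL = os.environ.get("API_URL", "https://api.example.com/v1/chat/completions")
API_KEY = os.environ["API_KEY"]
MODEL = os.environ.get("MODEL", "example-model")

def ask(prompt: str) -> str:
    """Send one prompt at temperature 0 and return the model's reply text."""
    body = json.dumps({
        "model": MODEL,
        "temperature": 0,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        API_URL, data=body,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def main():
    # tasks.json: [{"prompt": "...", "expected": "..."}, ...] (placeholder format)
    with open("tasks.json") as f:
        tasks = json.load(f)
    # Crude placeholder scoring: count a task as passed if the expected
    # answer appears anywhere in the reply.
    correct = sum(1 for t in tasks if t["expected"].strip() in ask(t["prompt"]))
    record = {
        "date": datetime.date.today().isoformat(),
        "model": MODEL,
        "score": correct / len(tasks),
    }
    # Append to a committed results file so score history is public and diffable.
    with open("results.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    print(record)

if __name__ == "__main__":
    main()
```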
I usually agree with this. But I am using the same workflows and skills that were a breeze for Claude, and now they cause it to run in circles and require intervention.
This is not the same thing as an "omg, the vibes are off"; it's reproducible. I am using the same prompts and files and getting way worse results than with any other model.
Also, people who were lucky and had lots of success early on but then start to run into the actual problems of LLMs will experience that as "it was good and then it got worse," even when the model didn't actually change.
If LLMs have a 90% chance of working, there will be some who have only success and some who have only failure.
People are really failing to understand the probabilistic nature of all of this.
"You have a radically different experience with the same model" is perfectly possible with less than hundreds of thousands of interactions, even when you both interact in comparable ways.
Opus was, is, and for the foreseeable future will be a non-deterministic probability machine. The variance eventually shows up when you push it hard.
Eh, I've definitely had issues where Claude can no longer easily do what it has previously done. That's with constantly documenting things well in appropriate markdown files and resetting context here and there to keep confusion minimal.
Indeed, increasing the incentive for companies to reject (and then sometimes silently fix anyway) even valid reports would only create further misery for everyone.
This makes me think LLMs would be interesting to set up in a game of Diplomacy, an entirely text-based game that softly, rather than strictly, requires a degree of backstabbing to win.
The finding in this game that the "thinking" model never did any thinking seems odd. Does the model not always show its thinking steps? It seems bizarre that it wouldn't reach for that tool even once when it must be bombarded with seemingly contradictory information from other players.
Reading more, I'm a little disappointed that the write-up has seemingly leaned so heavily on LLMs too, because it detracts from the credibility of the study itself.
Fair point. The core simulation and data collection were done programmatically: 162 games, raw logs, win rates. The analysis of gaslighting phrases and patterns was human-reviewed. I used LLMs to help with the landing page copy, which I should probably disclose more clearly. The underlying data and methodology are solid; you can check them here: https://github.com/lout33/so-long-sucker
There was one much more successful EV, although it too was niche: The UK had "perhaps 40,000 milk floats" in the 1970s and 1980s before supermarkets took over as primary milk distributors. ( https://zavanak.com/transport-topics/british-electric-cv-his... )
You jest, but "times around the Earth" is the actual origin of the meter. Kinda.
The history is quite interesting and well worth checking out.
I can't recommend a book on the subject, but I do heartily recommend "Longitude", which is about the challenges of inventing the first maritime chronometers for the purpose of accurately measuring longitude.
That's kind of my point: ISPs use that max speed in their advertising when it isn't really relevant, especially if running at full speed would hit your cap in a minute or two.
It is relevant, though. I have 1.2 Gbps down with a 2 TB monthly cap. I've never hit the monthly cap even once, but by your standard I have "1.2 Gbps down for 3 hours, 42 minutes".
But that doesn't change the reality that it matters to me that a 20 GB video that a friend took at my wedding downloads in just 2 minutes rather than the ~30 minutes it would take if I had a 100 Mbps connection.
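For anyone who wants to check the arithmetic, a quick back-of-the-envelope sketch (decimal units, 1 GB = 8 Gb, ignoring protocol overhead):

```python
# Sanity check of the numbers above: time to hit a 2 TB cap at 1.2 Gbps,
# and time to download a 20 GB file at 1.2 Gbps vs 100 Mbps.
def seconds_to_transfer(gigabytes: float, gbps: float) -> float:
    return gigabytes * 8 / gbps

cap_time = seconds_to_transfer(2000, 1.2)   # 2 TB cap at full speed
print(f"2 TB at 1.2 Gbps: {cap_time // 3600:.0f} h {cap_time % 3600 // 60:.0f} min")

video_fast = seconds_to_transfer(20, 1.2)   # 20 GB video at 1.2 Gbps
video_slow = seconds_to_transfer(20, 0.1)   # same video at 100 Mbps
print(f"20 GB video: {video_fast / 60:.1f} min at 1.2 Gbps "
      f"vs {video_slow / 60:.1f} min at 100 Mbps")
```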