The chart hilariously reveals that GPT-5 scores a whopping 74.9% accuracy on SWE-bench Verified, but the pink bars tell the real story: 52.8% of that score is reached "without thinking," with actual "thinking" contributing only the remaining stretch. Meanwhile, OpenAI's o3 and GPT-4o trail behind at 69.1% and 30.8% respectively, apparently with no thinking involved at all. It's basically saying these AI models are mostly regurgitating patterns rather than performing genuine reasoning – the perfect metaphor for when your code works but you have absolutely no idea why.
SWE-Bench Verified: Thinking Optional
8 months ago
477,870 views
1 share
ai-memes, machine-learning-memes, gpt-memes, benchmarks-memes, software-engineering-memes | ProgrammerHumor.io