Found the daily LLM denial thread.
That tracks with my experience
You have to very carefully scope things for them and have a plan for when they inevitably screw up.
They’re great for bootstrapping in my experience, but they really fall apart when you need them to do something surgical on a larger codebase.
Mine too
I’ve been working on an app and it was fantastic for the basics; then I decided to refactor an API, and Claude Code would run for hours without really getting there.
Also a good warning: I just had to completely rewrite an MCP server I had Claude build, because when I needed to update it, the whole server was one giant if/else statement and utterly unmaintainable.
Yeah, I was trying to pull a nested React component and its styles out of a larger component that had grown to almost 1500 lines. Claude and GPT both struggled to pin down what styles were required and what that subcomponent was actually doing. And generating tests around it just made a fuck ton of spaghetti.
Which is fine. LLMs don’t have to be great at everything. But it’d be nice if people stopped saying I’m gonna be out of a job because of ’em.
I’ve noticed that in some of my bootstrapped code (also an MCP server :) ). I think they bias towards single-file solutions, so the output tends to be a lot less maintainable.
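For what it’s worth, the giant if/else server people keep describing usually refactors cleanly into a handler table. This is just a hypothetical sketch of that idea, not any real MCP framework API; the tool names and handler functions here are made up for illustration:

```python
# Hypothetical refactor: instead of one giant if/elif chain routing every
# tool call, register each handler in a dict keyed by tool name. Each tool
# lives in its own function (or its own file), which is what makes it
# maintainable later.

def handle_search(args: dict) -> str:
    # made-up example tool
    return f"searching for {args['query']}"

def handle_fetch(args: dict) -> str:
    # made-up example tool
    return f"fetching {args['url']}"

HANDLERS = {
    "search": handle_search,
    "fetch": handle_fetch,
}

def dispatch(tool_name: str, args: dict) -> str:
    handler = HANDLERS.get(tool_name)
    if handler is None:
        raise ValueError(f"unknown tool: {tool_name}")
    return handler(args)
```

Adding a tool is then one new function plus one dict entry, instead of another branch in a 1000-line conditional.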
I use LLMs for one thing only: turning my own ADHD ideas into something others can understand.
I use it to role play historical counter factuals, like how I could win the battle of Cannae through tactics, or how I could invent the telegraph in 13th century France. It’s worth every watt <3
Wait a second, this is brilliant! You can roleplay as a general in any historical battle and see if you can do it differently!
It’s easy to forget how fucking sci-fi the existence of these models is. I’m kind of excited to see where agent frameworks are in five years time, as well as a bit apprehensive…
We clearly read very different stories, in mine the computers are usually more competent than a 30% success rate.
Imagine if the internet at its inception failed to connect you 70% of the time. It’s not as impressive as most other inventions.
Don’t have to imagine it when you can just remember it. Getting online in the late 90s was a horror show; seriously, dialup was super unreliable. And that was 20 years after its inception. It was shit, but also extremely popular.
Similar to the crypto hype. Adoption is imminent, bro. Just a few more months, bro. Please, bro
Adoption is actually already there. The problem at the moment is getting people to pay for it, because providers currently lose money on each prompt, even for paying users.
Heh. Dial-up BBSes, the internet, and the like were fairly unstable way back when, not to mention expensive if you weren’t at a university. It’s come a long way, and I imagine artificial intelligence will as well. My main point was that even a 66% failure rate on complex real-world tasks didn’t seem possible even this century, just a few years ago. Transformers with attention really were a game changer in AI, and you have to be preternaturally blasé to ignore that. The problem, especially around here, has been how it’s sold (and to some extent that it’s sold at all), and the bubble the hype has formed. I don’t disagree too much with that; I just think it’s a shame that it overshadows the very exciting and slightly scary tech at the bottom of the hype well, and leads to people dismissing it as advanced autocomplete, when it’s clearly something of a different degree.