I think AI is just a massive force multiplier. If your codebase has a bad foundation and is going in the wrong direction with lots of hacks, it will just write code which mirrors the existing style... And you get exactly what OP is suggesting.
If, however, your code foundations are good, highly consistent, and never allow hacks, then the AI will maintain that clean style and it becomes shockingly good; in this case, the prompting barely even matters. The code foundation is everything.
But I understand why a lot of people are still having a poor experience. Most codebases are bad. They work (within very rigid constraints, in very specific environments) but they're unmaintainable and very difficult to extend, and require hacks on top of hacks. Each new feature essentially requires a minor or major refactoring, with more and more scattered code changes as everything is interdependent (tight coupling, low cohesion). Productivity grinds to a crawl and you need 100 engineers to do what previously could have been done with just 1. This is not a new effect. It's just much more obvious now with AI.
I've been saying this for years, but I think too few engineers have actually built complex projects on their own to understand this effect. There's a parallel with building architecture; you are constrained by the foundation of the building. If you designed the foundation for a regular single-storey house, you can't change your mind halfway through the construction process to build a 20-storey skyscraper. That said, if your foundation is good enough to support a 100-storey skyscraper, then you can build almost anything you want on top.
My perspective is if you want to empower people to vibe code, you need to give them really strong foundations to work on top of. There will still be limitations but they'll be able to go much further.
My experience is that the more planning and intelligence go into the foundation, the less intelligence and planning are required for the actual construction.
The wrinkle is that the AI doesn't have a truly global view, and so it slowly degrades even good structure, especially if run without human feedback and review. But you're right that good structure really helps.
That's not what Claude and Codex put there when you ask them to init it. Also, the global view is most definitely bigger than their tiny, lorem-ipsum-on-steroids context, so what do you do then?
You know you can put anything there, not just what they init, right? And you can reference other doc files.
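To make that concrete, here's a hypothetical sketch of a hand-written agent doc (CLAUDE.md or AGENTS.md, whichever your tool reads) that points at other docs instead of relying on whatever the tool generated; the file names and the `make test` command are invented, not from any particular project:

```
# Project notes for coding agents

- Architecture overview: see docs/architecture.md
- Coding conventions: see docs/style.md (no hacks; refactor instead)
- Never edit files under generated/ (they are build outputs)
- Run `make test` before claiming a task is done
```

The point is simply that nothing forces you to keep the auto-generated contents; a short index that links out to real documentation tends to survive context limits better than one giant file.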
I should probably stop commenting on AI posts, because when I try to help others get the most out of agents I usually just get downvoted, like now. People want to hate on AI, not learn how to use it.
This is what I’ve discovered as well. I’ve been working on refactoring a massive hunk of really poor quality contractor code, and Codex originally made poor and very local fixes/changes.
After rearchitecting the foundations (dumping Bootstrap, building easy-to-use form fields, fixing hardcoded role references 1, 2, 3…, consolidating TypeScript types, etc.) it makes much better choices without needing specific guidance.
Codex/Claude Code won't solve all your problems though. You really need to take some time to understand the codebase and fix the core abstractions before you set it loose. Otherwise, it just stacks garbage on garbage and gets stuck patching, and won't actually fix the core issues unless instructed.
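For what it's worth, the "hardcoded role references" consolidation mentioned above looks roughly like this minimal TypeScript sketch (names are hypothetical, not from the actual codebase):

```typescript
// Before: magic numbers scattered everywhere, e.g. `if (user.role === 2) ...`

// After: one shared definition that both humans and the agent can follow.
export const Roles = {
  Admin: 1,
  Editor: 2,
  Viewer: 3,
} as const;

export type Role = (typeof Roles)[keyof typeof Roles];

export interface User {
  id: string;
  role: Role;
}

export function canEdit(user: User): boolean {
  // Intent is readable; nobody has to remember what "2" meant.
  return user.role === Roles.Admin || user.role === Roles.Editor;
}
```

A single definition like this gives the agent one obvious pattern to copy, which is exactly the "good foundation" effect described elsewhere in the thread.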
Then you are boned unless it was architected well. LLMs tend to stack a lot of complexity at local scopes, especially if the neighboring pages are also built poorly.
E.g., pumping out a ton of logic to convert one data structure to another. Like a poorly structured form with random form control names that don't match the DTO. Or individual properties for each form control which are then plugged into the request DTO one by one.
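As a hypothetical illustration of that DTO point (field names invented), keeping the form model aligned with the request DTO removes the conversion logic entirely:

```typescript
// Request DTO as the backend expects it (hypothetical fields).
interface CreateOrderDto {
  customerId: string;
  quantity: number;
  notes: string;
}

// Anti-pattern: one loose variable per form control, plugged in one by one.
// const custField = ...; const qtyField = ...; const notesField = ...;
// const dto = { customerId: custField, quantity: Number(qtyField), notes: notesField };

// Instead: the form value shares the DTO's shape, so mapping is trivial.
type OrderFormValue = CreateOrderDto;

function toDto(form: OrderFormValue): CreateOrderDto {
  return { ...form };
}
```

When the shapes match, there is simply nothing for the LLM to "stack complexity" on.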
Must be my lucky day! Too bad my dream of being that while the bots are taking care of the coding is still sort of fiction.
I'd love a future where this is possible, but what we have today is more of a proof of concept. A transformative leap is required before this technology can be as useful as advertised.
We’ve always been responsible for design and validation. Nothing has changed there.
It's funny how the vibe coding story insists we shouldn't look at the code details, but when it's pointed out that the bots can't deal with a "messy" (but validated) foundation, the story changes to say we have to refactor it first.
A tangent: I keep hearing about this good base, but I've never seen one, not in the real world.
No project will have this mythical base unless it's only you working on it, you are your only client, and its scope is so rigid it's frankly useless. Over time the needs change; there's no sticking to the plan. Often it's a change that requires rethinking a major part. What we loathe as tight coupling was just efficient code under the original requirements.
Then it becomes a time/opportunity cost vs quality loss comparison. Time and opportunity always wins. Why?
Because we live in a world run by humans, who are messy and never stick to the plan. Our real-world systems (bureaucracy, government processes, the list goes on) are never fully automated and always leave gaps for humans to intervene. There's always a special case, an exception.
Perfectly architected code vs. code that just does the thing makes no real-world difference. Long-term maintainability? Your code doesn't run in a vacuum: it depends on other things, and its output is depended on by other things. Change is real, entropy is real. Even you yourself, the perfect programmer who writes perfect code, will succumb eventually and think back on all this with regret. Because you too had to choose between time/opportunity and your ideals, and you chose wrong.
Well-architected code should actually be easy to change wrt. new requirements. The point of keeping the architecture clean while you do this (which will typically require refactoring) is to make future changes similarly viable. In a world run by messy humans, accumulating technical debt is even more of a liability.
Can the AI help with refactoring a poor codebase? Can it at least provide good suggestions for improvement if asked to broadly survey a design that happens to be substandard? Most codebases are quite bad as you say, so this is a rather critical area.
My exact experience, and AI is especially fragile when you are starting a new project from scratch.
Right now I'm building an NNTP client for macOS (with AppKit), because why not, and initially I had to very carefully plan and prompt what the AI has to do, otherwise it would go insane (integration tests are a must).
Right now I have the read-only mode ready and it's very easy to build stuff on top of it. Also, I had to provide a lot of SKILLS to GPT5.3.
> On a personal project, I asked an AI agent to add a test to a specific file. The file was 500 lines before the request and 100 lines after. I asked why it deleted all the other content. It said it didn't. Then it said the file didn't exist before. I showed it the git history and it apologised, said it should have checked whether the file existed first.
Ha! Yesterday an agent deleted the plan file after I told it to "forget about it" (as in, leave it alone).
I vibe coded a retro emulator and assembler with tests. Prompts were minimal and I got really great results (Gemini 3). I tried vibe coding the tricky proprietary part of an app I worked on a few years ago; highly technical domain (yes vague don’t care to dox myself). Lots of prompting and didn’t get close.
There are literally thousands of retro emulators on GitHub. What I was trying to do had zero examples on GitHub. My takeaway is obvious by now: some stuff is easy, some not at all.
I call these "embarrassingly solved problems". There are plenty of examples of emulators on GitHub, therefore emulators exist in the latent spaces of LLMs. You can have them spit one out whenever you want. It's embarrassingly solved.
In a way it shows how poorly we have done over the years in general as programmers in making solved problems easily accessible instead of constantly reinventing the wheel. I don't know if AI is coming up with anything really novel (yet) but it's certainly a nice database of solved problems.
I just hope we don't all start relying on current[1] AI so much that we lose the ability to solve novel problems ourselves.
[1] (I say "current" AI because some new paradigm may well surpass us completely, but that's a whole different future to contemplate)
It's license washing. The code is great because it's already a problem solved by someone else. The AI can spit out the solution with no license and no attribution and somehow it's legal. I hope American tech legislation holds that same energy once others start taking American IP and spitting it back out with no license or attribution.
I tried to vibe code in a technical, not-so-popular niche and failed. Then I broke down the problem as much as I could and presented it in clearer terms, and Gemini provided working code in just a few attempts. I know this is an anecdote, but try to break down the problem you have in simpler terms and it may work. Niche, industry-specific frameworks are a little difficult to work with in vibe code mode. But if you put in a little effort, AI seems to be faster than writing code all on your own.
> I know this is an anecdote, but try to break down the problem you have in simpler terms
This should be the first thing you try. Something to keep in mind is that AI is just a tool for munging long strings of text. It's not really intelligent and it doesn't have a crystal ball.
It's called problem decomposition and agentic coding systems do some of this by themselves now: generate a plan, break the tasks into subgoals, implement first subgoal, test if it works, continue.
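A rough sketch of that loop in TypeScript, where the helpers are placeholder stubs standing in for whatever your agent framework actually provides (none of these names are a real API):

```typescript
interface Subgoal { description: string; done: boolean; }

// Placeholder stubs: in a real agent these would call the model and run the test suite.
async function planSubgoals(task: string): Promise<Subgoal[]> {
  return [
    { description: `write a failing test for: ${task}`, done: false },
    { description: `make the test pass`, done: false },
  ];
}
async function implement(goal: Subgoal): Promise<void> { /* model edits code here */ }
async function testsPass(): Promise<boolean> { return true; /* run the suite here */ }

// The loop itself: plan, take one subgoal at a time, verify, continue.
async function decomposeAndSolve(task: string): Promise<void> {
  const subgoals = await planSubgoals(task);
  for (const goal of subgoals) {
    await implement(goal);
    if (!(await testsPass())) {
      // Stop and re-plan rather than stacking fixes on a broken step.
      throw new Error(`Tests failing after: ${goal.description}`);
    }
    goal.done = true;
  }
}
```

The same decomposition works when you do it by hand in the prompt: smaller, testable steps give the model less room to wander.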
That's nice if it works, but why not look at the plan yourself before you let the AI have its go at it? Especially for more complex work where fiddly details can be highly relevant. AI is no good at dealing with fiddly.
Also re: "I spent longer arguing with the agent and recovering the file than I would have spent writing the test myself."
In my humble experience arguing with an LLM is a waste of time, and no-one should be spending time recovering files. Just do small changes one at a time, commit when you get something working, and discard your changes and try again if it doesn't.
I don't think AI is a panacea, it's just knowing when it's the right tool for the job and when it isn't.
Anyone not using version control or an IDE that will keep previous versions for an easy jump back is just being silly. If you're going to play with a kid who has a gun, wear your plates.
I don’t think it’s “just” that easy. AI can be great at generating unit tests but it can and will also frequently silently hack said tests to make them pass rather than using them as good indicators of what the program is supposed to be doing.
People need to consider / realize that the vast majority of source code training data is GitHub, GitLab, and essentially the huge sea of started, maybe completed, student and open source projects. That large body of source code is for the most part unused, untested, and unsuccessful software of unknown quality. That source code is AI's majority training data, and an AI model in training has no idea what is quality software and what is "bad" software. That means the average source code generated by AI is not necessarily good software. Considering it is an average of algorithms, it's surprising generated code runs at all. But then again, generating code that compiles is actually trainable, so what is generated can receive extra training support. However, that does not improve the quality of the source code training data, just the fact that it will compile.
I don't think it makes any part harder. What it does do is expose what people have ignored their whole career: the hard part. The last 15 years of software development has been 'human vibe coding'; copy+pasting snippets from SO without understanding them, no planning, constant rearchitecting, shipping code to prod as long as it runs on your laptop. Now that the AI is doing it, suddenly people want to plan their work and enforce tests? Seems like a win-win to me. Even if it slows down development, that would be a win, because the result is enforcement of better quality.
I'm working on a paper connecting articulatory phonology to soliton physics. Speech gestures survive coarticulatory overlap the same way solitons survive collision. The nonlinear dynamics already in the phonetics literature are structurally identical to soliton equations. Nobody noticed because these fields don't share conferences.
The article's easy/hard distinction is right but the ceiling for "hard" is too low. The actually hard thing AI enables isn't better timezone bug investigation LOL! It's working across disciplinary boundaries no single human can straddle.
Yep, it's why getting the work over the threshold takes just as long as it did without AI.
Someone mentioned it is a force multiplier, and I don't disagree with this: it is a force multiplier in the mundane and ordinary execution of tasks. Complex ones get harder and harder for it; humans can visualize the final result where AI can't. It is predicting from input, but it can't know the destination output if the destination isn't part of the input.
Totally agree on AI-assisted coding resulting in randomly changed code. Sometimes it's subtle and other times entire methods are removed. I have moved back to just using a JetBrains IDE and copying files into Gemini so that I can limit context. Then I use the IDE to inspect changes in a git diff, regression test everything, and after all that, commit.
Some time back, my manager at the time, who shall remain nameless, told the group that having AI is like having 10 people work for you (he actually used a slightly smaller number, but it was said almost word for word like in the article), with the expectation set as: 'you should now be able to do 10x as much'.
Needless to say, he was wrong and gently corrected over the course of time. In his defense, his use cases for LLMs at the time were summarizing emails in his email client.. so..eh.. not exactly much to draw realistic experience from.
I hate to say it, but maybe Nvidia's CEO is actually right for once. We have a 'new smart' coming to our world: the type of person that can move between the worlds of coding, management, projects, and CEOing with relative ease and translate between those worlds.
> his use cases for LLMs at the time were summarizing emails in his email client
Sounds just like my manager. Though he has never made a proclamation that this meant developers should be 10x as productive or anything along those lines. On the contrary, when I made a joke about LLMs being able to replace managers before they get anywhere near replacing developers, he nearly hyperventilated. Not because he didn't believe me, but because he did, and had already been thinking that exact thought.
My conclusion so far is that if we get LLMs capable of replacing developers, then by extension we will have replaced a lot of other people first. And when people make jokes like "should have gone into a trade, can't replace that with AI" I think they should be a little more introspective; all the people who aspired to be developers but got kicked out by LLMs will be perfectly able to pivot to trades, and the barrier to entry is low. AI is going to be disruptive across the board.
I have half-jokingly talked about getting management, CEOs and board members replaced by LLMs. After all, at the very least, they are actually tested to ensure they do have guardrails to not do anything illegal and to shy away from unethical activities.
Daily agentic user here, and to me the problem here is the very notion of "vibe coding". If you're even thinking in those terms - this idea that never looking at the code has become a goal unto itself - then IMO you're doing LLM-assisted development wrong.
This is very much a hot take, but I believe that Claude Code and its yolo peers are an expensive party trick that gives people who aren't deep into this stuff an artificially negative impression of tools that can absolutely be used in a responsible, hugely productive way.
Seriously, every time I hear anecdotes about CC doing the sorts of things the author describes, I wonder why the hell anyone is expecting more than quick prototypes from an LLM running in a loop with no intervention from an experienced human developer.
Vibe coding is riding your bike really fast with your hands off the handles. It's sort of fun and feels a bit rebellious. But nobody who is really good at cycling is talking about how they've fully transitioned to riding without touching the handles, because that would be completely stupid.
We should feel the same way about vibe coding.
Meanwhile, if you load up Cursor and break your application development into bite sized chunks, and then work through those chunks in a sane order using as many Plan -> Agent -> Debug conversations with Opus 4.5 (Thinking) as needed, you too will obtain the mythical productivity multipliers you keep accusing us of hallucinating.
It's pretty difficult to say what it's going to be three months from now. A few months ago Gemini 2.x in IDEA and related IDEs had to be dragged through coding tasks and would create dumb build time errors on its way to making buggy code.
Gemini in Antigravity today is pretty interesting, to the point where it's worth experimenting with vague prompts just to see what it comes up with.
Coding agents are not going to just change coding. They make a lot of detailed product management work obsolete, and smaller team sizes will make it imperative to reread the agile manifesto and discard scrum dogma.
If the "hard part" is writing a detailed spec for the code you're about to commit to the project, AI can actually help you with that if you tell it to. You just can't skip that part of the work altogether and cede all control to a runaway slop generator.
The pattern matching and absence of real thinking are still strong.
Tried to move some Excel generation logic from the EPPlus library to ClosedXML.
ClosedXML has basically the same API, so the conversion was successful. Not a one-shot, but relatively easy with a few manual edits.
But ClosedXML has no batch operations (like applying a style to an entire column): the API is there, but the internal implementation works on a cell-by-cell basis. So if you have 10k rows and 50 columns, every style update is a slow operation.
Naturally, I told Codex 5.3 at max thinking level all about this. The fucker still succumbed to range updates here and there.
Told it explicitly to make a style cache and reuse styles for cells on the same y axis.
5-6 attempts — fucker still tried ranges here and there. Because that is what is usually done.
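The fix being asked for is roughly the following, shown as a TypeScript-flavoured sketch with an invented minimal cell API (ClosedXML is .NET and its real API isn't reproduced here): compute each column's style once and reuse the same object per cell, instead of range-applying or re-deriving it.

```typescript
// Invented minimal types standing in for the spreadsheet library.
interface CellStyle { bold: boolean; numberFormat: string; }
interface Cell { applyStyle(style: CellStyle): void; }

// Build each column's style once and reuse the same object for every row.
const columnStyleCache = new Map<number, CellStyle>();

function styleForColumn(col: number): CellStyle {
  let style = columnStyleCache.get(col);
  if (!style) {
    style = { bold: col === 0, numberFormat: col > 0 ? "#,##0.00" : "@" };
    columnStyleCache.set(col, style);
  }
  return style;
}

function styleSheet(cells: Cell[][]): void {
  for (const row of cells) {
    row.forEach((cell, col) => cell.applyStyle(styleForColumn(col)));
  }
}
```

Whether this is faster depends entirely on the library's internals, which is exactly the kind of context the model kept ignoring in favour of "what is usually done".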
It seems like a big part of the divide is that people who learned software engineering find vibe coding unsuitable for any project intended to be in use for more than a short while, whereas those who only learned coding think vibe coding is the next big thing, because they never have to deal with the consequences of the bad code.
Yes. If you have some experience, you know that writing code is a small part of the job, and a much bigger chunk is anticipating and/or dealing with problems.
People seem to think engineers like "clean code" because we like to be fancy and show off.
Nah, it's clean like a construction site. I need to be able to get the cranes and the heavy machinery in and know where all the buried utilities are. I can't do that if people just build random sheds everywhere and dump their equipment and materials where they are.
Training is the process of regressing to the mean with respect to the given data. It's no surprise that it wears away sharp corners and inappropriately fills recesses of collective knowledge in the act of its reproduction.
As usual, the last 20% needs 80% of the effort and the other 80% needs 20%. But my god did AI make my BS corpo easily repeatable shit work (skimming docs, writing summaries, skimming Jira and Confluence, and so on) actually easier, and for 90% of BS CRUD app changes the first draft is also already pretty good. TBH, I don't write hard/difficult code more than once a week/month.
I've seen some discussions and I'd say there are lots of people who are really against the hyped expectations from the AI marketing materials, not necessarily against the AI itself. Things people are against that look anti-AI, but are not directly about the AI itself:
- Being forced to use AI at work
- Being told you need to be 2x, 5x or 10x more efficient now
- Seeing your coworkers fired
- Seeing hiring freezes because businesses think no more devs are needed
- Seeing business people make a mock UI with AI and boasting how programming is easy
- Seeing those people ask you to deliver in impossible timelines
- Frontend people hearing from backend how their job is useless now
- Backend people hearing from ML Engineers how their job is useless now
- etc
When I dig a bit about this "anti-AI" trend I find it's one of those and not actually against the AI itself.
The most credible argument against AI is really the expense involved in querying frontier models. If you want to strengthen the case for AI-assisted coding, try to come up with ways of doing that effectively with a cheap "mini"-class model, or even something that runs locally. "You can spend $20k in tokens and have AI write a full C compiler in a week!" is not a very sensible argument for anything.
The cost argument is a fallacy, because right now, either you have a trained human in the loop, or the model inevitably creates a mess.
But regardless, services are extremely cheap right now, to the point where every single company involved in generative AI is losing billions. Let's see what happens when prices go up 10x.
If you keep digging, you will also find that there's a small but vocal sock puppet army who will doggedly insist that any claims to productivity gains are in fact just hallucinations by people who must not be talented enough developers to know the difference.
It's exhausting.
There are legitimate and nuanced conversations that we should be having! For example, one entirely legitimate critique is that LLMs do not tell their users that they are pulling in libraries whose maintainers are seeking sponsorship. This is something we could be proactive about fixing in a tangible way. Frankly, I'd be thrilled if agents could present a list of projects that we could consider clicking a button to toss a few bucks to. That would be awesome.
But instead, it's just the same tired arguments about how LLMs are only capable of regurgitating what's been scraped and that we're stupid and lazy for trusting them to do anything real.
> I wonder if the people who are against it haven't even used it properly.
I swear this is the reason people are against AI output (there are genuine reasons to be against AI without using it: environmental impact, hardware prices, social/copyright issues, CSAM (like X/Grok))
It feels like a lot of people hear the negatives, try it, and are cynical of the result. Things like counting 2 r's in "strawberry" and drawing 6-10 fingers on one hand led to multiple misinterpretations of the actual AI benefit: "Oh, if AI can't even count the number of letters in a word, then all its answers are incorrect" is simply not true.
I'm similarly bemused by those who don't understand where the anti-AI sentiment could come from, and "they must be doing it wrong" should usually be a bit of a "code smell". (Not to mention that I don't believe this post addresses any of the concrete concerns the article calls out, and makes it sound like much more of a strawman than it was to my reading.)
To preempt that on my end, and emphasize I'm not saying "it's useless" so much as "I think there's some truth to what the OP says", as I'm typing this I'm finishing up a 90% LLM coded tool to automate a regular process I have to do for work, and it's been a very successful experience.
From my perspective, a tool (LLMs) has more impact than how you yourself directly use it. We talk a lot about pits of success and pits of failure from a code and product architecture standpoint, and right now, as you acknowledge yourself in the last sentence, there's a big footgun waiting for any dev who turns their brain off too hard. In my mind, _this is the hard part_ of engineering: keeping a codebase structured, guardrailed, well constrained, even with many contributors over a long period of time. I do think LLMs make this harder, since they make writing code "cheaper" but not necessarily "safer", which flies in the face of mantras such as "the best line of code is the one you don't need to write." (I do feel the article brushes against this where it nods to trust, growth, and ownership.) This isn't hypothetical either, but something I've already seen in practice in a professional context, and I don't think we've figured out silver bullets for it yet.
While I could also gesture at some patterns I've seen where there's a level of semantic complexity these models simply can't handle at the moment, and no matter how well architected you make a codebase after N million lines you WILL be above that threshold, even that is less of a concern in my mind than the former pattern. (And again the article touches on this re: vibe coding having a ceiling, but I think if anything they weaken their argument by limiting it to vibe coding.)
To take a bit of a tangent with this comment though: I have come to agree with a post I saw a few months back, that at this point LLMs have become this cycle's tech-religious-war, and it's very hard to have evenhanded debate in that context, and as a sister post calls out, I also suspect this is where some of the distaste comes from as well.
> It's so intriguing, I wonder if the people who are against it haven't even used it properly.
I feel like this is a common refrain that sets an impossible bar for detractors to clear. You can simply hand wave away any critique with “you’re just not using it right.”
If countless people are “using it wrong” then maybe there’s something wrong with the tool.
When it comes to new emerging technologies everyone is searching the space of possibilities, exploring new ways to use said technologies, and seeing where it applies and creates value. In situations such as this, a positive sign is worth way more than a negative. The chances of many people not using it the right way are much much higher when no one really knows what the “right” way is.
It then shows hubris and a lack of imagination for someone in such a situation to think they can apply their negative results to extrapolate to the situation at large. Especially when so many are claiming to be seeing positive utility.
> If countless people are “using it wrong” then maybe there’s something wrong with the tool.
Not really. Every tool in existence has people that use it incorrectly. The fact that countless people find value in the tool means it probably is valuable.
I had Claude read a 2k LOC module on my codebase for a bug that was annoying me for a while. It found it in seconds, a one line fix. I had forgotten to account for translation in one single line.
That's objectively valuable. People who argue it has no value or that it only helps normies who can't code or that sooner or later it will backfire are burying their heads in the sand.
HN has a huge anti AI crowd that is just as vocal and active as its pro AI crowd. My guess that this is true of the industry today and won’t be true of the industry 5 years from now: one of the crowds will have won the argument and the other will be out of the tech industry.
Vibe coding and slop strawmen are still strawmen. The quality of the debate is obviously a problem.
I don’t understand why people are so resistant to the idea that use cases actually matter here. If someone says “you’re an idiot because you aren’t writing good, structured prompts,” or “you’re too big of an idiot to realize that your AI-generated code sucks” before knowing anything about what the other person was trying to do, they’re either speaking entirely from an ideological bias, or don’t realize that other people’s coding jobs might look a whole lot more different than theirs do.
We don't know anything about the commenters other than that they aren't getting the same results with AI as we are. It's like someone complaining that since they can't write fast code, you shouldn't be able to either.
What we call AI at the heart of coding agents is the averaged "echo" of what people have published on the web that has (often illegitimately) ended up in training data. Yes, it can probably spit out some trivial snippets, but nothing near what's needed for genuine software engineering.
Also, now that StackOverflow is no longer a thing, good luck meaningfully improving those code agents.
Exactly this. Everything I've seen online is generally "I had a problem that could be solved in a few dozen lines of code and I asked the AI to do it for me and it worked great!"
But what they asked the AI to do is something people have done a hundred times over, on existing platform tech, and it will likely have little to no capability to solve problems that come up 5-10 years from now.
The reason AI is so good at coding right now is the 2nd dot-com tech bubble that occurred with the simultaneous release of mobile platforms and the massive expansion of cloud technology. But as the platforms that existed during that time cease to exist, because it's no longer profitable to put something out there, the AI platforms will become less and less relevant.
Sure, sites like Reddit will probably still exist, where people will begin to ask more and more about things the AI can't help with, and subsequently the AI will train off of that information; but the rate of that information is going to go down dramatically.
In short, at some point the AI models will be worthless and I suspect that'll be whenever the next big "tech revolution" happens.
> the more planning and intelligence go into the foundation, the less intelligence and planning are required for the actual construction

And it requires a bit of prompt engineering, like using caps for some stuff (ALWAYS), etc.
> People want to hate on AI, not learn how to use it.

Why not give the punters what they want?
A poor foundation is a design problem. Throw it away and start again.
> What I was trying to do had zero examples on GitHub.

There are no examples of what you tried to do.
Not here yet. Maybe in a year. Maybe never.
> Let's see what happens when prices go up 10x.

Whatever the value/$ is now, do you really think it is going to be constant?
> If countless people are "using it wrong" then maybe there's something wrong with the tool.

Doesn't mean the hammers are bad, no matter how many people join the community.
You need to learn how to use the tools.
Doesn’t mean the tool is actually useful, no matter how many people join the community.
If only there were things called comments, clean-code, and what have you