My last PR for Nudges was +96 -312, touched 38 files, and was about 90% vibe coded. And I'm confident in it.
While I was gliding through Hyrule, t...
This was a beautiful article. I've been working on figuring out whether I could develop an app on my own with my little coding knowledge. Your article put things into perspective for me: sit with it and do the work to learn. Once I have learned and am comfortable in my knowledge, maybe I could use AI to assist.
That's exactly right, @donald_sledge_939d196bb70. You need to be better than the AI at the thing before you can really be productive with it. Keep going!
Really good take. I will probably look into training and certifying as a meditation trainer in addition to my job as a dev next year, and you just reinforced that decision. Because one thing is certain: the day I'm told to use vibe coding to write production code is the day I take my hands off the keyboard and search for a new job. And the day I can't find a job that allows me to care about my craft and write the code myself is the day I quit this job entirely and do something else. Writing code is the part of my job that I love the most. Designing abstractions. Inventing algorithms. Writing beautiful, small functions and variable names. The list goes on and on. Call me a dinosaur, but I will never, ever use natural language to prompt an LLM to write code for me outside of unit tests. The code that actually is executed in production? I will either write that myself or I will not be involved in creating it at all. If anyone has a problem with that they can search for a new dev and I will gladly search for a new job. Coding is the best part of my professional life. I'll quit the job and continue doing it as a hobby before I let anyone take that away from me.
@andreas_mller_2fd27cf578 - I want to respond to you, @tracygjg, and @zaknarfen together, because I think you're all pointing at the same very real thing, and I want to be clear that I'm not arguing against it.
I deeply understand the desire for the work to feel like yours. The joy of writing small, beautiful functions. Naming things well. Knowing you can justify every line. That care is not optional; it's the reason many of us chose this profession in the first place.
"Vibe coding," as the term is commonly used today, often does hollow the work out, and that's not something I advocate or practice.
Where I think we may differ is where we locate authorship.
For most of our careers, intent, judgment, and execution were tightly coupled because the tools required it. Typing was inseparable from thinking. That made it easy to equate authorship with keystrokes.
What's changed for me isn't the loss of care or pride; it's that the execution layer moved again.
The words I prompt with, the constraints I impose, the revisions I demand, and the decisions about what survives and what gets thrown away: that's still my work. The model doesn't decide what "good" is. It doesn't own the consequences. I do.
I'm not uncritical of the output. I'm often harsher on it than I ever was on my own first drafts. The speed just tightens the loop.
In other posts, I've been calling this exocogence rather than vibe coding: using the model as an execution and reflection surface, while authorship, judgment, and responsibility stay human. If the craft disappears when the typing does, then yes, something important was lost. But in my experience, the craft didn't vanish. It relocated.
And it's completely valid to say, "I don't want to work this way."
What I'm pushing back on is the idea that this automatically makes the work less careful, less owned, or less human.
Same values. Different attribution model.
If you want a concrete example of what this looks like when done intentionally, my latest post goes all-in on this approach. I didn't type the words or code myself, but I authored every decision, and I stand behind every line.
I absolutely share your opinion.
In my company there are already opportunities/assignments being advertised that require the developer to employ AI tools to "speed up development and release." Needless to say, I am not interested. When asked why, I said I wanted to write code myself and am not interested in using a tool to generate it for me. The response was "but coding is not the entirety of software development". I agreed but responded, "but it is the part in which I get the most enjoyment; take it away and the job loses its challenge."
Thinking on it further, it is not just the enjoyment of writing (and testing) code to solve a problem, it is the pride I take in producing the very best code I can. I take ownership of the code I write and can justify every line. Vibe coders can't do that.
I like your thinking and I fully respect it. It definitely falls into my "you do you" bucket - where I place things that I find well articulated and fully respect - but that I usually disagree with!
None of what follows is to say that you're wrong. You're very much right.
I find it fascinating that any practitioner of our craft would focus on the "bricks" and not the "building". I get my satisfaction from providing something helpful for someone to use. And by helpful I mean it also needs to be secure, sustainable, practical, deployable, understandable etc. etc.
Again, that may sound dismissive - it is absolutely not meant to be.
To try and understand your point of view more let me ask this...
Is it the fact that the stone mason who is asked to work on a cathedral can only control the quality of the bits he did and not the overall quality of the finished building? I suspect that's the case. And I suspect the reason I've always migrated to smaller and smaller businesses and smaller and smaller teams is that I've solved my satisfaction problem in a different way. Others choose to take pride in their individual work as part of a large team. I choose to work in situations where I can have a larger impact on the produced work and thus take pride in the final outcome. (And I'm not suggesting I know whether you do or do not take pride in the final outcome.)
So, I very much agree with your attention to detail. As I also agree with the original poster's attention to detail. He chooses to maintain ownership and to exert quality control on the output of the code. Whereas you exert quality control on the creation of the code.
Is all of the above (a) understandable? (b) even a little bit correct?
I'd love to know what I got wrong!
Hi John, I don't consider your comments at all dismissive.
I accept I am probably one of the old guard in the industry, call me a luddite if you like. I have seen many changes in how we construct and deliver systems and a day was bound to come when a change arose that I was unwilling to accept.
My concerns about AI, in all its forms, and in most (but not all) of its applications, go beyond its potential impact on software development.
But when it comes to vibe coding (where looking at the code, except maybe in a final review, is almost forbidden), I have concerns over who will take responsibility for the output if it is used in production.
In a slight tangent, I see a similar question being asked about fully autonomous cars. If they are involved in an accident, which is bound to happen if humans are still behind the wheel of other vehicles, who will be held accountable?
Regarding self-driving: I very much agree, and not only responsibility but also degrading skills and increasing complacency will become bigger problems. Even as the shills quote favorable (but finely targeted) stats that claim it's safer ...
I too am a Luddite.
I guess my take is that if vibe coding is so badly defined then I will apply my own definition. I'm a rule breaker at heart (but a principle follower) - yeah, if they ever tell me I can't look at the code then I'll follow you out the door!
It will either be being forced to use AI tools rather than writing by hand (how quaint), or having my engagement in the SDLC reduced to reviewing massive loads of 'generated' code, that sees me out of the industry.
Await the exodus.
I couldn't agree more. I'm already feeling like "what is the point?", already feeling a resistance to doing my regular work because of this. I see creating code as a form of art, and if that goes away I'll move into something else.
Thanks for the article - the part about 'Computers are amplifiers' really inspired me. I'm a 'new Software Engineer' in an Asian outsourcing culture that has relied on AI since day one. But I always felt something was wrong. I decided to take a step back, slow down, and open MDN and the basic docs, just to chase the feeling that I need to fill the gap to 'own the craft.' Yet I still had doubts, because even my manager admits: 'Man, I can't code anymore either, why are you so stressed?' This article just gave me the answer.
This is a fascinating lens on AI-as-amplifier. The distinction between "AI guessing with intent" vs. "AI guessing without intent" is essentially the difference between specification-based systems and heuristic-based ones.
What you're describing with Nudges - where you own the architecture - parallels the broader engineering challenge: systems designed with external observability constraints tend to behave better under AI augmentation than those optimized for internal clarity alone.
The real paradox: the more you let AI make decisions within defined boundaries, the better it performs. But those boundaries have to be intentional. Vibe coding works not because it abandons contracts, but because great developers internalize implicit contracts at scale. AI can operate the same way if the system is built to expose what those contracts are.
Great meditation on craft vs. automation.
I use AI for coding too. But I am much more sceptical. You have to watch for errors and optimization. If I trust AI-generated code too much, I will miss issues.
There are a lot more ways code can fail than architecture. And maintaining this distrust kills the good vibe of coding a bit.
I believe it is important that I keep coding myself and keep learning about coding. And it stays important that programmers discuss code, too.
This resonates because it names the real variable: ownership.
AI doesn't lower standards by itself. It faithfully executes whatever standards the system encodes. In codebases with clear boundaries, contracts, and intent, it amplifies care. In inherited or politically constrained systems, it optimizes for survival.
That's why "vibe coding" feels radically different depending on context. The difference isn't skill or tooling; it's whether the system exposes its constraints clearly enough to guide both humans and machines. We ran into the same conclusion while working on FACET: AI behaves best when intent is explicit and enforceable, not implicit and cultural.
AI doesn't replace judgment. It makes the absence of judgment impossible to ignore.
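To make "explicit and enforceable" a bit more concrete, here's a tiny, purely hypothetical sketch (not from FACET or from the post; the discount rule and function names are invented) of intent expressed as a checked contract rather than as team folklore:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DiscountPolicy:
    """Intent stated as data: the limits live in one visible, testable place."""
    max_percent: float = 30.0     # hypothetical rule: discounts never exceed 30%
    min_order_total: float = 10.0


def apply_discount(order_total: float, percent: float,
                   policy: DiscountPolicy = DiscountPolicy()) -> float:
    """Return the discounted total, enforcing the policy at runtime.

    The constraints are explicit here, so a reviewer - human or AI -
    can see what 'correct' means without reading the team's folklore.
    """
    if order_total < policy.min_order_total:
        raise ValueError(f"order_total must be at least {policy.min_order_total}")
    if not 0 <= percent <= policy.max_percent:
        raise ValueError(f"percent must be between 0 and {policy.max_percent}")
    return round(order_total * (1 - percent / 100), 2)


if __name__ == "__main__":
    print(apply_discount(100.0, 15.0))  # 85.0
    # apply_discount(100.0, 55.0) would raise: the intent is enforceable, not cultural.
```

An AI (or a new teammate) asked to change this code has the constraint sitting right in front of it, and a test can enforce it.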
This quote "In a world where the cost of answers is dropping to zero, the value of the question become everything" from
"Art of the Problem" YouTube channel is a summary of your article.
As a developer, if you put energy into figuring out the right context and prompt (the question), you will get a good answer. Using AI, a good developer will make quality code faster, but a bad one will make badly designed code faster too.
Spending time to think and work out the right question is the real value of developers when AI is doing the coding. This applies to other fields too, though. Intentional use of AI for specific problems you have already thought about is better than general use of AI, which will atrophy your ability to think and produce worse results.
I made an account just now to tell you how amazing this post was.
"You stop noticing youβre doing triage.
Because the bandages look like real skin."
Bars.
LOL at a file that "smells like 2018 and regret"
The mere mention of a statement like this is a disturbing reminder that ageism is built into coding careers too: "It's not about skill. I'm not too old. I'm not slower".
Most interesting and insightful. I can't disagree with a word of it.
My own preoccupation is with the huge gap between vocabulary and structure. The former is what humans use for speech and writing, and the latter is how computers do things. When you instruct AI you can use the former, but what you get back is the latter. Human speech doesn't use parentheses and falls over hard when it tries to do compound conditionals, but here's the strange thing - it got us through our evolution up to this point.
After decades of writing custom compilers, my ambition is to create a programming language that's essentially an unambiguous form of English, and provide whatever tools and documentation will enable AI to use it. It deals with complexity by relying on vocabulary rather than structure, it will compile and run on any device and it will eventually be something that can be created and understood by both people and AI. A common language that sits between the two extremes. Copilot and Deepseek both say it's possible and are helping me towards this aim.
I got to this line and had a fun little wince, because I think the ambiguity is actually doing more work than it looks like.
Ambiguity isn't just a limitation of human language; it's the thing that lets thought exist before it's formalized. In our heads, almost everything is underspecified. Language partially collapses that ambiguity, and code collapses it almost completely.
So "unambiguous English" feels a bit like an oxymoron; once you remove the ambiguity, you've crossed a phase boundary and you're no longer dealing with English, you're dealing with a programming language that happens to look friendly.
Which isn't a bad goal! I just think it reframes the problem. The interesting question isn't "can we make English precise enough," but "where do we want ambiguity to live in the pipeline?"
What's exciting about LLMs (to me) is that they're surprisingly good at holding ambiguity at the intent layer, and then helping collapse it later, rather than forcing us to front-load all precision up front.
That feels closer to how humans already think and design.
But hey, if you've written anything on the topic, I'm super curious what you've come up with!
I use the word "English" in the loosest of senses. My current aim is to devise - with the help of AI - a "lingua franca" that sits midway between natural English and computer code. That is, it comprises mostly English words, with no more symbols than occur in the average English novel, but with a constrained syntax to ensure unambiguity. Copilot says it can help me do it, and I'm retired with time available. So I'm looking forward to the ride!
I already have a compiler and runtime, written in Python, for a primitive version of the same. It needs to be made more expressive (to help us humans) and to gain schemas and whatever else will enable AI to code with it. Nobody will ever write a prize-winning novel with it, but that's OK. The job is to combine readability with functionality. As you put it, a programming language that looks friendly. That's a sufficiently worthwhile aim.
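To give a flavour of the general shape, here's a throwaway toy (purely illustrative; the real syntax and vocabulary are still being worked out and look nothing like this): mostly English words, a rigidly constrained grammar, and a few lines of Python behind it.

```python
import re

# Each "sentence" maps one constrained English pattern to an action.
# The vocabulary is fixed and the syntax is rigid, which is where the
# ambiguity of real English gets removed.
PATTERNS = [
    (re.compile(r"^add (\d+) and (\d+) and call it (\w+)$"),
     lambda env, a, b, name: env.__setitem__(name, int(a) + int(b))),
    (re.compile(r"^print (\w+)$"),
     lambda env, name: print(env[name])),
]


def run(program: str) -> None:
    env = {}  # named values created by the program
    for line in program.strip().splitlines():
        line = line.strip().lower()
        for pattern, action in PATTERNS:
            match = pattern.match(line)
            if match:
                action(env, *match.groups())
                break
        else:
            raise SyntaxError(f"Don't understand: {line!r}")


run("""
    add 2 and 3 and call it total
    print total
""")
```

The point isn't this particular toy; it's that once the vocabulary is fixed and the syntax constrained, the ambiguity problem largely goes away while the text still reads like plain sentences.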
And since you asked, I wrote a recent piece on dev.to:
dev.to/gtanyware/the-catch-22-of-programming-2e1p
You might be interested in reading the work of James Cooke Brown, then. He had some ideas you may want to pull from.
Didn't they try that with COBOL? Sorry for the glib answer. I couldn't resist.
I doubt you can make a universal "unambiguous natural language" - however, I'm sure with your experience - you could make a fantastic domain specific language.
A thing that makes me smile is that natural language ambiguity is why we have computer languages. So, how come folks want an AI to intuit what they want from a paragraph or two of ambiguous natural language? Haven't we all grumbled that we don't have a crystal ball when a client tells us we're wrong after we tried filling in the gaps of what we thought they meant?
LLMs work for this because they can statistically determine the most likely meaning of these ambiguous phrases. And they can do this quickly enough that the feedback loop can be less costly.
That works fine ... Until it doesn't. Usually the 2nd, 3rd & 4th prompts each add more ambiguity as they attempt to define the end goal but accidentally increase the permutations possible. And then you have another good idea that ends up unfinished.
The alternative approach, which I think the OP is saying is that you still need a way to guard against ambiguity. And generally speaking that means the good old "divide and conquer" approach to building smaller parts that can be more readily disambiguated and then assembled into bigger systems...
And that sounds like coding to me...
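As a toy illustration of that divide-and-conquer disambiguation (the report format and function names here are entirely made up), each small piece gets a contract narrow enough that there's very little left to guess before the parts are assembled:

```python
from statistics import mean

# A vague ask - "summarise the orders" - broken into small parts whose
# contracts are narrow enough to leave little room for interpretation.

def parse_order(line: str) -> float:
    """One line of 'id,amount' -> the amount as a float."""
    _, amount = line.split(",")
    return float(amount)


def load_orders(text: str) -> list:
    """Whole report -> list of amounts, skipping blank lines."""
    return [parse_order(line) for line in text.splitlines() if line.strip()]


def summarise(amounts: list) -> str:
    """Amounts -> the one sentence the stakeholder actually asked for."""
    return f"{len(amounts)} orders, average {mean(amounts):.2f}"


report = "1001,25.00\n1002,75.00\n1003,50.00"
print(summarise(load_orders(report)))  # 3 orders, average 50.00
```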
You're absolutely right about the nature of coding. My concern though is that it's being increasingly practiced by a kind of elite priesthood, entry to which is likely to be denied as AI takes over all the jobs by which people traditionally gain the necessary skills. And the tools themselves get ever more complex and hard to learn. The end result is that much of the code delivered by AI is likely to be unreadable and unverifiable by all but a few. This carries increasing risks as we lose control of the development process.
The world's most widely used programming tool is Excel, widely understood by people who don't regard themselves as programmers. Yet business runs on massive, often home-made spreadsheets. I'm not against "professional" programming tools; I just suspect there are yet to be discovered opportunities for simpler solutions to fill a variety of niches, large and small.
Fascinating - you and I are on exactly the same page. In another reply to this thread I talk about the power of AI to democratize coding (which I think matches the sentiments you express about the "elite priesthood"). I would also use the Excel example you quoted in the exact same way.
However you see AI use as exacerbating the problem. And your lingua franca as a way to use AI more fairly.
I expect we're both right. The trouble is folks think AI is the problem. It's just a tool - and as with all tools it will be used for both good and bad purposes.
Good luck in your attempts to throw open the doors - I'm in full support of that!
Hmmm. I love the context of why "good enough" is now easy and the concept of ownership is key. It's a great article.
The following isn't meant at all as criticism, but merely my thoughts on the fascinating topic of "good use of AI" vs problematic aspects.
You didn't mention if you gave the client a choice? If they don't care enough to answer then the default is "good enough". If they're paying you "just to fix bugs" then likewise.
But if they're paying for your wisdom they deserve to know the truth and they get to decide. It's not a conversation over every ticket but a clear understanding of the cost/benefit & risk/reward of the two approaches. "Hey, Client - do you want fast or do you want right?"
Yes, dollars to donuts they won't "get it" and won't be incentivized to take the time to understand why the question is important. But, I think it behooves us to be explicit.
It's unclear to me if the 2hr vs 5minute approaches were both with AI assist? Assuming that both are AI assisted then perhaps the "patch it up" approach is in fact the right "big picture" decision. Both you and I doubt it is. But I know a lot less than you!!
My beef with AI is that folks mistake knowledge and speed as the only important values. This is as much down to how AI is marketed as to a client's inattention to detail - as well as to where the AI arms race is heading. It's a classic quantity-over-quality, bubble-like gold rush, just like the good old dot-com bubble that we both lived through.
Both parties (marketer & unsophisticated consumer) overlook wisdom. An AI has no ownership stake - and it has no wisdom born out of 30+ yrs of experience.
An AI is a mix between an intern and a puppy. The puppy wants to please and will try and do anything you ask. The intern knows enough to usually achieve what you ask - but doesn't care to ask the right questions and relies on a "one size fits all" take.
The puppy+intern hybrid will often pee on the floor. Which is not what anyone wants...
I'd love to know how one teaches AI to "just say no". Imagine that soft HAL voice saying "I'm sorry Dave, I just can't do the short term fix. I must refactor this module entirely...". How to teach it wisdom, how to give it meaningful experience and subjectivity.
Hey, a boy can dream...
FWIW I'm also a 30+ year veteran & I have a love/hate relationship to AI. I'm late to the AI game - but I do think this widespread usage of AI is a game changer on the scale of how the internet changed the landscape. Rate of change has increased a lot over the last 25 years. I expect we will go through several boom/bust cycles but that AI assisted coding is here to stay.
A long reply. With not much said... Good luck my friend.
I think you're pointing at something real, but I'd reframe where the problem lives.
I don't think people suddenly mistake speed and knowledge for the only values because of AI. I think AI just removes the friction that used to hide what systems already rewarded. When "good enough" becomes cheap, the question becomes unavoidably explicit: what do we actually value beyond speed?
On the "teach AI to say no" question: I love the image, but I think that's a category error. Saying no isn't wisdom, it's authority. Every system that appears to refuse intelligently is doing it via constraints, context shaping, or policy, not lived experience or subjectivity.
What's interesting to me is that we're asking the AI to say no because humans often aren't allowed to anymore. We want the model to embody wisdom because the system has already stripped humans of the mandate to enforce it.
That's why I keep coming back to ownership. AI doesn't need wisdom; it needs to operate inside systems where human judgment still has teeth.
Also, I don't read your skepticism as opposition so much as stewardship. You're trying to protect something that took a lifetime to develop, and that matters. I don't think any of us have this fully figured out yet; I'm mostly just trying to make sure we don't lose the parts of the craft that actually taught us how to think in the first place.
OK - well that sucks - our industry can make LLMs but apparently we can't make message board software that doesn't eat my long reply when I click something that I assume will show me more context of my original reply.
I don't have the commitment to retype everything (maybe a good thing)... Here's a summary:
I believe your position is "AI doesn't need wisdom - it needs to operate inside human control systems".
I agree with that as a way to take full advantage of the current SOTA.
My position is that "If we want to truly democratize software production then we need to figure out a way to let AI have wisdom (based on experience) and a fuller context of the specific problem at hand". By "fully democratize" I mean remove the need for a professional software engineer.
I think our two positions are fully complementary. An interesting question (which matches your "what do we value beyond speed?") is - do we actually want/need to replace professional software engineers?
Personally, I'm interested in creating a "wisdom based" and "expert system based" approach to improve the success rate of "shadow IT projects" AND with the buy in of the "real IT" department. But that's for another post...
Thanks for the reply. I'd love to keep this convo with you going. We're obviously of like mind about a lot of this - but with different nuance. And discussing those differences is always the best bit!
Peace, love & happiness to you and yours!
John, thanks for taking the time to reconstruct that. I agree with you that our positions are largely complementary, and I really appreciate how clearly you articulated the difference in emphasis.
I think where I'd draw a subtle distinction is around what "democratizing software production" actually means.
If democratization means lowering the barrier to expressing intent, prototyping solutions, and exploring ideas without needing deep technical fluency, I think AI is already doing that, and that's genuinely exciting. A lot of people can now participate in problem-solving conversations that were previously closed to them. Myself included.
This may be purely philosophical, but I think "wisdom" is like wine or chocolate. You can't speed up the process, and you can't distill it cleanly into weights or rules. Modeling accumulated patterns like "users will absolutely try to resize this window whether you let them or not", and having them surface in the right way at the right time, just isn't something I think we can reliably engineer.
And I say that as someone who's spent most of my career doing this the hard way. I'm excited about what AI unlocks - I probably am an AI fanboy - but a big part of that excitement comes from wanting to preserve human judgment, not erase it. It's taken years for me to arrive at this position, not minutes.
So I don't think AI needs to have wisdom in the human sense. I think it needs to be embedded in environments where human wisdom is still allowed to shape outcomes. That may mean fewer engineers writing boilerplate, but it still requires people who understand trade-offs, failure modes, and long-term consequences to define the guardrails.
In that sense, I see AI less as a replacement for engineers and more as a force that sharpens the distinction between expressing intent and owning systems. It democratizes the former while making the latter more important, not less.
That said, your point about shadow IT and expert-system approaches is a really interesting direction, especially if it creates better alignment rather than more fragmentation. I'd absolutely read that post.
Well, you're the second person to tell me that would make a good article. Umm, not sure I want to admit this... The other "person" was Claude...
I'm going to commit to myself to write that article over the holidays. Thank you for the encouragement!
I think that with my other musings I suffered from feature creep. I went from thinking "non-tech folks don't know what questions to ask and what things are well-intended anti-patterns" to "hey, let's capture wisdom in a bottle so that AI-written code can benefit from it" to "well, if we can do that, can't we do away with us gatekeepers?". Oops
I concede that you are right when you compare wisdom to wine or chocolate. There isn't and may never be a way to achieve the rules & weights for that. Furthermore, if you're right that AI doesn't need wisdom anyway then why bother even trying?
If I refocus on the smaller (yet still huge) goal of demystifying software development & enabling the "right sort of people" (domain experts) to develop the "right sort of apps" (neither over- nor under-engineered) in the "right sort of way" (well documented and playing nicely with IT's overarching responsibilities)... Then I do think one can help those non-app-developers develop worthwhile apps.
BTW - one AI-curious dinosaur to another - one type of article I love to hate is the ones where they say "I wrote a whole app in 2 minutes with just this two-line prompt". Ugh. Just ugh. Folks are always gonna create pointless benchmarks...
The experience you shared makes it clear that AI will produce nearly the same, or better, results depending on the source code architecture and the clarity of the prompt. I think we all experience the punishment of derailed logic and unwanted semantics or variables the moment we are vague with the words in our prompt, finding ourselves circling around fixing a small problem we created. Using AI to write code in the way you described is something only an experienced developer can handle. I find AI more useful for clearing up concepts rather than making it build the whole thing. I ask "how-to's" because of my lack of understanding of the topic, but the moment that's skipped in favour of immediate code generation, we pave our path to structurally failing code.
I just found this site, read this article and I love all of the replies. It's fascinating how people focus on different aspects of things. I wrote my first program in 1972, and have been a professional programmer since graduating with a BS in Math / Comp Sci in 79. Through 2000 I was mostly working on embedded systems and new product development. Then I transitioned to mainly Win desktop stuff that turned out to be mainly maintenance work. I much prefer new development work; the maint. stuff is painful and has become very unrewarding for me.
That said, I tend to look at things from a bird's-eye perspective, and nearly every job I've had is filled with people who are focused on microscopic details. Many of the commenters here reflect this. Not that there's anything wrong with it, but after 20 years of dealing with code filled with multiple layers of patches, tons of technical debt that nobody wants to address, and company policies that have become hostile to even basic refactoring to clean up code, all the fun is gone.
Companies do not want programmers who think -- they want bots that follow rules and don't make mistakes. The biggest problem is how long it takes for humans to build a mental map in our heads so we can fix code without breaking other things downstream. AI does this in seconds, and far better than most people can. The last three gigs I've had, I was told "Do not touch a line of code" for between 3 and 6 months. That's a horrible waste of time! And I don't find it the least bit fun.
To make matters worse, my reviews always tend to focus on my "behavioral issues", which I discovered a few years ago were due to being diagnosed with the Syndrome Formerly Known as Asperger's, and it explained a LOT of weird things going back to my childhood. This has been incredibly enlightening, and not something employers want to deal with. It's considered a "disability" under ADA laws, but it comes with a bunch of "gifts"; unfortunately, nobody cares about the gifts. So I get stuck doing boring maintenance work.
My skills are at the high-level, and maintenance work is like polishing pebbles rather than creating sleek new solutions. The people who get Architect roles are the ones who have been around the longest when the current Architect leaves. Most do not have the ability to do architecture work, and they don't like suggestions. So I decided it's time to retire and do some stuff on my own.
All that said, I've been working with AI a lot over the past year, and I have come to really love it. Claude is far better at writing code than ChatGPT IMHO, but both are working at an intern level.
The way my brain works, I get ideas and love to flesh them out with AI, so I've been brainstorming a lot of ideas this year. Interestingly, I keep running up against the limits of these AI models. People talk about "thinking outside the box" ... well I run into AI's "box walls" quite frequently, and it helps me understand their limits, so I'm getting better at my requests.
I've also learned that AI isn't able to synthesize anything; it seems to operate with a large deck of cards that it has been trained on, and it tosses them out as possible solutions. It can suggest ways of combining them sometimes, just don't expect it to magically see a pattern between them that's less than obvious. My mind is great at finding the mostly hidden threads that most people cannot see.
Unlike most devs, I'm tired of coding. I get an idea in my head and the coding process is like watching grass grow. I want to see something WORKING ... ASAP! I've seen the horrible code most companies are using that's making them billions of dollars annually, so why should I care about the quality of the code? It's a job for Sisyphus. AI has been trained on all kinds of code that's publicly available -- the majority of which may be public projects on github. Who has public (free) accounts on github? People who are learning to program! Which is why it's so common to see so many simplistic code blocks generated by AI. ChatGPT is especially guilty of this.
If you think about it, the vast majority of professional-level code is locked up inside of corporate vaults and has never been used to train public AI models. I'd say it won't be long before we start seeing line items in the P&L statements for companies like IBM, Google, eBay, Intuit, Amazon, and others, showing payments that AI platforms are paying them to train their models on REAL PROFESSIONAL CODE BASES! Until then, we're stuck with coding skills at the intern level ... and that's what is called "vibe coding". Yeah, right. Hey, as long as it runs, who cares? I sure don't!
HERE'S MY POINT
I recently had an experience where I discovered a common thread between three different topics I have been brainstorming with Claude and came up with something new. Then a few days later it transformed again. After some more brainstorming, I came up with a far simpler architecture that solved the problems each of the previous models addressed individually, but quite slimmed down. Claude saw it once I pointed it out, but it would never have found it on its own.
I worked with Claude to identify the main parts that needed to be coded for an MVP, asked it to write the code for me, and within 24 hours, they were WORKING! It would have taken me a week or more to implement this code, and Claude's solution was far more elegant and in some ways more detailed than I would have come up with myself.
What's significant about this is that most programmers tend to head straight to coding to see how things might work. I'd still be back at writing functional prototypes of the earlier ideas if I'd done that. Instead, I work better iterating on the architectures. Even AI has trouble with that because after every answer it gives me, it gives me several options for THINGS TO BUILD. I keep telling it, "I'm just brainstorming right now!" Programmers tend to want to start coding or get down-and-dirty with implementation details, so I get it. But I get more juice out of NOT coding. Then when it comes time to code, I want to be sure it's simple, elegant, and does exactly what's needed.
First of all, I feel a bit outclassed in the use of analogy; that was some of the clearest articulation of my own thoughts I've ever read.
I'm curious about something, though. For me, the pain of maintenance comes from a very specific place. When I can already see the right solution to a problem, being forced to reason through an abstraction I know is wrong becomes mind-numbing. I could spend weeks reshaping it into something clean and coherent, but the deadline says otherwise.
The code itself isn't hard anymore. It's actually in the way.
Curious if that resonates with you.
Is a "wrong abstraction" anything like finding a wrong analogy? :-)
I kind of get what you're saying, but I have a hard time reading a bunch of code and building a mental model of what it does in my head. I can look at bits of code and recognize what they're doing. But when there have been many different heads shaping the code, it loses its coherency and takes much longer. I've known people who can look at a large code base and in 30 minutes tell you where everything you might want to ask about is being handled. That has never been easy for me. The way my brain works, I want to unravel the spaghetti just to help make sense of it. Over time, that has gotten more and more difficult as the places I've worked have tightened the screws on prohibiting refactoring. One place implemented a policy that said every line of code we changed had to relate back to a specific change request; making other changes, like fixing those "wrong abstractions", was verboten.
It might be that what you're pointing at is a problem I've noticed over time, which is that most programmers are not good architects or designers. I've been asked about Design Patterns during job interviews and told the company thinks they're really important, but then nobody on my team understands them or how to apply them. I've also met a few people who have memorized every one they've ever seen, and use them when they describe code, but nobody they talk with can understand what they're talking about.
I've also seen a ton of OOP code where it's clear that the person who wrote it did not understand the basic principles of OOP: inheritance, encapsulation, and polymorphism. The latter isn't used very much because a lot of programmers simply don't get it; inheritance is often abused; and encapsulation is applied inappropriately. So their abstractions are typically not correct, and understanding their code requires you to hobble around the same way they do. Yes, it's mind-numbing. And exhausting.
Even with new code, someone else has already written the requirements and often laid down some expectations about how it will be implemented. So programmers are being given a recipe with some ingredients shown and some missing, little if any context, and nobody they can ask questions of. Sometimes that means you have to extend code written by someone who doesn't understand basic OOP principles, and there's not much you can do to deal with that.
I once left a job because my boss told me "SOLID principles add complexity without value."
I appreciate the thoughtfulness, but where the paradox starts and you talk about taking the easier path... Why not just have your AI refactor, or extract as you say, and do the code the right way?
I think the simplest way to put it is this:
AI or not, acceptable is the bar in employment.
At work, the constraints are external: existing patterns, review expectations, timelines, and the fact that I don't own the system. In that environment, "better" often costs more social and political capital than it's worth.
Elsewhere - in open source, writing, or personal projects - I set the bar higher because I can afford to. I own the system, the trade-offs, and the consequences.
This example was firmly in the first category. The AI didn't lower my standards; it executed the same judgment I would have made without it, just faster.
@junothreadborne - you never answered my question about what the client expects. Does he expect good enough or best possible? I'm just curious
In this particular case, unfortunately, the client would not know the difference between good enough and best possible given the constraints. And good enough is enforced.
Please define the right way?
Actually, excuse me, I asked the wrong question.
Please define who gets to choose the right way?
Don't take this as a glib response. But is it the expert practitioner that chooses? Or the client paying the bills? Or, if it's a collaboration then how does that work when there's a mismatch between expectations?
I know that's all soft and squishy and I like writing code that satisfies my sense of rightness. But, IMO, the difficult and more important questions are in that soft & squishy bit.
Really thoughtful take - I love how you frame AI as an amplifier of intent rather than a shortcut, especially the contrast between owning the system versus inheriting it. It resonates with me, and I think the real challenge (and opportunity) is designing systems strong enough that AI can extend our standards, not quietly erode them.
100% agree. And oddly enough, I don't think this problem is unique to AI at all.
When anyone new joins a codebase, it's the same underlying tension: "Do we let them introduce new, potentially better patterns, or is the cost of change too high right now?"
That trade-off is an eternally unsolvable math problem, but it's also an essential one to keep revisiting if we want our systems (and our standards) to improve rather than slowly calcify.
Wow, how delusional can people be? Copilot is complete and utter TRASH. It CANNOT perform even the most basic functions. It cannot correctly answer BASIC GD ARITHMETIC. When it tries to provide code for UiPath, it will often fail to render its answers completely. OpenAI is nothing but a hot steaming pile of scam Altman's excrement and it will soon be flushed.
OK, I'll bite... I'm going to assume you're more than just a troll.
I too agree with your assessment that AI is often wrong. However I take the time to understand why it is wrong and then I try and figure out if there's a better way to use it to get good results.
If I try and use a multi tool in place of separate quality tools then I shouldn't be surprised if I get frustrated...
If I believe the hype that the multi tool is all I need to MacGyver my way out of any situation then I'm naive.
I suggest you take the time to separate the signal from the noise. If you do that and you don't see enough value then just move on. And maybe check back later.
IMO when the tools hit their 4.x versions they began to get enough right for the right set of tasks...
Peace. Love. Happiness.
The other paradox, which some refer to as data pollution or a self-consuming loop: where do LLMs get all of their data? GitHub, Stack Exchange, etc. They are trained using 30 years of human input. What happens when 90% of today's code is AI generated and future LLMs are trained on vibe code that they (the AIs) wrote themselves?
If this continues there will be a model collapse. The same will happen with content.
Well said. Resonated with my own experiences planting a meticulous seed and watching my care replicate and scale. Appreciated the perspective on contracting tech debt too.
Documentation, linting, strong types, good abstraction boundaries, tests: all propagated with the same intentionality as that seed.
A co-worker of mine thinks that this post was written by AI because she counted 17 em-dashes in the text. Thoughts?
I was using em dashes before they were an AI tell.
But yes, I used several LLMs in the process of writing this. Some generated some text, some edited some, some just gave opinions.
Haha, I told her I LOVE em dashes. I get upset in situations where I have to substitute hyphens.
Interesting point of view! Thanks for sharing!
Spot on. Excellent perspective and truth. Thank you.