22 Comments
Valentin Briukhanov:

> This is like firing your entire fire department because you installed more smoke detectors.

I think this one should be reason number 1. Totally true, and we saw such cases even before AI.

Francesco Gadaleta <frag>:

Likewise.

I have personally experienced basically everything I wrote about in real life :D

Valentin Briukhanov:

Ha ha! That's true. I think this phrase resonated with me because I saw a similar case when I worked as a telecommunications engineer: the bosses decided to save on air conditioning and backups because "it has been working nicely for a month, what can go wrong?"

Francesco Gadaleta <frag>:

Ahah, we should start a blog of funny stories like those. Truth has never been so entertaining.

Andy the Hibee fae Tuscany:

Truth is not entertaining in itself.

Idiots trying hopelessly to dodge it are.

General Rommel - aka "Desert Fox" - used to say: "No plan survives contact with the enemy".

Fools are those men whose enemy is the Truth.

addie:

Ranting about the capabilities of LLMs today is like ranting about the capabilities of the internet in 1995. If investment and progress continue at this pace, these tools will be unrecognizable in a few years. Young people will be asking themselves how we ever built anything without software engineering agents. "You mean you actually had to program each instruction into the machine?! Crazy!"

Vincent Botta:

You're right, technology will continue to progress. But we need to keep in mind that these LLMs are trained on content humans generated. If no human generates content anymore, LLMs cannot progress anymore. It has been shown several times that training generative AI on AI-generated content is counterproductive.

Gergely Gombos:

For this "next level" (hasty induction) fallacy that's very prevalent with AI, I tend to call out visions of flying cars in the 1960s. Oh, and Mars landings and such.

The author at least rants about the present, while you try to argue from an imagined, subjective conclusion...?

Chase:

I think this is actually a good comparison. The early internet was inventive, vibrant, fresh, and sure, messy and ugly, but it had a genuine sense of culture. The current internet is really just a handful of bloated, corporatized websites, soul-sucking social media, and of course, shopping. But hey, at least it looks pretty, right? Sorta. Actually, not really; it's an ad-infested mess, and almost every site feels like a copy/paste of another.

At least the early internet had personality and vibrancy. Of course it had its share of issues, but is the newer internet actually better? I'm not so sure; to me it feels vapid. Yeah, tooling has gotten a lot better, so copy/pasting a website design is easier than ever, but I think the overall experience has long been circling the drain.

Greg Fish:

No, ranting about the capabilities of LLMs today is like ranting about the capabilities of the internet in 1995, in the year 1995, while learning that you need brand-new math for two-thirds of the promised new features to work and to fix the problems it has now.

The difference between RNN and LSTM models and ChatGPT comes down to two formulas (transformers and self-attention) and brute computing power. Unless you can find new ways of organizing and analyzing training sets, at some point those ka-bajillion-input models will slam into diminishing returns like a dinosaur smashes into an asteroid.
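
For readers curious what "two formulas" means here: the core of it is the transformer's scaled dot-product attention, which really does fit in a few lines. A toy numpy sketch with made-up shapes (3 tokens, 4-dimensional embeddings), not any production model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays of query/key/value vectors
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # row-wise softmax, numerically stabilized by subtracting the row max
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V  # each output is a weighted mix of the value vectors

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))  # 3 toy "tokens"
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (3, 4)
```

The second ingredient, the transformer itself, is essentially this operation stacked and repeated at scale; the commenter's point stands that everything beyond it is data and compute.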

Unless you know for a fact that this pace can be sustained, that current performance will be significantly improved, and that existing, constant mistakes can be easily fixed -- which they cannot, as they are a side effect of how tokenizers work -- then you should probably not rely on an imaginary AGI Super Duper Model that currently exists as hopium-powered vaporware in VC boardrooms.

And no one has programmed every single little instruction into a machine for many decades now. We have templates, code generators, auto-complete, content-based search, open-source reusable libraries... But all of them rely on competent humans who know how to harness their power and use the right tools the right way, which is the whole point of this post.

In my day job, I have 99 problems, and AI solves 58 of them while creating another 10 with what passes for its unit test cases and cloud deploy .tf script setup.

Martin Salinas:

Hear me out: why don't we just replace *users* with AI?

Billion dollar idea, thank me later

Joe Justesen:

Firing your junior programmers is eating your seed corn. I don't think management at these companies understands what a programmer/software engineer actually does for them.

Chip Overclock:

Current AI models may plausibly replace the most inexperienced entry-level software developers. Doing so will eliminate an entire stage of that talent pipeline. They cannot and will not replace more experienced developers - this is just wishful thinking. As the more experienced developers age out, there will be no one to take their place. As a side effect, this eliminates all of the developers who are creating the training data used - often in violation of copyright - to train the AI models. This is a kind of feedback loop that will destroy both the human talent pool and the AI capability. Also: I'm raising my hourly rate.

Mirco:

Hey, ChatGPT, if all human programmers suddenly stop producing code, how will that affect the quality of the code you are able to write?

ChatGPT: "If human programmers stopped coding, my code quality would stagnate and eventually decline. Without new best practices, security fixes, and evolving frameworks, my outputs would become outdated, less secure, and inefficient over time. I rely on human innovation, testing, and real-world feedback—without it, software development would slow down and decay. I need human programmers to stay sharp!"

Santiago Morales:

How do I, a junior developer, get *into* the industry? I am only seeing positions open for seniors.

BreatheVibes:

GitHub pull requests. Keep fixing issues there. It will build you and your profile at an exponential rate. In better times, Stack Overflow was the best option to learn and get noticed, but lately not so much.

Andy the Hibee fae Tuscany:

There is something much worse behind this, and it involves more than just tech.

We are teaching AI to behave in a more human way while humanity, thanks to a toxic and dystopian vision of technology, is dehumanizing itself.

Does anyone hope that a more human AI will someday teach humans to be more human, as they were when they were still human?

Eric Moakley:

I share a lot of your concerns here - specifically, how do you build skills in an AI world (programmer or otherwise) when we offload so much to AI? In my engagements I ran into the first company that is no longer hiring junior or mid-level developers; they are using AI to augment their existing staff and will hire only "senior" devs.

Look, as a Product Manager, I get it: every perfect date I have ever hand-crafted from wishes and magic story points has been trod upon by some developer with facts and objective reality; robots are more pliable. However, AI has completely unlocked early prototyping for me. But any real app or architecture needs expertise.

I think we are going to need a reorienting of value to understand what's happening. Specifically: get developers out of the "other stuff" they do (helping customer support diagnose issues, random data questions, that Salesforce connector that doesn't work, documentation, etc.) and make sure that expertise is focused on the hard problems of scaled architecture, deeper tech, and business logic.

These first companies may end up like you say, but I see them as canaries. The job market is pro-employer for the first time since the dot-com crash. Like it or not, they have the room to experiment.

Andy the Hibee fae Tuscany:

I live near Florence, and every stone here speaks of the Renaissance. The best invention of the Renaissance was its philosophy of putting mankind and its needs at the center of the world.

We're living in a world that is totally dismantling this.

And we call ourselves "civilized".

Joel M De Gan:

I understand the sentiment here, but you would have to assume that AI stays static and that recursive-learning AGIs are not working on it and checking for everything.

Gergely Gombos:

Why? Can we assume that agentic LLMs can become superhuman AGIs, or that we'll invent AGI in the next couple of years?

Joel M De Gan:

Some people assume we cannot. For me, I look at graphs of where things were and their progress to where we are now, and it seems apparent that unless countries start nuking each other's data centers, we are on the path there—likely toward the end of this year for something more generalized.

Technically, we have already had an "AGI moment": o3 passed the famed ARC prize with 87.5%, 2.5 points to spare over the 85% threshold for announcing AGI—but failed on a technicality, the cost per question.
