Who will be the senior engineers of 2035?


This month, we’re going to explore a question that feels especially pertinent right now: where will the senior engineers of the future come from?

After years of post-Covid layoffs, hiring has slowed across the board as companies wait to see how AI efficiency gains and the economy play out. Unfortunately, juniors are caught up in that slowdown: it’s a hard time to be graduating with a computer science degree.

Meanwhile, AI is absorbing the small changes and bug fixes that used to be perfect training tasks for junior engineers, and the managers who traditionally developed early-career talent are stretched thin or being cut entirely. Viewed purely through a profit-and-loss lens, there is some short-term rationale here. However, the long-term consequences are worrying.

We’ll start by looking at the pipeline we used to have: how senior engineers traditionally emerged through years of mistakes, mentorship, and low-stakes learning. Then we’ll examine what’s replacing it, and ask whether AI will actually fill the gap. We’ll explore three possible scenarios for 2035, and finish with what this means depending on where you sit in the industry.

If you’d like to dig deeper, here are some related articles from the archive:

  • Coaching explores how managers develop people at different experience levels, and why the approach needs to change as someone grows.
  • Delegation creates career progression looks at how handing over tasks is an act of trust and an opportunity to learn.
  • Use it or lose it covers the risk of skill atrophy in the AI era, and why deliberate practice still matters.

Let’s explore.

The programmer’s path

For decades, the path was well-worn: you got hired as a junior, paired with someone more experienced, and were given tasks that existed as much for your development as for the work itself.

As part of your ramp-up into real-world programming, you’d follow this loop: write some code (often pairing with others), submit a pull request, get feedback that made you rethink your approach, fix it, and learn something. Over time, through repetition and correction, you built judgment: one of the key skills that separates senior engineers from those with less experience.

Learning with other people was not the only path. Some of the most influential figures in computing taught themselves to code before they were adults, tinkering with whatever hardware they could access; learning through curiosity and obsession rather than formal training. I was no different (although I do not claim to be influential): I learned to code on our family computer, building websites with HTML and simple tools in Visual Basic.

Yet, although they may have begun alone, many autodidacts eventually joined or created teams and companies, and there they learned a different set of lessons: how to collaborate, how to navigate trade-offs, how to build things that outgrow any one person’s contribution. These are lessons that are very hard to learn in formal education or on your own.

However, what both paths shared was this: you had to do the work yourself and with others, and the doing was the point.

What’s replacing it

The traditional pipeline is breaking down in several places at once. Hiring freezes, driven by post-pandemic correction and uncertainty about AI’s impact on headcount, mean fewer junior roles: entry-level tech postings have dropped 67% since 2022, and a Stanford study found that employment for software developers aged 22-25 has declined nearly 20% from its late 2022 peak.

A Harvard study found the effect is even sharper in firms actively adopting AI: junior employment fell 7.7% relative to non-adopters within six quarters. The managers who used to mentor early-career engineers are being cut or stretched across larger teams. And the tasks that once served as training ground, such as the small bug fixes and incremental features, are increasingly being handed to AI tools instead.

For leadership purely concerned with cost, the reasoning is compelling: AI will keep improving, so we’ll need fewer humans, so why invest in growing them? A LeadDev survey found that 54% of engineering leaders plan to hire fewer juniors, reasoning that AI enables seniors to handle more.

On a balance sheet, the short-term economics make sense. But there’s a critical question that needs to be addressed: what happens to the skills that juniors used to develop along the way, if AI is absorbing those tasks and there are fewer juniors left to do them in the first place?

The answer matters because much of the judgment of an experienced craftsperson comes through being in the details, making mistakes, and learning from them.

Consider judgment under ambiguity, organisational navigation, the instinct for when a shortcut will come back to bite you and when it’s the right call; these aren’t skills you acquire by reading documentation or even prompting AI for answers, but through years of making mistakes in environments where the stakes were low enough to fail safely.

There’s a term that comes to mind for how one progresses from junior to senior: scar tissue. The scars come from shipping something that broke in production and staying up to fix it, from proposing an architecture that didn’t scale and having to rebuild it, from navigating a difficult stakeholder relationship and learning, the hard way, what actually works.

AI can answer questions in the same way that revising for an exam can help you memorise an answer, but it can’t give you the scars that you can then apply to new problems in the future.

So what happens if we continue down this path? There are a number of possibilities that could all prove true to some degree. Analysing them, and where they could lead, is one of the ways we can steer towards a future that makes sense for the industry.

Three possible futures

The talent crunch. We’ve seen a version of this before. During the pandemic hiring boom of 2021-2022, tech job postings more than doubled and salaries hit record highs. Top candidates fielded multiple offers; poaching was rampant. Companies that weren’t seen as desirable places to work found themselves outbid by those that were.

Now imagine that dynamic, but worse. Senior engineers don’t appear from nowhere: they’re the juniors you hired five or ten years ago who learned, failed, recovered, and grew.

Cut the pipeline today, and the shortage doesn’t show up immediately. Instead it shows up in 2035, when the industry finds itself desperate for engineers with the scar tissue that comes from years of real-world experience. When the critical system goes down at 3am, there’s no one left who knows how it works. The parallels to other industries that neglected their pipelines, such as nursing and skilled trades, become impossible to ignore.

The bifurcation. Instead of a shortage, a split emerges. On one side: vibe coders who move fast, shipping features by orchestrating AI tools, comfortable with velocity but shallow on fundamentals. On the other: engineers who understand how things actually work, but who are increasingly rare and expensive.

The middle disappears. A HackerRank advisory board described this as a “hollowed-out career ladder”: seniors at the top, AI handling the grunt work, and very few people learning in between. The traditional path from junior to mid-level to senior breaks down because the rungs in the middle are gone.

This pattern shows up elsewhere. Economists have documented job polarisation for decades: automation eliminates middle-skill routine work while high-skill and low-skill jobs grow. The same dynamic appears in wealth distribution, where the middle class has steadily shrunk across developed economies. In retail, analysts call it the barbell economy: luxury and discount thrive while the mid-market hollows out. Software engineering may be next.

The tinkerer’s precedent. There’s a more optimistic reading, grounded in computing history. Formal computer science education didn’t exist until 1962, when Purdue established the first department. The first undergraduate degree followed in 1967. Yet computing advanced for decades before that, driven by the aforementioned autodidacts who taught themselves from manuals, blueprints, and tinkering: messing around for curiosity’s sake.

Each new abstraction layer was supposed to be the end of entry-level programming. Before software existed at all, “computers” were humans doing calculations by hand. Then came machine code, then assembly, then BASIC: created at Dartmouth for liberal arts students, it came pre-installed on home computers and enabled a generation of bedroom coders.

The browser brought JavaScript, which democratised web development. Cloud computing abstracted away infrastructure. Each time, instead of eliminating entry points, the new layer created different ones. And with each step up, we wouldn’t wish to go back: would you want to write everything in assembly today? I thought not.

If AI follows the same pattern, we could see a world where everyone becomes part product manager, part engineer, part designer. Building software gets faster, and the bottleneck shifts elsewhere: to sales, to go-to-market, to the hard work of finding customers and convincing them to pay. The constraint moves, but it doesn’t disappear.

There’s an alternative version of this future: the number of software engineers a company needs seriously dwindles, teams of fifty become teams of five, and average company size shrinks. The engineers who thrive are the entrepreneurial ones, those who can ship whole products rather than just features. Everyone else competes for fewer and fewer seats.

What makes this scenario different from the first two is that it still offers a path. Each earlier abstraction layer required you to understand programming logic; you were just expressing it at a higher level. AI potentially abstracts away programming itself: describe what you want and let the model figure out how. That could mean the ultimate democratisation, or the elimination of the ladder entirely. But if history is any guide, new entry points will emerge, even if we can’t yet see what they look like.

What this means for you

Which scenario plays out, and in what combination, depends partly on choices being made right now: not just by industry leaders, but by individuals at every level, including yourself.

We still have years ahead in which these effects will compound, and that means years in which deliberate choices can steer us toward a better future than the one we might waltz into without thinking. Here’s what you can do depending on where you sit.

If you’re a senior engineer, your expertise is becoming scarcer, not more common. This is an extremely strong position to be in, especially if you’re investing heavily in AI-first engineering skills. The scar tissue you’ve accumulated, the instinct for where the edge cases hide, the ability to debug systems you’ve never seen before: these take years to develop and can’t be downloaded. Three things you can do:

  • Mentor actively. Find a junior and invest in their growth, even if nobody’s asking you to. This is actually more fun than it used to be: you’re exploring AI tooling together, learning alongside each other rather than just handing down wisdom.
  • Make your knowledge visible. Use AI to generate excellent documentation and diagrams, and use it to level up the knowledge you transfer to the rest of the organisation. The tribal knowledge in your head is worth far more when it’s shared, and the barrier to doing so has never been lower.
  • Invest in engineering-led initiatives. The increased velocity you get through AI has to go somewhere. Instead of just shipping more features, use that time to work on the problems that often get deprioritised: performance, latency, resilience, and the kind of deep technical work that builds lasting competitive advantage.

If you’re a manager, the juniors you develop aren’t a cost centre; they’re a strategic bet. Every engineer you grow into a capable mid-level or senior is one you won’t have to poach from a competitor when the talent crunch bites. Remember the 2021-2022 hiring war? It could look mild by comparison. Three things you can do:

  • Make the case for junior hiring. Frame it as risk mitigation, not charity. Show leadership the cost of senior attrition, the salary inflation in the market, and what happens when institutional knowledge walks out the door. Juniors are cheaper to hire and, with good mentorship, can become your most loyal senior engineers.
  • Rethink what training looks like. The tasks that used to be training ground are being absorbed by AI. So create new ones: pair juniors with seniors on complex problems, give them ownership of small but real projects, let them lead incident retrospectives. OpenAI is experimenting with a “super senior + super junior” model for exactly this reason. The goal is scar tissue, not busywork.
  • Invest in your own internal tooling. AI has made building custom tools almost as easy as building a spreadsheet. Instead of waiting for engineering to prioritise your visibility needs or settling for off-the-shelf software that doesn’t quite fit, build the tools yourself. Whether it’s a planning dashboard, a bragdoc generator, or a way to track knowledge distribution across the team, the friction between “I wish I had a tool for this” and actually having it has never been lower. I covered this in detail in Just build the tools yourself.

If you’re early in your career, the path is harder than it used to be, but it’s not closed. The key is to seek out the experiences that build judgment, even when the system isn’t handing them to you. Three things you can do:

  • Seek scar tissue deliberately. Volunteer for on-call rotations. Take the messy migration project nobody else wants. When something breaks, be the one who stays to understand why. None of this is new: the tooling changes and the level of abstraction shifts, but the engineers who learn the most have always been the ones actively seeking scar tissue.
  • Don’t outsource your understanding. AI tools are incredible accelerators, but they can also become a crutch. When the model gives you an answer, take the time to understand why it works. Read the documentation. Trace the code path. The goal isn’t to reject AI; it’s to use it without losing the ability to think for yourself. I wrote about this in Use it or lose it.
  • Connect your work to the business. The engineers who get trusted with bigger problems are the ones who understand why those problems matter. Learn what your company’s metrics are, how your team contributes to them, and what keeps your leadership up at night. Then be proactive about getting closer to that work. Ask to be included in architecture discussions, request to shadow incident response, and when you get feedback, ask “why does this matter?” not just “what should I change?” Technical skill gets you in the door; business context gets you to the table.

If you’re a senior leader planning to spend more on AI than on people, you’re making a bet, whether you’ve articulated it or not. The bet is that AI will improve faster than your talent base depreciates: that you can cut junior hiring today and still have the senior engineers you need when it matters. That may be true. But if it isn’t, you won’t know until it’s too late to fix. Three things you can do:

  • Stress-test your assumptions. What happens if AI progress plateaus for a few years? What happens if your most experienced engineers leave? What does your team look like in 2035 if you hire no juniors between now and then? Run the scenarios. The answers might surprise you. Why not use AI to do the modelling?
  • Treat junior hiring as R&D, not overhead. The return on investment isn’t immediate, but it’s real. Every junior you develop into a senior is institutional knowledge you don’t lose when someone resigns. And some of the best junior talent right now, especially those who are AI-first, are incredible at their jobs, hungry to learn, and waiting for the right opportunity to come along. Frame junior hiring as investment, not cost, in your planning.
  • Measure your knowledge concentration. How many people on your team can debug your most critical systems? What’s your bus factor on key services? If the answer is “one or two,” you have a fragile organisation, regardless of how productive AI makes those individuals. Track knowledge distribution the way you track uptime; a minimal sketch follows this list.
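To make that last point concrete, here’s a minimal sketch that estimates a bus factor per service from git history. Treating recent commit authorship as a proxy for knowledge is an assumption, as are the service paths and the 80% coverage threshold; treat it as a starting point, not a definitive metric.

```python
# A minimal sketch, assuming a local git checkout. Commit authorship is a
# rough proxy for knowledge; the paths and the 80% coverage threshold
# are illustrative assumptions.
import subprocess
from collections import Counter

def commit_authors(path: str, since: str = "2.years") -> Counter:
    """Count commit authors who touched `path` recently."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(name for name in log.splitlines() if name)

def bus_factor(path: str, coverage: float = 0.8) -> int:
    """Smallest number of people accounting for `coverage` of recent commits."""
    authors = commit_authors(path)
    total = sum(authors.values())
    if total == 0:
        return 0  # no recent commits: the knowledge may already have left
    running, people = 0, 0
    for _, count in authors.most_common():
        running += count
        people += 1
        if running / total >= coverage:
            break
    return people

# Hypothetical critical services; substitute your own paths.
for service in ("services/payments", "services/auth"):
    print(f"{service}: bus factor {bus_factor(service)}")
```

Even a rough number like this, tracked over time, tells you whether knowledge is spreading across the team or collapsing onto a handful of people.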

Wrapping up

Who will be the senior engineers of 2035? We don’t know yet. We’re running an experiment on the industry’s talent pipeline, and the results won’t be in for years.

But here’s what we do know: the outcome isn’t predetermined. The senior engineers of 2035 are being made right now, in the decisions about who gets hired, who gets mentored, and who gets the chance to fail safely.

Until next time.

Slow down to speed up


This month, we’re going to explore what might be the most counterintuitive practice in the age of AI: knowing when to slow down.

Hang on, slow down? Yes, bear with me here.

Let’s bring to a head a debate that’s probably been running in your team for some time. Some of your colleagues say that with AI we should be building and shipping even faster, prototyping in hours, and that perhaps we don’t even need to write code at all any more: we should just let the models go full auto. They’re just that good now.

Others worry that this speed is creating quality problems, that we’re accumulating technical debt faster than we can pay it down, and that codebases are becoming patchworks of AI slop that nobody fully understands.

So, who is actually right?

To an extent, both are right, but I believe they’re talking past each other. The question isn’t whether to use AI for speed. It’s when.

We’ll start by looking at this debate through the lens of Daniel Kahneman’s System 1 and System 2 thinking, and why AI has made the slow phases of work more important, not less. Then we’ll examine the illusion of speed: the economics of rework, and why going fast in the wrong phase means going slow overall.

We’ll explore when deliberate slowness pays off, including using AI itself for the slow work, and how fast prototyping is actually a form of slowing down. And finally, we’ll grapple with a question that’s getting harder to answer: why are you taking so long?

This article builds on themes from recent months. If you’d like to dig deeper, here are some related reads from the archive:

  • One bottleneck at a time introduces the idea of subordination: telling the fast parts of a system to slow down so the constraint can catch up.
  • Use it or lose it covers the thinking first protocol, a practical approach to slowing down before offloading work to AI, ensuring that critical skills aren’t lost.
  • Invert, always invert explores pre-mortems and backward thinking, both examples of deliberate slowness in action.

Let’s get going.

Two speeds of thought

There’s a useful way to frame the debate that we opened with. In his oft-cited Thinking, Fast and Slow, Daniel Kahneman describes two modes of thinking: one that’s fast, automatic, and pattern-matching, and another that’s slow, deliberate, and analytical.

Transpose this onto LLMs: in his conversation with Dwarkesh Patel, Andrej Karpathy describes them as ghosts or spirits, a kind of statistical distillation of human text; ethereal entities that are fully digital and mimic humans. Words go in, patterns get matched, and words come out, which is, if you think about it, essentially System 1 thinking.

AI is extraordinarily good at this kind of work: fast pattern-matching at scale. But the second kind of thinking, the work of deciding what to build, why it matters, and whether we’re solving the right problem, still requires human judgment.

And here’s the counterintuitive part: AI didn’t make the slow phases less important, it made them more important. When execution is cheap and fast, the leverage shifts to the decisions that precede it.

A wrong requirement, a misunderstood problem, a flawed design assumption: these propagate through everything AI helps you build, only now they propagate faster. The cost of getting System 2 wrong goes up precisely because System 1 has become so powerful.

If we want to go fast, we need to slow down first.

The illusion of speed

Back when I was doing my PhD, there was a common saying in academic circles: something like a few weeks in the lab will save you hours in the library. Software development has its own version: weeks of coding can save you hours of planning.

The reason it’s a joke, of course, is that both sayings invert the truth: hours of planning save weeks of rework. We all know the pattern: the rush to start, the mounting realisation that something fundamental is wrong, and the painful rework that follows. I’ve certainly worked on projects where I wish I’d stopped and thought a little more before rushing in; I can still feel the cold flush you get when you stare at weeks of work that are completely wrong.

We have a clear intuition in software engineering that we should catch mistakes early, ideally in requirements or design, because the further a project moves on, the more expensive they are to fix. You don’t need research to see why: a box diagram is easy to change, a misunderstood requirement less so, and a fundamentally flawed deployed architecture is a rewrite.

Therefore, here’s the problem: AI can help you create technical debt faster than ever! Oh no!

If the decisions that precede execution are flawed, AI will faithfully implement those flaws in a way that looks like fully featured code. Looks can often be deceiving, especially with powerful and confident models. It will generate thousands of lines of code based on a misunderstood requirement. It will happily build an elegant solution to the wrong problem.

The illusion of speed is that you’re making progress when you’re actually digging yourself into a deeper hole.

The answer isn’t to abandon speed, but to deploy it deliberately. We should only unleash AI’s pace when we’re confident it’s pointed in the right direction. Which raises the question: how do we know when that is?

When slowness pays off

The places where deliberate slowness pays off haven’t changed much, even as everything around them has accelerated. Requirements are still cheap to change when they’re just words on a page, and expensive when they’re deployed code serving real users. Design decisions are still easier to revise in a diagram than in a production system. AI didn’t alter this fundamental physics; it just increased the leverage of getting it right.

In a previous article, I called this the thinking first protocol: before offloading work to AI, spend time clarifying what you actually want. This isn’t unnecessary process; it’s the cheapest possible place to catch mistakes.

Here’s an interesting paradox that shows just how useful AI can be: the same tool that accelerates execution can also accelerate deliberation. Here are some practical ways to do this:

  • Clarify requirements before coding. Spend 10 minutes writing down the problem you’re solving, your success criteria, and your constraints before asking AI to generate anything. What does “done” look like? What’s out of scope? Then get AI to interrogate everything that you’ve written before generating.
  • Run a pre-mortem. Ask AI “What could go wrong with this approach?” before committing to a design. It will surface risks you hadn’t considered (a minimal sketch follows this list).
  • Invert the problem. Ask AI “What would make this project fail?” to expose hidden assumptions. I’ve written more about this technique in Invert, always invert.
  • Build a throwaway prototype. Use AI to create something in hours, show it to stakeholders, and validate your understanding before investing weeks. This is speed in service of slowness: you’re investing time upfront to learn.
  • Build scrappy internal tools. Before you spend money on real products, use AI to build your own rough versions first. You’ll learn what you actually need and what you don’t. If you’re a paid subscriber, last month’s article goes deeper into some of the tools I’ve built myself.
  • Surface edge cases early. Ask AI to generate edge cases and failure modes for your design before implementation begins. It’s far cheaper to handle them in a diagram than in production.
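To make the pre-mortem above concrete, here’s a minimal sketch using the openai Python client. The model name, the design summary, and the prompt wording are all assumptions to adapt to your own stack; the shape of the question matters more than the specific tool.

```python
# A minimal pre-mortem sketch using the openai client (pip install openai).
# The model name, design summary, and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

design = """Migrate user sessions from in-memory state to a shared
Redis cluster, rolling out service by service over one quarter."""

prompt = (
    "Imagine it is six months from now and this project has failed badly.\n"
    "Working backwards from that failure, list the most plausible causes,\n"
    "and for each cause, the assumption in the design that it exposes.\n\n"
    f"Design:\n{design}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whatever you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same scaffold works for the inversion and edge-case prompts above: change the framing of the question, keep the habit of asking it before you build.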

Of course, slowing down is easier said than done. Even if you’re convinced it’s the right approach, you’ll likely face resistance from those who see AI as a reason to speed up, not slow down.

The new cultural headwind

Given that AI is speeding things up so much, if you haven’t already been challenged on why something’s taking so long, you certainly will be soon.

“Can’t you just use AI?” is a new form of velocity pressure, and it’s particularly insidious because it conflates the appearance of productivity with actual throughput. Yes, AI can generate code in seconds. But generating code and solving the right problem are not the same thing.

So, what do you do?

  • Be explicit about which phase you’re in. If you’re in the slow phase, say so: explain that you’re clarifying requirements, thinking through edge cases, and making sure you’re solving the right problem.
  • Invite stakeholders to contribute. Their input is cheap to incorporate now and expensive later. Once you’re confident you’re pointed in the right direction, you can move fast.
  • Show your working. Share artefacts from the slow phase: requirements docs, design sketches, pre-mortem outputs. This makes the invisible work visible and builds confidence that you’re progressing, not stalling.
  • Timebox the slow phase. Give the slow phase a clear boundary: “We’ll spend two days clarifying requirements before we write any code.” This makes deliberate slowness feel intentional rather than open-ended.
  • Share what you’re learning. Send brief updates as you discover things: edge cases you hadn’t considered, assumptions that turned out to be wrong. This turns the slow phase into a visible stream of value.
  • Demonstrate quick wins. Build a throwaway prototype or mockup early to show stakeholders you can move fast when needed, buying you credibility for the slower, more deliberate work.

Interestingly, this maps nicely to the hill chart concept from Basecamp’s Shape Up methodology: the uphill climb is the slow phase of figuring things out, where uncertainty is high and you’re discovering what the work actually is; the downhill is the fast phase of execution, where the path is clear and you’re just getting it done.

This isn’t an excuse for delays; it’s a description of how good work actually gets done. The teams that ship fastest over the long term are often the ones that slow down at the right moments.

Your turn

This doesn’t have to wait for your next big project. You can apply this to every AI-assisted task you do. Before your next one, try this:

  • Spend 10 minutes writing down what problem you’re actually solving. What does success look like? What’s out of scope?
  • Before you start building, ask AI to run a pre-mortem on your approach. You might be surprised what it surfaces.
  • If the task is significant, consider building a throwaway prototype first, one you’re willing to delete, just to validate that you’re headed in the right direction.

Wrapping up

Speed and slowness aren’t opposites; they’re tools for different phases. AI is effective for both: fast execution when the direction is clear, and accelerated deliberation when it isn’t. The skill is knowing which phase you’re in and applying the right tempo.

As always, until next time.