Use it or lose it

Growth

This month, we’re going to continue exploring the mental models that Charlie Munger covered in Poor Charlie’s Almanack, a compendium of 11 talks given throughout his life. The book culminates in one final, bumper-sized talk that is a collection of over 20 mental models that he attributed to his success.

Last month’s article covered one of these models: “invert, always invert”, a method for thinking about problems by flipping them on their head: you consider the worst that could happen, then make sure it never does. It turns out this doesn’t just work for investing; it also works well for the kinds of problems we face in software engineering: designing resilient systems, planning rollouts, and ensuring that you have a backup plan in case something goes wrong.

This month, we’re going to look at another one of these mental models: “use it or lose it.” This mental model is becoming increasingly relevant as AI becomes more ubiquitous in our lives.

Here’s what we’re going to cover in the article:

  • We’ll start with an introduction to the use-it-or-lose-it model, looking at real examples ranging from playing an instrument to doing exercise.
  • Then we’ll cover how “use it or lose it” has affected managers for a long time, and how AI may be potentially compounding that effect.
  • We’ll then consider whether there are ways we can use AI more intentionally to potentially preserve our critical thinking skills.
  • And then, to finish, we’ll see how AI may actually be able to pave the way to us becoming broader generalists as managers, in a manner that supports our desire to be in the details and thrive in today’s flatter organizations.

So, as we begin to approach the end of 2025, let’s think about ways in which we can stay sharp in 2026 and beyond.

Practice maintains perfection

The tenet of use it or lose it is best summarized by the quote from the Polish pianist Ignacy Jan Paderewski: “If I miss one day’s practice, I notice it. If I miss two days, the critics notice it. If I miss three days, the audience notices it.”

In his 11th essay, Munger reflects, in his 80s, on what he remembers from school: effectively only that which he has used daily since then. In the world of investing, Charlie frequently applied the basics of mathematics to real balance sheets, but all of the advanced calculus that he was once proficient in had long been forgotten.

A lack of daily practice leading to skill decay is a recurring theme for managers. One of the most common worries for engineers moving into a managerial role is losing the hands-on coding skills that got them there in the first place.

As such, diligent managers invent ways to stay close to the details, such as pair programming, continuing to review code, or carving out time to fix bugs or contribute to smaller features. However, as many of you know, this can be a challenge to maintain long term. Daily busywork is like gas: it expands to fill the available space. As such, managers who interview for new roles often have to go through an intense period of study to prepare for coding challenges.

However, although programming skills may diminish through underuse, managers do gain more opportunity to practice the other elements of their craft on a day to day basis, such as planning and strategy, coaching, and leading.

Accountability for one or more teams requires a continual sharpening of these skills, so the pre-AI manager could comfortably let coding go as it was replaced by the honing of other skills.

But, as more of us offload our strategic thinking and planning to AI (and I too have written plenty of material on methods to do this), there is a risk of these core cognitive managerial skills diminishing too, and as such, we need to be careful. We mustn’t become passengers in our own orgs.

The science of skill decay

We see use it or lose it in action after finishing formal education, and we also see it in exercise: cardiovascular fitness drops off rapidly if we don’t work out regularly, and muscles require regular training to maintain strength.

The same is true for our cognitive skills. A 2011 study published in Science on “the Google effect” showed that our brains optimise for remembering where to find information rather than the information itself. Similarly, a 2021 study found that increased offloading behaviour correlates with decreased memory accuracy. Relatedly, a 2020 study in Nature contrasted how heavy GPS users show a decline in hippocampal spatial memory, whilst London taxi drivers, i.e. those expected to have “the knowledge”, show growth in that same brain region.

Although it is too early for long-term studies on AI usage, a 2025 MIT Media Lab study showed that although students using ChatGPT were more efficient than those who didn’t, that same group showed weaker neural connectivity and cognitive depth.

Interestingly, on the Dwarkesh Podcast, Andrej Karpathy explained that he programmed nanochat almost all by hand, with only some minimal tab-autocomplete as assistance. Although much of what he was building was “off the distribution curve” of what LLMs are trained on (since LLMs can be seen as a compression of the internet, and nanochat is novel work), he also stated that programming manually was the only way to truly learn, and that others wanting to learn from what he wrote should also follow the same hand-coding process.

Collating the themes above, as managers and leaders we should be mindful of how we lean on AI as a thinking partner. Although we may be hardwired to offload work and strive for efficiency, we need to be careful not to diminish the core critical thinking skills that are at the heart of our craft. Losing our coding chops was bad enough; we don’t want to lose these skills too.

Inversion: designing an increasingly ineffective leader

Since we looked at the “invert, always invert” maxim in the last article, perhaps we can call upon it here to design how to become an ineffective leader in the current climate.

So, considering the worst that could happen, let’s think about how you could guarantee becoming obsolete as a leader within 12 months:

  • Rule number one: Stop being close to the details of what you’re building; specifically, the code and the architecture that is being designed within your team. Fully delegate all of the decisions to your team, so that you rely on what people say to you in order to understand what’s going on. You pay them to do the work, so let them get on with it.
  • Rule number two: Do not use any of your time to stay on top of the latest AI developments. Instead, let your team tell you what the latest and greatest is, and don’t spend any time yourself trying out new models or tools. Why would you need to if you’re a manager?
  • Rule number three: Become a “reviewer” rather than a participant in all of the activity in your team. Let your team get on with things entirely autonomously, and simply review what is happening at meetings and in performance reviews.
  • Rule number four: Do not regularly engage in any first-principles or critical thinking, either with humans or AI. Either let your team tell you what needs to be done or fully offload the decisions you have to make to AI. After all, you have others to do the work for you.

The insight here, looking at this through the lens of inversion, is that if you are doing all of these things, you are actively choosing to make your own impact obsolete. You are becoming a passenger, and we know that in a time of increased efficiency and flattened orgs, managers who are passengers aren’t going to last long.

The other important insight, beyond your immediate performance in your role, is that you are not engaging in any cognitively difficult activities. When you subtract the practice of cognition through critical thinking and coding, what is actually left? What is the purpose of your role, and perhaps more crucially, how do you think your skills will fare in the next three, five, and ten years if this is your default mode?

We need to avoid this path at all costs. Hopefully, by doing so, you will not only be an effective manager; there will also be nothing for you to lose, as per the central mental model of this article.

An antidote

Building on our inverted case, we need an antidote to each of those four rules: one that gives you opportunities to keep your coding skills sharp, so that you stay close to the details and get to practise contributing to what your team is building, but that also gives you ample opportunity to sharpen your cognitive abilities by offloading only the right work to AI.

Rule number one: a minimum effective dose of coding

We know that as managers we (mostly, there are some exceptions) cannot be on the critical path for shipping features. It typically ends up slowing the team down. However, that doesn’t mean we have to stop coding entirely. We just need to find the minimum effective dose to keep our skills alive.

There are a number of ways to do this. One is building internal tools, scripts, or prototypes that help the team but aren’t blocking production. For example, I built a brag doc generator that could pull the descriptions and comments from issues I’d contributed to on Linear, and then use a local LLM via Ollama to summarise them for me. This was a fun activity that only took around an hour with Claude Code.
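
A tool in that spirit can be sketched in a few dozen lines of Python. To be clear, this is not the tool I built, just an illustration of the shape: the GraphQL query is approximate, and the model name and prompt wording are placeholders. The only real interfaces assumed are Linear’s GraphQL endpoint and Ollama’s local REST API.

```python
import json
import os
import urllib.request

LINEAR_URL = "https://api.linear.app/graphql"
OLLAMA_URL = "http://localhost:11434/api/generate"

# Approximate query: recently updated issues assigned to the caller.
ISSUES_QUERY = """
query {
  issues(filter: {assignee: {isMe: {eq: true}}}, first: 50) {
    nodes { title description }
  }
}
"""

def fetch_issues(api_key: str) -> list[dict]:
    """Pull issue titles and descriptions from Linear's GraphQL API."""
    req = urllib.request.Request(
        LINEAR_URL,
        data=json.dumps({"query": ISSUES_QUERY}).encode(),
        headers={"Authorization": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["issues"]["nodes"]

def build_prompt(issues: list[dict]) -> str:
    """Turn raw issues into a summarisation prompt for the local model."""
    bullets = "\n".join(
        f"- {i['title']}: {i.get('description') or ''}" for i in issues
    )
    return (
        "Summarise the following work items into a short brag document, "
        "grouped by theme:\n" + bullets
    )

def summarise(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to a local Ollama instance and return its response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    issues = fetch_issues(os.environ["LINEAR_API_KEY"])
    print(summarise(build_prompt(issues)))
```

The point is less the specifics and more that everything runs locally against tools you already use, so an hour with a coding agent really is enough to get something working.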

Another is pair programming with your team. This is a high-leverage activity that gets you close to the workflow (the friction of the tools, the build times, the environment quirks) without the pressure of delivering a feature solo. It keeps the “finger feel” of the craft real. It works even better remotely than in person, and it helps you really see what your team is building.

And of course for those of you that love programming, there’s always the evenings and weekends for hobby projects.

Rule number two: get your hands dirty with AI

Do not outsource your understanding of frontier tools to others. Take an active and passionate interest in the tools that are reshaping our industry. As I hope I’ve shown in previous articles on improving your strategic thinking and decision-making with LLMs, AI is not just for engineers; it’s for everybody. I couldn’t work without it now. You too should become well-versed in the latest and greatest through personal, hands-on experience.

Instead of just reading articles and opinions about what is best, find out for yourself. Schedule weekly “lab time” (even just one hour) to test new AI tools personally. Understand the different modes they offer, from prompting to agentic interfaces, the capabilities of the different models, which ones work best for writing and coding, and which IDE you prefer. Keep those preferences up to date through constant trial and iteration. For example, what exactly is different between Antigravity and Cursor, other than Antigravity having fewer available models? How does Antigravity’s agent mode compare to Codex’s? Try them out and see.

Reclaim the joy of creation without the pressure of production. This is how you stay ahead of the curve, rather than having it explained to you. Additionally, when your team sees that you get as stuck as they do, it builds respect, and they’ll be more likely to trust your judgement when it comes to deciding which tools the team should use and how.

Rule number three: become an active participant

Fight hard against the trope of the manager as the approver of all things. Move from being a “reviewer” to a “participant” in your team. Force yourself to ask at least one question on a PR or design doc that requires you to open the IDE or a diagramming tool, or query the logs, to answer.

I call this “diving down the stack.” The idea is that, as a manager, you go one or two levels deeper than your team would typically expect. It should surprise them. Again, this builds your confidence in what your team is producing (you see and care about the details) and also builds trust and respect with those who are building it (“wow, they see and care about the details!”).

If you can’t regularly find the time and space to dive down the stack technically, then at least dive down the stack with logic and reasoning. There are many mental models you can apply here to engage in productive conversations with your engineers, improving the work your team is doing while ensuring that you’re exercising your cognitive ability.

For example, in a previous article, The Beauty of Constraints, we outlined a number of techniques that you can actively wield to encourage your team to do less unnecessary work, whilst also ensuring that you are completely across the constraints, requirements, and trade-offs of the work you’re leading.

Becoming an active participant in your team sometimes means doing the things that don’t scale. However, doing the things that don’t scale is the way to be successful as a manager today. You need to understand the code that’s being shipped, the architectural decisions, the metrics, requirements, and the constraints. But the good news is, in terms of use it or lose it, if you are doing the things that don’t scale, you’re keeping yourself incredibly sharp at the same time.

Rule number four: the “thinking first” protocol

Finally, and importantly, we need to think about how not to offload all of our cognitive ability to AI. Whilst it’s true that there are no long-term studies on the effect of AI on our cognition, the adjacent studies referenced at the top of the article suggest it would be wise to keep practising our cognition in the spirit of using it and not losing it.

To me, practising our cognition is about understanding what to offload and what not to offload. The way that I’ve been navigating this choice is as follows:

  • Is the task that I’m doing completely menial? Will executing and completing that task teach me nothing new? For example, tasks like cleaning data, summarising a document, or doing some research on the web where I would have had to click 40+ links on Google, and so on. If that is the case, then I can happily delegate that completely to AI and then just review the output.
  • If the task is not menial, then I always make sure that I take a first pass myself using my own cognition. This helps me arrive at an initial solution, and then I can use that as a base case to start interacting with AI as a way of critiquing and expanding my work. For example, if we needed to make some improvements to our career matrix, I wouldn’t simply dump them into a prompt and then ask it what I should do. Instead, I would take the time to read and review them myself first, come up with my own suggestions for improvement, and then use that as the starting point for prompting.
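
The two checks above can be written down as a toy triage function. In reality this is a judgement call, not code, but encoding it makes the default explicit; the function name and wording here are my own illustration, not an established protocol:

```python
def triage(task: str, is_menial: bool, teaches_nothing: bool) -> str:
    """Decide whether a task is safe to offload fully to AI.

    Mirrors the two questions above: only tasks that are both menial
    and teach us nothing new get fully delegated; everything else gets
    a human first pass before AI is brought in to critique and expand.
    """
    if is_menial and teaches_nothing:
        return f"delegate: hand '{task}' to AI, then review the output"
    return f"think first: draft '{task}' yourself, then prompt AI to critique it"
```

For example, `triage("clean the CSV export", True, True)` delegates, whilst `triage("revise the career matrix", False, False)` keeps the thinking with you.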

A great analogy here is working like a surgeon. Geoffrey Litt (referencing a theme covered in The Mythical Man-Month) argues that we should be the surgeon, making the key decisions and taking the key actions, while AI should be the team of assistants to whom we delegate the grunt work.

Think about what it means to use AI like a surgeon in your own role, whether that be in coding, strategic thinking, or researching and understanding information in order to make decisions. This way you fight cognitive offloading, and instead only delegate the things that you would ideally want to automate in the first place.

Wrapping up (the presents)

We began this article by exploring the biological reality of “use it or lose it,” a principle that applies as much to our cognitive abilities as it does to our muscles. We’ve seen that there is a possibility that the “competence pacifier” of AI, if left unchecked, may accelerate the decay of the very skills that make us effective leaders: our critical thinking, our technical context, and our ability to reason from first principles.

However, the path forward isn’t to reject AI, but to use it with intent. By doing an inversion pass on what it means to be an ineffective leader, we can follow the antidotes we’ve outlined: finding your minimum effective dose of coding, getting your hands dirty with the tools, becoming an active participant in your team’s work, and adopting a “thinking first” protocol. By doing so, you can ensure that you are sharpening your mind rather than dulling it.

The action for you in 2026 is clear: be the surgeon, not the passenger. Engage your critical thinking explicitly before you prompt. Dive into the details. Do things as a manager that don’t scale. By doing so, you won’t just preserve your skills; you’ll evolve them, becoming a broader, deeper, and more effective leader.

We are at yet another turning point for managers. More is expected of us, and in the same way that the nature of programming is changing with AI, I strongly believe the nature of management is changing too. The tools are there for you to use, but it’s up to you to use them effectively, in a way that builds your skills rather than dulling them.

Invert, always invert


I recently finished Poor Charlie’s Almanack, a collection of eleven talks by Charlie Munger. When Stripe Press published a brand-new edition with their usual beautiful typesetting and cover design, I couldn’t resist.

For those unfamiliar with Charlie, he is worth getting to know. While his fame stemmed from being one of the greatest investors of his generation, he was also a prolific speaker and writer, and an advocate of cross-disciplinary thinking and the application of mental models. He excelled at taking ideas from mathematics, philosophy, and psychology in order to think about the world in a new way.

In this month’s article, we’re going to be looking at one of the mental models I learned from Charlie and how it can help us as engineering leaders think, plan, and execute better by avoiding failure.

Death by optimism

Software engineering is tough. No matter how hard we try and how carefully we plan, we always end up missing simple things that cause a whole bunch of problems down the line.

This can range from missing features and functionality (“how did we not think of that?”), to edge cases that we haven’t thought about (“how did we not see that coming?”), to poorly conceived rollouts and launches (“why didn’t we check this worked in Italy?”).

Our ability to repeatedly stumble over the same simple mistakes suggests this is just part of human nature. We spend so much time thinking about the big picture, focusing only on the happy path we’re travelling down, that we overlook the stupid mistakes that then shoot us in the foot.

The question, therefore, is why?

Why is it that we make the same mistakes over and over again? After reading Poor Charlie’s Almanack, I’ve come to think that it’s because we apply mental models that are far too optimistic to our planning. Relying on an optimistic outlook, we fail to consider what could go wrong.

Countless software projects in the last twenty years have significantly underestimated time and complexity, have not done enough QA, have not considered their rollouts carefully enough, and haven’t scrutinized their scope to ensure that key features weren’t missing.

There’s clearly a core bug in our thinking, because humans would have solved these planning problems a very long time ago if it didn’t exist.

What is inversion?

If it is the case that thinking too optimistically is one of the reasons that we keep getting things wrong in software engineering, can we instead use a pessimistic mental model? The answer, I believe, is yes, and the solution lies in one of Charlie Munger’s models called inversion.

Inversion is one of the cross-disciplinary mental models I mentioned above, and one Charlie returned to often. He used it to scrutinize investments before he made them. When it comes to planning projects, estimating scope, and especially rolling out changes, inverting the problem can expose the blind spots that optimism leaves behind. The principle of inversion is highly applicable to us as engineers.

As Charlie says: “Invert, always invert.” This is the way that you can save yourself from disaster.

The origin of inversion comes from the 19th-century German mathematician Carl Gustav Jacob Jacobi, who advocated for solving problems by approaching them backwards. Rather than trying to work something out directly, you assume the opposite and work toward a contradiction.

Munger adapts this into practical situations: to succeed at an outcome, you should invert it by thinking about what would have to happen for you to fail, and then completely avoid all of those things in order to succeed.

For example, if your goal is to keep your home clean, instead of thinking about what it means for it to be spotless, you can invert the problem by thinking about what it would mean for it to be disgusting, and then make sure you do everything possible to make it not disgusting.

It follows that for your home not to be disgusting, you would need to:

  • Clean your dishes within 24 hours.
  • Take out the bin when it’s full.
  • Vacuum the floor once a week.
  • Dust surfaces regularly.
  • Do your laundry when the laundry bin is full.
  • …and so on.

And it turns out that by inverting the problem, and then doing all of the items above to avoid a disgusting house, you arrive at the same outcome: a really clean house.

This is the beauty of inversion. Instead of asking yourself, “How do I succeed?” you ask, “How do I fail?” and then systematically avoid those failure modes.

Inversion for engineering teams

Engineering teams can greatly benefit from using the inversion approach when thinking about larger initiatives such as estimation, planning, and rollouts.

As we saw above, whenever we do these activities with default optimism—thinking about what would be nice to have and what we must try to achieve—we often forget the things we must avoid as part of that process, which is where the mistakes creep in.

For example, if we are thinking of rolling out a new feature gradually to our clients, instead of leading our thinking with cohorting and the desire to launch to everyone as quickly as possible (i.e. goal oriented), we should think about what it would mean for the rollout to be a complete disaster, identify those factors, and completely avoid them. Doing so will ensure that we avoid edge cases in our thinking that we may have missed previously.

This is best explained by example.

Let’s imagine that you’ve just made a significant upgrade to part of your application and you’re thinking about how to roll it out. Instead of asking yourself how the rollout could be a success, you could invert the problem and ask yourself how the rollout could disastrously fail.

You identify that it would fail if:

  • Bugs are present.
  • Customers are unable to opt out if they don’t like the experience.
  • Enterprise customers are caught off guard by the change and are not given adequate advance notice.
  • Workflows take more clicks, scrolls, or text inputs than they did before the upgrade.
  • It looks worse, or is less intuitive, than the previous functionality that it replaced.
  • It is slower than the previous workflow to render.

I’m sure there are other examples you can think of here too.

If you can take that list of reasons that the rollout could fail, and then systematically work to put protections in place so you avoid them, then it would follow that you would have a successful rollout.

Doing an inversion pass

I’d like to propose that the next time you do a significant piece of work, you do an inversion pass. It will help you systematically identify failure modes and build your defenses before you embark on whatever you’re about to do.

Below is a template that you can copy and edit for your own needs.

Setup

Before we get going, we need to work out who’s doing what.

  • Begin by assigning roles. One person should be the facilitator, who keeps time and the discussion flowing. There should also be somebody acting as a scribe who captures the conversation.
  • Then define the scope. Decide what it is that we’re trying to invert, which could be a feature rollout, an infrastructure change, or an architectural decision, etc.
  • Make it clear that no idea is too pessimistic, and that today we are being paid to be cynics.

Inversion questions

With roles defined, work through these questions as a group.

The facilitator keeps the group moving and on time, and the scribe ensures that everything is captured. Questions that are in quotes are intended for the facilitator to ask the group.

  1. Catastrophic failure. “What would make this an absolute disaster?” If this was to cause a P1 incident, what could some of the likely root causes be? What kind of scenario would have happened in order for that to trigger? Which single component failure would cascade most dramatically?
  2. Silent degradation. “How could this fail without us knowing?” What metrics are we not monitoring that we should be? Which failure modes wouldn’t trigger our existing alerts? Where might we have blind spots in our observability and logging? What could degrade slowly enough that we wouldn’t notice until customers complained?
  3. Rollback. “What if we need to roll this back at 3am?” Can this change be reversed? How long would rollback take? Is there anything that’s irreversible? At what point does rolling back become more dangerous than rolling forward? What happens if none of this works?
  4. Load and scale. “What happens when real load exceeds our assumptions?” If you’ve currently estimated a certain load, what would break at 10x that load? Are there any resources that you have assumed will work properly that could go wrong? What kind of behavior exists in high traffic scenarios with extreme contention or queuing?
  5. Dependency failures. “What if everything that we depend on breaks?” List out all of the external services that you rely on, such as databases and APIs. For each of them, think about what could go wrong if they became slow or unavailable. Think about whether you should have retries or circuit breakers.
  6. Human error. “How could we break this ourselves?” Are there any operational steps that could be prone to human error? Do we have everything written down in playbooks in case whoever is on call doesn’t understand what to do, or are we missing documentation?
  7. Data integrity and security. “Is it possible for us to corrupt or lose data?” Have we thought about race conditions that could happen? Or have we assumed transactionality that doesn’t actually exist? What happens if we process the same event twice, or if we skip one event? How do we know if data becomes inconsistent? Are there any attack vectors that we need to think about? Which data are we exposing and to whom?

You may want to add or remove inversion questions depending on the kind of project that you’re doing.

Once you’ve captured your list, go through and mark each item as one of:

  • A showstopper, which must be addressed before launch.
  • A mitigation, which will need monitoring, fallbacks, or workarounds.
  • An accepted risk, where we understand the risk and are moving forward regardless.

When you’ve got to this point, you should have a list of actions captured by your scribe that you can go and work on, plus a documented inversion pass outcome that proves you have done this exercise. You can use this to generate a risk register, update your design docs, and expand your documentation.
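
If you want the scribe’s output in a consistent shape, the three-way classification maps neatly onto a small data model. Here’s a minimal sketch in Python; the field names and the markdown layout are my own invention rather than a standard format:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    SHOWSTOPPER = "Showstopper"   # must be addressed before launch
    MITIGATION = "Mitigation"     # needs monitoring, fallbacks, or workarounds
    ACCEPTED = "Accepted"         # risk understood, moving forward regardless

@dataclass
class RiskItem:
    question: str      # which inversion question surfaced it
    description: str
    severity: Severity
    owner: str = "unassigned"

def to_risk_register(items: list[RiskItem]) -> str:
    """Render captured items as a markdown risk register, showstoppers first."""
    order = [Severity.SHOWSTOPPER, Severity.MITIGATION, Severity.ACCEPTED]
    ranked = sorted(items, key=lambda i: order.index(i.severity))
    lines = ["| Severity | Risk | Source question | Owner |",
             "|---|---|---|---|"]
    for i in ranked:
        lines.append(
            f"| {i.severity.value} | {i.description} | {i.question} | {i.owner} |"
        )
    return "\n".join(lines)
```

Pasting the resulting table straight into your design doc is usually enough to prove the exercise happened and to keep the showstoppers visible until they’re closed out.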

Try it yourself

Sometimes being pessimistic is good.

Using the principle of inversion, you can identify gaps in your planning and thinking, which can make your projects better, safer, and more resilient.

In your next project, try out an inversion pass. Run the exercise on your own or do it with your team and see whether it helps you feel more confident about what you’re going to be doing next.

Additionally, think about inversion in your own life. If you were to apply the inversion principle to how you manage your finances or what you want to achieve next year, could it potentially help you to think about these goals in a new light? Perhaps it could increase your confidence in getting them done to a high standard.

Remember: invert, always invert. If it worked for Charlie, it works for me.