Six Reasons Why Managers Make Poor Decisions
This is a long one! You may wish to go get a cup of coffee before reading. Better yet, bring the whole coffee pot!
Recently, a friend and I sat talking at a café. We talked about Science-Fiction, Fantasy, a bit of geology, and eventually, we began talking about management. Specifically, my friend asked me if I have any idea why managers so often make very bad decisions.
After pushing my management rant button, my friend sat back, and watched me go off on a long lecture, waving my arms, talking a bit too loud, and much too intensely for a café environment. Luckily, we sat in a corner, so I do not believe I did too much damage to the reputation of the place. (There used to be a café in Gothenburg, Café Sirius, where passionate discussions about odd topics were part of the normal entertainment. I miss that place!)
Enough rambling! This article is a somewhat consolidated, and tidied up version of my impromptu lecture. I gave six different reasons, from six different perspectives, why managers make poor decisions. Here are the perspectives:
- Bounded Rationality
- Satisficing
- Double-Loop Learning
- The OODA Loop and Maneuver Warfare
- Politics
- Neuroscience
I’ll go through each one. You will notice there is quite a bit of conceptual overlap. Nevertheless, each one of the items in the list above provides a lens you may be able to use to understand why there are so many bad management decisions.
If you are lucky, you may even be able to, occasionally, use the insights to improve your own decisions a bit. If you are extremely dedicated, and willing to take personal risks, you may even be able to influence your organization, so decision making is improved at an organizational level.
Bounded Rationality
What we know and understand is often much less than we need to know and understand in order to make a good decision.
The idea of bounded rationality is from economics. It was originally proposed by Nobel Prize laureate Herbert A. Simon in his 1947 book Administrative Behavior. Before Simon’s book, economists had a rather naive view of human economic decision making, homo economicus.
According to the homo economicus view, humans are perfectly rational, and all knowing, beings who always make decisions that maximize utility as a consumer, and profit as a producer. Homo economicus is also capable of arbitrarily complex reasoning and calculations, in order to further its goal.
Simon had never met a homo economicus. Upon doing research, he found that homo economicus was no more real than unicorns, the Loch Ness monster, or bigfoot. Possibly even less real.
What he found instead were human beings who are sometimes very clever, and who sometimes bumble and fumble, and spill their morning coffee in their laps.
He found that people sometimes pay exorbitantly for things with very low utility, and sometimes refuse a bargain. He found people who made disastrous financial decisions, as well as brilliant decisions. Not uncommonly, a person could appear to be a genius in one situation, and a hopeless moron in another.
Simon came up with two ideas that explained this erratic behavior, bounded rationality, and satisficing. We’ll deal with bounded rationality in this section, and satisficing in the next.
Bounded Rationality is the idea that human rationality is limited by:
- the difficulty of the problem that requires a decision
- the cognitive ability of the people making the decision
- the time available to make the decision
These constraints can push us to make sub-optimal decisions, even if we are willing to make the effort to make good decisions, which is by no means a given. See the section on satisficing below.
Here is an example of bounded rationality from my personal experience:
I worked as a Scrum Master and team leader in a large software development program using SAFe. At a Planning Interval (PI) planning event, the Solution Train Engineer (STE) repeatedly complained that the teams were not predictable enough.
Aging Analysis makes it possible to prioritize work items by age. This reduces wait time of Work-In-Process, and thus, also cycle time.
I decided to do something about it. There are lots of things that can be done to make software delivery more predictable. Most require time and training, not just training the teams, but also training management. That was simply not an option, so I went for a simpler solution:
I made an aging analysis of all items we were currently working on. This made it possible to prioritize by age, which, in turn, makes it possible to reduce cycle time. This narrows the distribution of cycle times, which makes the entire development process more predictable.
I told the Solution Train Engineer and the Release Train Engineer (RTE) about it, and they immediately told me to stop wasting my time on nonsense.
Neither of them could see any link between what I did, and what the STE had told us to do, so they made a quick and (according to me, not according to them) completely irrational decision. That is bounded rationality.
A short time thereafter, I was approached by the RTE of another part of the same program. He asked me if I could show him what I had done. I did, and he asked me if he could get a copy of a presentation I had made on the topic, and a prototype I had made.
I gave them to him, and he developed a BI application that could help his part of the program reduce cycle times, and thus also reduce cost and increase predictability.
The difference in the quality of decision making, I believe, comes down to very different boundaries of rationality.
From a bounded rationality perspective, what can we do to improve decision making? Well, we have three factors that shape the boundaries of our rationality, so let’s examine each one, to see if we can generate a few ideas.
BTW, in the following sections, I do list methods and tools I use to expand the boundaries of my own rationality. This is (mostly) not to show off. The purpose is to show you that there are ways to improve decision making. There are plenty of other methods than the ones I mention, so you may settle on a completely different set of methods and tools. That’s fine! Anything goes, as long as it works!
Difficulty of the problem
One way to deal with large, intractable problems, is to break them down, look at the parts, and see if you can fix one of them. Once you have solved a part, you can look at the problem again, and see if you can find more parts you can solve.
“The teams are not predictable enough” is a large, intractable problem. First of all, there were many teams, and I could influence only one of them, so I focused on that.
That reduced the size of the problem by about 95%. Yes, it may also have reduced the effectiveness of the solution by 95%, but that is not a reason to give up. A 5% solution is a good start.
Low predictability in output from a software development team has many causes. Here are a few, but by no means all:
- External dependencies
- Quality of the codebase
- Skill level of the developers
- Infrastructure (CI/CD pipeline, etc.)
- Development practices
- Management practices
- Type of process (pull vs. push, etc.)
- Prioritization model
- Queues
- Organizational structure
- Team structure
…and so on.
Most of these were difficult to influence, would take time to influence, or were a combination of both difficult and time consuming.
However, to influence the prioritization model, I could do most of the work myself. I also knew that if I could change the prioritization model, I could reduce the queues of Work-In-Process (WIP). If I could reduce WIP, the cycle times would also be reduced. That would reduce variation, and reducing variation would improve predictability.
Focusing on Work-In-Process meant I would have to convince the developers, but not the Product Owner, or (at this stage) the RTE and STE.
I had analyzed the probability distribution of work item cycle times, so I knew we were dealing with a long tail distribution that was stretched out quite a bit. It was so stretched out that it was likely that some of the work had been abandoned, or forgotten, while in development.
I did an aging analysis, which told me which work items were delayed the most. Then, I worked on convincing the team to prioritize them, so they could be marked off as done.
Consistently focusing on the oldest items in process will, over time, reduce the average cycle time. Reduced cycle time means reduced variation. The long tail in the probability distribution is shortened, and that means less variation, and thus increased predictability.
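If you want to try the aging analysis yourself, here is a minimal sketch of the idea in Python. The item IDs and dates are invented for the example; in a real setting the data would come from your issue tracker or project management tool.

```python
from datetime import date

# A minimal aging analysis: every in-progress work item gets an age
# (days since work started on it), and the oldest items are listed
# first, so the team can focus on finishing them. All data is made up.
work_in_process = {
    "PAY-093": date(2023, 11, 27),
    "PAY-101": date(2024, 1, 8),
    "PAY-117": date(2024, 2, 19),
    "PAY-130": date(2024, 3, 4),
}

today = date(2024, 3, 11)
aging = sorted(
    ((item, (today - started).days) for item, started in work_in_process.items()),
    key=lambda pair: pair[1],
    reverse=True,  # oldest first
)

for item, age_in_days in aging:
    print(f"{item}: {age_in_days} days in process")
```

The reason this chain of effects works is Little's Law: average cycle time equals average Work-In-Process divided by average throughput. Consistently finishing the oldest items, instead of starting new ones, brings WIP down, and cycle time follows.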
So, I had reduced the problem to something I considered manageable. The mistake I made, was that I believed the solution would be explainable, just because I understood it. That brings us to the next major factor determining the boundaries of rationality.
Cognitive ability of the people making the decision
Cognitive ability is our ability to acquire, process, transform, store, and otherwise manipulate information. Part of this is innate ability, but trainable skills are extremely important. Many of those skills fall into the following categories. Along with each category, I’ll mention some of the skills and tools I use:
- Decision making — The cognitive process resulting in selecting a belief or course of action, among a number of alternatives. The process can be rational, or irrational, in varying degrees.
There are plenty of frameworks and methods for aiding decision making, for example Evaporating Cloud from The Logical Thinking Process, and options thinking from Lean Software Development, an agile software development methodology. The Cynefin framework is a well known, and very useful, decision making framework.
One of the simplest ways to aid decision making, and problem solving (below), is to bring in more people. This can get messy quick, if you do not have a good method to use, but if you do, you can get large groups of people to work together very smoothly.
Methods I use for this are Crawford Slip Brainstorming, good for working with up to 500 people, The Logical Thinking Process, and very simple Heat Mapping. Large groups can be split into smaller groups, and work separately. Their results can then be consolidated to determine a final decision, or a set of solutions, whichever is desired.
- Problem solving — The process of achieving a goal by overcoming obstacles. There are several categories of problems, and these may require different sets of problem solving skills.
I have a whole library of problem solving methods, but tend to rely on Crawford Slip Brainstorming, The Logical Thinking Process, queuing theory, statistical methods, systems archetypes, Battle Maps, Relationship Mapping, and a few other tools.
Sometimes, I invent new tools when I need them. Battle Maps is one example. They have begun to spread, and you can now find material about them created by people other than me.
- Metacognition — A prerequisite for self-reflection (below). Awareness of one’s own thought processes, and patterns of thought. For example, a manager may become aware that they are using a deterministic mode of thinking, and switch to a probabilistic mode, if that provides better insight into how to solve a problem. Lack of metacognitive skill can completely block the ability to make good decisions.
Personally, I have a collection of different paradigms I can use to understand my own, and other people’s ways of thinking. These include, but are not limited to, Lean, Theory of Constraints, Statistical Process Control, Taylorism (Scientific Management), Systems Thinking, various Agile frameworks, toolkits, and methodologies, and Cynefin.
It is worth noting that Scientific Management above is chock-full of well-intentioned mistakes. It is the default mode of operation for many companies, so understanding it helps identify many problems, and their causes.
- Introspection — A prerequisite for self-reflection (below). The examination of one’s own thoughts and experiences.
I draw diagrams and write things down, as a way to examine my own thoughts. Sometimes, I just have a cup of coffee and think for a bit. Writing this blog post is to a large part an exercise in introspection. It allows me to examine my own ideas, and maybe learn a few new things while I am doing it.
- Self-reflection — The ability to see, evaluate, and understand my own thought processes. Self-reflection relies on metacognition and introspection (above).
- Literacy — The ability to read and write. I do read a lot. I also write, mostly presentations and blog posts, but occasionally books. Reading is an excellent way to learn new things. Writing allows me to organize and structure ideas, recruit other people to test them, and spread them around when I am certain they work.
- Logical reasoning — A mental process designed to arrive at a conclusion in a rigorous way. Logical reasoning starts with a set of premises, and applies rules of logic to reach conclusions.
I use sets of rules about logical reasoning, such as the Categories of Legitimate Reservation from The Logical Thinking Process, Anthony Weston’s A Rulebook for Arguments, and Karl Popper’s ideas about demarcation and falsification.
Perhaps most importantly, I visualize problems by drawing them. This makes it easier for me to think things through. When I work with others, it also makes it easier for them to think things through.
A part of the reasoning process is crash testing! When I have an idea, I try to find flaws in it. When I work with others, and have a draft of an idea, we stop, and I tell the entire group to try their best to shred the idea to pieces. Most people are not used to this, but they can usually see the benefits. It is much cheaper to find flaws in an argument at an early stage, than it is to find them later.
I am pretty sure I have a lot of room for improvement in the logical reasoning department.
- Abstract thinking — The process of taking specific examples, and deriving general rules and concepts from them.
I draw from my experience with object oriented programming. Queuing theory is a set of abstractions about the behavior of queues, and systems archetypes are a set of abstractions about the behavior of social systems, including companies. Bounded rationality is an abstraction based on studies of human behavior. I use a lot of abstractions, but I do keep in mind that there is plenty of local variation.
- Critical thinking — The process of analyzing underlying facts, evidence, observations, and conclusions, and determining whether they are correct, and what their consequences are. I use the tools listed under logical reasoning. I also bring in other people, to bring perspectives and knowledge other than my own to bear.
The methods I use range from simple discussions, often with a whiteboard, or pen and paper, to Crawford Slip Brainstorming, and other methods for collectively thinking about a problem.
- Mental arithmetic — Arithmetical calculations without the help of paper and pencil, a calculator, or a computer.
I often do quick calculations in my head to see if an idea is reasonable. I prefer to cheat though, and use calculators and computers when they are available. Excel is my friend.
Do not treat the above as a normative, or exhaustive list. The list tends to vary a bit depending on who you ask, and possibly when you ask. It should serve as a starting point for further investigation into the topic of cognitive ability though.
The time available to make the decision
Most management decisions have a time constraint that limits how much information we can gather about the problem we are trying to solve, and how long we can work on a solution.
There are a couple of ways we can tackle time constraints. A simple one is to reschedule. Just move the due date of the decision.
In Lean, there is the idea of the last responsible moment. Wait as long as possible, but no longer, before making the decision. This will enable you to gather and process as much information as possible before you decide.
Another route, is to use heuristics to make quick decisions that are good, even though they are not optimal. A heuristic is a rule that enables quick problem solving. For example:
- When a process step is overloaded, or stands still, look at the closest upstream process step to find the problem. Repeat if necessary. Value Stream Mapping is a really handy tool here.
- When trying to determine duration or cost of a project, take the outside view! Instead of breaking the project down into parts, and estimating them, look at similar projects, and check how long they took, what they cost, what percentage succeeded, how they failed, etc. I use Reference Class Forecasting, and sometimes Monte Carlo Simulation. (There is a small sketch of the idea right after this list.)
- When trying to reduce the lead time of a process, prefer reducing queues over increasing throughput. Practical ways to do this, are setting Work-In-Process limits, kanban, Drum-Buffer-Rope, and CONWIP.
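To make the outside view a bit more concrete, here is a minimal Monte Carlo sketch in Python. Instead of estimating remaining work bottom-up, it resamples historical weekly throughput to forecast when a backlog might be done. All the numbers are invented; you would feed it your own history. Reference Class Forecasting works on the same principle, but at the level of whole projects rather than weekly throughput.

```python
import random

# Work items finished per week over the last 20 weeks (invented numbers).
weekly_throughput = [3, 5, 2, 4, 0, 6, 3, 4, 5, 1, 2, 4, 3, 5, 4, 2, 6, 3, 4, 2]
backlog_size = 60  # remaining work items

def simulate_weeks() -> int:
    """One simulated future: draw from past weeks until the backlog is empty."""
    remaining, weeks = backlog_size, 0
    while remaining > 0:
        remaining -= random.choice(weekly_throughput)
        weeks += 1
    return weeks

random.seed(1)
outcomes = sorted(simulate_weeks() for _ in range(10_000))
p50 = outcomes[len(outcomes) // 2]
p85 = outcomes[int(len(outcomes) * 0.85)]
print(f"50% chance of finishing within {p50} weeks, 85% within {p85} weeks")
```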
A third route is to buy yourself more time by parallelizing processes. Blitz Planning, creating a Work Breakdown Structure, Critical Path, and Critical Chain, are all useful. In the cases of Critical Path and Critical Chain, you may wish to replace their estimation and schedule calculation methods with Reference Class Forecasting and/or Monte Carlo Simulation.
Maneuver Warfare has useful principles and ideas about dealing with time constraints, and creating them for opponents. I won’t delve into details here.
Of course, you can expand the boundaries of your rationality using tools different from the ones I use. The most common mistake I see managers make, is trying to make decisions about complicated, or complex problems without any useful tools at all.
Of all the tools and methods I have listed above, if I had to pick one, it would be to involve people with as many different perspectives as I can, and make the best possible use of their competence, knowledge, and skills.
Satisficing
Herbert Simon’s second big idea was that most of the time, people make decisions by searching through available alternatives until they reach an acceptability threshold. At that point the decision is made, even if it does not maximize any specific objective. Simon called this satisficing, from satisfy and suffice.
Simon might have been able to find a better name, but satisficing was good enough to reach his acceptability threshold. (Mine too. Despite the quip, I like it! It satisfices me.)
The satisficing theory has two models for how to make decisions: heuristic satisficing, and optimization.
Heuristic satisficing
Heuristic satisficing is what people normally use when choosing between different courses of action. It basically works like this (there is a small code sketch after the list):
- Set an aspiration level A.
- Choose the first option that meets or exceeds A.
- If no option that satisfies A has been found after a time, let’s call it T, change A by an amount dA.
- Repeat until a satisfactory option is achieved.
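Here is that sketch, a toy version of the heuristic in Python. The options and their scores are invented; the point is only to show the shape of the loop.

```python
import random

def satisfice(options, aspiration, time_budget, d_aspiration):
    """A toy model of Simon's satisficing heuristic.

    Walk through the available options and return the first one whose
    score meets the aspiration level A. Every `time_budget` examinations
    (the time T), adjust A by d_aspiration (dA) and keep searching.
    """
    examined = 0
    while True:  # keep going until something satisfices
        for name, score in options:
            if score >= aspiration:
                return name, score, aspiration
            examined += 1
            if examined % time_budget == 0:
                aspiration -= d_aspiration  # T has passed: change A by dA

# Invented options, each with a quality score between 0 and 1.
random.seed(7)
options = [(f"option-{i}", random.random()) for i in range(20)]

name, score, final_a = satisfice(options, aspiration=0.95,
                                 time_budget=5, d_aspiration=0.1)
print(f"Chose {name} (score {score:.2f}) once A had dropped to {final_a:.2f}")
```

Note that the loop has no memory of what it has already rejected, and it stops at the first acceptable option; nothing in it ever asks whether a better option exists, or whether the aspiration level itself makes sense. That is exactly the trap in the stories below.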
While not entirely bad, heuristic satisficing can easily lead to poor decision making.
Story time again. I’ll give you two examples of the effects of satisficing, one large scale, one affecting individuals:
A company used a Waterfall development methodology in the early 90’s. This caused problems with long lead times, high cost, missed deadlines, poor software quality, and busted budgets. (I won’t go into all the reasons for why Waterfall causes this. I have written plenty about it, and so have many others.)
By the late 90’s, the organization decided that Waterfall did not reach their aspiration level, so they looked for something else. They found The Rational Unified Process (RUP).
The company enthusiastically bought into the RUP methodology, and bought a database with all the RUP practices, and tools designed to work with RUP. In practice, they ran RUP projects much like they ran Waterfall projects. The changes were mainly in terminology.
They completely missed that RUP is a toolkit you can use to build a methodology. They never built the competence necessary to create functioning methodologies using RUP.
Around 2005, the organization decided that RUP did not reach their aspiration level, so they looked for something else. The first thing they found was Scrum.
The company enthusiastically bought into the Scrum framework, and bought tools designed to work with Scrum. They also sent people on two day courses, so they could become Scrum Masters, and Product Owners.
In practice, they ran Scrum projects much like they ran RUP projects. The changes were mainly in terminology, with one exception: They began working in increments, called Sprints in Scrum, more consistently than before.
However, they missed that Scrum is only a framework you can build a methodology around. Scrum itself is not intended to be a functioning methodology. They never built the competence necessary to create functioning methodologies based on Scrum.
Around 2020, the organization decided that Scrum did not reach their aspiration level, so they looked for something else. The first thing they found was Scaled Agile Framework (SAFe).
The company enthusiastically bought into the SAFe framework, and bought tools designed to work with SAFe. They also sent people on four day courses, so they could become Release Train Engineers, and Solution Train Engineers. SAFe is a wrapper for Scrum teams, so they continued to use the Scrum framework, still without developing working methodologies.
The company embarked on a major transformation program to implement SAFe everywhere. To do this, they used a transformation methodology provided as part of SAFe. This transformation methodology is a phased Waterfall methodology.
At first, everything looked good, but implementing SAFe using Waterfall turned out to be slow, expensive, and failure prone. Also, the SAFe implementation itself had many of the characteristics of the Waterfall process they started with more than twenty years earlier, and thus also had many of the accompanying problems.
Around 2023, the company gave up on SAFe, and decided to use Waterfall. At this time, the institutional memory of their first disastrous attempt to use Waterfall was long gone.
Also, they never saw the irony in going back to Waterfall because their Waterfall-based attempt to implement SAFe had failed.
At no point did the company seriously investigate what software development related problems it had, and explore a palette of different solutions. They always went straight for what they considered to be popular at the time. There were a few local exceptions, where managers and developers did really good work figuring out very good ways of working, but those initiatives did not spread to the organization at large.
As far as I know, at the organizational level, the organization still does not understand the differences between methodologies, frameworks, and toolkits, and the economic consequences of not understanding those differences.
Nor did they consider whether different parts of the organization had different problems that required different local solutions. Instead, they assumed, without any evidence, that if everyone worked the same way, everything would be just fine.
Most people in a position to make important decisions, followed the heuristic above, with results that were much less than optimal.
One of the problems with heuristic satisficing, is that the method has little or no memory. You can get into loops where you make a series of decisions that will never lead you to a good decision.
This does not just affect decisions made in large organizations. We find problems caused by satisficing everywhere.
Gear Acquisition Syndrome is a heuristic satisficing variant where photographers keep buying new gear in order to become better photographers. It does not work, because the problem is lack of skill, not bad gear.
The second example is about satisficing on an individual level:
I am an amateur photographer, and amateur photographers, especially beginners, are often afflicted by GAS, Gear Acquisition Syndrome.
Photographer culture overemphasizes the importance of gear: cameras, lenses, and many, many different kinds of accessories.
What separates a photographer who takes great pictures from those who don’t, is skill, pure and simple. Gear matters to an extent, but much less than many photographers believe. It is possible to take great pictures with very basic gear.
Instead of building skill, which takes time and effort, many photographers try to get better pictures by buying better cameras, better lenses, and various accessories.
When such a photographer has bought a new toy, they are happy for a while, but eventually they notice that their photos have not improved. They then try to solve the problem by buying new gear, and get trapped in an infinite heuristic satisficing cycle, which never stops because the gear is not the problem!
Over time, a whole industry of YouTube photographers has emerged, reviewing and promoting gear, and this feeds the problem.
Optimization
Satisficing can lead to a slow downwards trajectory that leads an organization to a slow, painful death. If you want to avoid that, what’s the cure?
First of all, set your aspiration levels higher. If you aim for the stars, you may at least hit the treetops. When you have done that, iterate!
You will suck when you start, and that is okay! Sucking really badly at something, is a prerequisite for getting good at it.
You can do this at a personal level.
Here is an example from my photography hobby. While I have chosen photography as an example here, you can do the same thing with anything you are interested in, for example management:
I am a good amateur photographer today, because I set my aspiration level high. When I set out, I wanted to be as good as the best photographers in the world, people like Joe McNally, Gregory Heisler, and Arnold Newman. (If you are interested in photography, you know who they are.)
I didn’t want fame, or fortune, but I did want to create photos I could be proud of.
The first year I trained myself in photography, I shot more than 8,000 pictures. That is, I made more than 8,000 iterations. Only one picture reached my aspiration level, and I shot that one more or less by mistake. The second year, I shot 14,000 photos, and got maybe five shots that reached my aspiration level.
The third year, I began to be able to come back with four or five good photos each time I went out to shoot.
That’s when I raised my aspiration level again. I realized I would not be able to go much further on my own, so I started a network for photographers, with frequent meet-ups, where we trained photography, had coffee, and talked about photography.
I got a few models interested in the meet-ups. They had fun, told their friends, and more models joined. That made more photographers interested, and the number of memberships skyrocketed.
We reached more than 600 members in two years.
For me, this wasn’t just about photography, I also got my own management laboratory, where I could try out various management ideas. One of the things I set out to explore was self-determination theory, about how knowledge workers get motivated, and conversely, demotivated.
One advantage of having a large membership, was that I could find other people who also wanted to set their aspiration levels high.
I pushed my aspiration level higher again, and put together a team for a large project, a photo comic. I wrote the script for a horror adventure, and we set to work.
It failed miserably.
We regrouped, tried again, and failed again.
At that point, I took stock of what had happened, made an analysis, figured out countermeasures, and made new plans.
I talked to the people who had been most interested, told them about the mistakes I had made, and how I wanted to fix them.
We tried again, and succeeded! A Rift in Time, a time travel photo comic, with time travel and dinosaurs, was published in 2015, and is still available to buy in some places.
After that, I continued to push my aspiration levels higher, which unfortunately meant I could no longer focus on the photography network. The things I wanted to do were not interesting at all to most of the photographers in the network. For one thing, I began mixing photography and 3D, so I could create pictures that would have been way too expensive to do with physical sets.
Gaining skills and pushing my aspiration levels had a large impact on which artists I looked to for inspiration. I gradually shifted my attention to William Mortensen, James Gurney, Jamie Chase, Patrick Jones, Greg Broadmore, Jack Kirby, Frank Frazetta, and quite a few more. It is worth noting that only one of these is, or was, a photographer, William Mortensen. The others are painters and comic book artists.
There were a few other photographers that also had very high aspiration levels, but they had different interests than me, so while we are still friends, we don’t do large projects together anymore. However, we occasionally do things that are smaller in scope, but technically complicated, and, I hope, have artistic value.
At this point, I was following my own path, and was developing my own techniques and approaches. And yes, in case you are wondering, you can do the same thing with management skills!
If you are interested in building optimizing behaviors, in yourself, and in others, you may find a good competence model useful as a guide. My favorite, is Shu Ha Ri.
I won’t go into detail about Shu Ha Ri here, I’ll just mention it, because it is one of those things you may find useful on your journey to make better decisions. As with all the other models, methods, tools, and techniques I mention in this article, I drop the names for the benefit of those who are curious enough to investigate themselves.
While I hesitate to call myself a great, or even an exceptionally good photographer, technically, I have, in a sense, reached the Ri level in the Shu Ha Ri competence system, because I have learned from masters, and detached to create my own path.
The Dreyfus Skills Acquisition model comes in two flavours, five levels, and six levels. The six level variant is less common, but accounts for mastery roughly corresponding to the Ri level in Shu Ha Ri.
If you check out Shu Ha Ri, you might also want to check out the Dreyfus Skills Acquisition model. If you do, note that the five level version of the Dreyfus model does not have anything corresponding to the Ri level in Shu Ha Ri. However, the six level version of the Dreyfus model does.
At the low end, Shu in the Shu Ha Ri model roughly corresponds to Dreyfus level two. Dreyfus level two also happens to be where most people get stuck in their professional life, mostly due to lack of training.
Summing up optimization:
If you want to be optimizing instead of just satisficing: set aspiration levels high; iterate to learn; carefully pick people who you want to emulate, and get new paragons as you grow; learn with others if you can; use a learning model as a mental framework for learning, but do have more than one, to keep from becoming too rigidly attached to a single one.
All of this leads us to yet another way to look at learning and decision-making. Read on!
Double-Loop Learning
Chris Argyris was an American professor at both the Yale School of Management, and the Harvard Business School. He had quite useful ideas about why and how people, and organizations learn, and don’t learn, to make good decisions.
According to Argyris’ Double-Loop Learning model, people, and organizations, learn in two ways. The most common way is Single-Loop Learning.
Single-Loop Learning, according to Chris Argyris.
- We have a mental model of how something works.
- Based on the mental model, we have a set of decision-making rules.
- We make decisions according to the decision-making rules, and take an action based on the decision.
- The real world responds to our action.
- We get feedback. That is, we observe the response.
- Based on the feedback, we make a new decision.
It is important to note that with single-loop learning, our mental model never changes. Because the mental model remains the same, the decision rules also stay the same.
This causes problems if the mental model is out of step with reality.
One example is photographers with Gear Acquisition Syndrome (GAS), described above in the section about heuristic satisficing. The same problem can affect managers, and entire organizations. A couple of brief examples:
- An organization had extreme delivery problems, and had not been able to deliver software to customers for two years due to code quality problems. The quality problems were much older than that, but the situation had steadily grown worse over time. The responsible manager thought the problems were due to “the youth of today”. This shifted the responsibility for the problems from the manager, and, in his own mind, absolved him from any responsibility. He had stuck to this belief for about fifteen years.
- An organization defined quality solely as “conformance to specifications”. This is a definition that was made obsolete by the Total Quality Management movement in the 1980s. The quality problems the organization had were at least partially because the organization had not updated its mental model of quality assurance for more than forty years.
Organizations, and managers, that do not update their mental models inevitably become obsolete. The build-up is often slow, and can go unnoticed, until there is a sudden catastrophic break.
Neither people, nor organizations, are doomed to obsolescence due to learning deficiencies though. There is a way out: Double-Loop Learning.
Chris Argyris’ model of both single- and double-loop learning.
In Double-Loop Learning, the feedback does not just affect future decisions, it also affects the mental model. When the mental model changes, so do the decision-making rules.
Double-Loop Learning allows people and organizations to break out of traps where they keep repeating the same mistakes. A couple of examples:
- A manager realizes that no matter how his software and hardware development teams change the way they make time estimates, they never get better. The manager decides to explore a new paradigm, the outside view, a probabilistic approach to making time and cost prognoses. The manager learns Reference Class Forecasting, and teaches it to his teams.
- A software development team uses Scrum. It pushes improving flow as far as it can, and is then stuck. To get to the next level, it has to change from a pure flow paradigm for software development, to a flow plus clean code paradigm. The team trains itself to use Extreme Programming, in combination with techniques from other agile methodologies and toolkits, to reduce code complexity, even out flow, and create a short feedback loop from the users to the team.
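For the programmers among the readers, here is one way to think about the difference between the two loops, using the estimation example above. This is my own toy sketch, not Argyris’ formulation; all names and numbers are invented.

```python
import copy
from dataclasses import dataclass

@dataclass
class MentalModel:
    assumption: str
    padding_factor: float  # the decision rule derived from the model

def decide(model: MentalModel, bottom_up_estimate: float) -> float:
    """Decision rule: pad the bottom-up estimate and commit to a date."""
    return bottom_up_estimate * model.padding_factor

def single_loop(model: MentalModel, estimated: float, actual: float) -> MentalModel:
    """Single-loop learning: feedback only tunes the existing rule.
    The underlying assumption is never questioned."""
    model.padding_factor *= actual / estimated
    return model

def double_loop(model: MentalModel, estimated: float, actual: float) -> MentalModel:
    """Double-loop learning: a big enough surprise makes us revise the
    mental model itself, which changes the decision rules that follow."""
    if actual > 1.5 * estimated:
        model.assumption = "bottom-up estimates are unreliable; take the outside view"
        model.padding_factor = 1.0  # the old rule no longer applies
    return model

model = MentalModel("our estimates are basically right", padding_factor=1.0)
print(single_loop(copy.copy(model), estimated=10, actual=20))
print(double_loop(copy.copy(model), estimated=10, actual=20))
```

The single-loop version keeps the assumption and just turns the padding knob. The double-loop version throws the assumption away, and with it the old decision rule.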
According to Argyris, there is one thing that prevents Double-Loop Learning more than any other:
Put simply, because many professionals are almost always successful at what they do, they rarely experience failure. And because they have rarely failed, they have never learned how to learn from failure. So whenever their single-loop learning strategies go wrong, they become defensive, screen out criticism, and put the “blame” on anyone and everyone but themselves. In short, their ability to learn shuts down precisely at the moment they need it the most.
— Teaching Smart People How to Learn, Harvard Business Review, 1991, by Chris Argyris
Again, according to Argyris, there is a way to solve the problem:
Companies can learn how to resolve the learning dilemma. What it takes is to make the ways managers and employees reason about their behavior a focus of organizational learning and continuous improvement programs. Teaching people how to reason about their behavior in new and more effective ways breaks down the defenses that block learning.
— Teaching Smart People How to Learn, Harvard Business Review, 1991, by Chris Argyris
So, according to Argyris, the solution is to break through the defenses against learning by teaching them to reason about their own behavior.
I believe Argyris is partially right. Most of us, including me, have a tendency to get defensive when we fail, or are not able to solve a problem. When we get defensive, we fail to acknowledge, even to ourselves, that we need to learn something new.
Personally, I like to tackle the problem head on. When I work with a team, whether it is managers, software developers, or some other kind of team, I tell them straight out that sooner or later, I will make a mistake. I tell them to call me out on it as early as possible.
While it is a bit embarrassing to be called out, it is much less embarrassing to be called out early, than to discover late in the game that the fantabulous idea I had wasn’t so great after all.
This also helps create a safe environment. If I dare to admit my mistakes, then it is okay for others too, to admit to making mistakes.
I also point out that most of the time, the value of the information we gain from a mistake, is worth much more than the cost of making the mistake. Quite often, there is a net profit, especially if we design experiments, so we can make small, cheap mistakes that yield a lot of useful information.
I have found that most people understand, and like, this approach.
However, there is a common situation I believe Argyris has missed, or at least not emphasized enough:
Sometimes, very smart people make bad decisions because the knowledge they need to make good decisions is missing!
This brings us to the next section.
The OODA Loop and Maneuver Warfare
The OODA loop is a decision model created by Colonel John Boyd, US Air Force. This is a slightly extended version, modified by Peter Hermann and Henrik Mårtensson.
The weird illustration above is the OODA loop, a model of how humans make decisions, both good and bad. This version is slightly extended from the original by Colonel John Boyd. Peter Hermann and I, independently of each other, added one more element to the Orientation step in the loop: Close Relationships. Peter was ahead of me, and published first, but we had been thinking along the same lines.
The OODA Loop is part of a military strategic framework, Maneuver Warfare, by Boyd. I did a pretty thorough study of the framework in 2008-2010, and wrote a management book that incorporated a civilian version of it. I have used it ever since.
One important difference between Argyris’ Double-Loop and the OODA loop, is that the mental model, called Orientation in the OODA loop, has been broken down into parts:
- Cultural traditions — The cultures we live in shape how we think. For example, I’m Swedish, so my views on gun rights, healthcare, taxes, politics, and many other topics, are different from say, most Americans. We are also shaped by the sub-cultures we belong to. For example, I worked in Theory Y based companies with strong knowledge cultures during my most formative years, so my views of how organizations should work are different from people who were shaped by Theory X organizations. I have also been shaped by the expert communities I have been part of, and by leisure communities, like the photography, Science-Fiction, and roleplaying communities I have belonged to.
- Analysis & Synthesis — How we analyze and synthesize information shapes the decision hypotheses to a large extent. I am a systems thinker, and use formal analysis and synthesis methods. That makes the decision hypotheses I form different from a person who is not. I may also form different hypotheses from a person who uses different methods for analyzing and synthesizing, even if the information we process is the same. Systems thinking is not applicable in all situations, so I also use other frameworks for thinking, such as complexity science, statistics, queuing theory, Theory Y, and self-determination theory.
- Previous Experiences — Our previous experiences have a big influence on how we interpret information. Experience can be enormously helpful, but experiences can also be misinterpreted, and this can lock us into patterns of making the same mistakes over and over again.
The effect of previous experience on our thinking can sometimes be so powerful it overrides other considerations. In other words, it forces you to do something stupid, even though you know it is stupid. The best way I know to combat this, is collective decision making.
- New Information — New information allows us to make new decisions, or revisit and change old ones. New information can also change how we orient ourselves. That is, it can change our mental model. The OODA model covers how new information can affect both single and double-loop learning, though it is less explicit about it than the Double-Loop Learning model.
- Close relationships — This is an addition by Peter Hermann and me. Independently of each other, we noticed how close relationships, with friends, relatives, and team mates, can have a very strong influence on how we think, for better or worse.
Close relationships also covers such things as being trapped in social media bubbles.
- Genetic heritage — Genetics plays a part in how we interpret information and form decision hypotheses. There have been studies on twins that show intelligence and rationality are heavily influenced by genetic factors. It is important to remember though, that there are plenty of things we can do to improve our decision making abilities, even if we didn’t hit a jackpot in the genetic lottery.
Do note the arrows named Implicit Guidance and Control!
The left arrow goes from Orient to Observe. What this implies, is that the way we process information influences what information we observe. This is important! We see what we expect to see, and tend to disregard what we don’t expect!
There is research supporting this. I recommend the book The Invisible Gorilla: How Our Intuition Misleads Us by Christopher Chabris and Daniel Simons to anyone who wants to get up to date on the research on how we perceive, and fail to perceive, reality.
An example:
I was working in a large project. The project went through recurring cycles of hiring and firing consultants. I fell afoul of one of the firing phases, so I was on my way out. When I had two weeks left, the manager of a department responsible for management and development processes asked me if I could have a look at the Key Performance Indicator (KPI) system management used to steer the project.
I said yes, and went to work. There were eighteen metrics in the KPI system. Of those, I found that six were not related to project performance at all. Another six could have been useful, but weren’t, because there was too much junk data in the database the KPI system used.
The remaining six KPIs were relevant. When I looked at historical trends, I could identify periodic build-ups of Work-In-Process, which made the project slow down. That meant deadlines were missed, which in turn caused cost overruns. When the budget was blown up, management responded by firing people.
Unfortunately, there was no effort to identify the Critical Path of the project, which meant firings could randomly affect it. The Critical Path is always there, regardless of whether we know it, or not.
Losing a single person on the Critical Path can have a heavy impact on even a very large project, so the firings sometimes slowed the project down even more.
When that happened, management began hiring people. Unfortunately, since no one kept track of the Critical Path, there was no guarantee that hiring people would reduce the lead time of the project. People hired off the Critical Path could not reduce lead time, they only contributed to build-up of Work-In-Process (WIP).
Worst case, that build-up of WIP can cause delays that shift the Critical Path, so the project takes even longer.
Yes, you can make a project later by adding people to it, as well as by removing people from it!
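If you have never worked with Critical Path calculations, here is a minimal sketch for a tiny, invented task network. Real projects obviously need real dependency data, and Critical Chain adds resource constraints and buffers on top of this, but the core of the calculation is just the longest path through the dependency graph.

```python
from functools import lru_cache

# A tiny, invented task network: task -> (duration in days, prerequisites).
tasks = {
    "spec":     (5,  []),
    "backend":  (20, ["spec"]),
    "frontend": (12, ["spec"]),
    "payments": (15, ["backend"]),
    "testing":  (8,  ["frontend", "payments"]),
    "release":  (2,  ["testing"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    """Longest path from project start to the end of this task."""
    duration, prerequisites = tasks[task]
    return duration + max((earliest_finish(p) for p in prerequisites), default=0)

# The project duration is set by the task that finishes last.
print("Project duration:", max(earliest_finish(t) for t in tasks), "days")

# Walk backwards along the longest path to list the critical tasks.
critical, current = [], max(tasks, key=earliest_finish)
while current:
    critical.append(current)
    predecessors = tasks[current][1]
    current = max(predecessors, key=earliest_finish) if predecessors else None
print("Critical path:", " -> ".join(reversed(critical)))
```

In this toy network, adding people to the frontend task changes nothing, because the total duration is set by the spec → backend → payments → testing → release chain. That is the mechanism behind the claim above: capacity added off the Critical Path does not shorten the project, it just adds WIP.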
I put all of that in my report. I did not hear anything until a couple of weeks after I had left the project. Then, the manager who had asked for the analysis contacted me and told me the analysis was almost certainly correct.
Why did I see a problem the project management missed? In terms of the OODA loop, and the Maneuver Warfare framework, there were several important differences between how the project management and I oriented ourselves:
- Cultural traditions — The company had a consensus culture, and fitting in was, though an unspoken value, considered very important. Admitting that one did not understand something, like the KPI system, was likely to be interpreted as a sign of weakness. It was simply not done. The manager who asked me to look into the KPI system deviated from the norm, which took a lot of courage.
I, on the other hand, come from a culture obsessed with learning and understanding things, a knowledge culture. In such a culture, saying that you don’t understand something, is a sign that you are on the verge of learning something new. Mistakes are highly valued as opportunities to learn and improve.
I feel like Sherlock Holmes in a duel of wits with professor Moriarty when I get a juicy problem to work on.
- Analysis & Synthesis — I had many years of self-education and training in queuing theory, Lean, Theory of Constraints, Systems Thinking, statistics, and other useful stuff. This means I processed the KPIs very differently from the project management.
For one thing, I mapped the KPIs to a Goal Map, and reverse engineered what goals the KPIs were tracking, and how they were connected to each other. That is how I figured out that six of the KPIs were not connected to any project goals.
My knowledge of statistics includes such basic knowledge as “you need more than one data point to track a trend”. The designers of the KPI system apparently did not understand that. They used single data point gauges in situations where it was important to identify trends, which rendered some KPIs useless.
Management did not complain about it because, well, see the Cultural traditions item above. They could not admit when they did not understand something.
Satisficing probably played a part too. They could get by from day to day without understanding the big picture of where the project was heading, so they never thought there was information hiding in the KPI system that was worth bothering with. Looking at the KPIs became a magic ritual, severed from the much more strenuous activity of understanding the data, or finding flaws in the KPI system itself.
You can also look at it from a Single Loop/Double Loop learning perspective. Hiring and firing was the easiest way to solve their current problem, so they never saw a point in exploring better ways of running the project.
- Previous Experiences — Having a metrics system is a great idea, but most corporate metrics systems I have seen, whether KPIs, OKRs, or something else, suffer from the same flaws:
- There is no underlying model! There is nothing connecting KPIs to goals, and behaviors, or connecting goals to each other. This makes it difficult to understand whether a metric is really relevant, the effects of changing it, and how to change it. Instead, each metric is treated as a separate thing, isolated from everything else.
- There is no appreciation of the effects of delays in the system.
- The data presentation is garbage! In most cases, you need to know the trend of a set of metrics in order to take appropriate actions based on them. Yet, most KPI and OKR systems present single data points, or sometimes two data points. This makes it impossible to separate random noise from real changes. You can’t even see the direction of a change. (There is a short code sketch of this problem a bit further down.)
There are more problems, but you get the idea: Good KPI and OKR systems are very rare! I doubt that the managers had even seen a working KPI system. They simply had no good reference to compare with.
I, on the other hand, had, thanks to the work I have done with Systems Thinking, Theory Of Constraints, and statistics. I’ve even written a book about it. A very short one, to be sure, but still…an advantage.
- New Information — We had the same information available, but because my orientation process was different, I was interested in digging through historical data in the project management system. The project managers were not, because there was nothing in their orientation process that made them sufficiently dissatisfied with the KPI system to make a thorough investigation.
- Close relationships — The manager who initiated the investigation and I had met through serendipity: We happened to sit down at the same table during a coffee break, and began talking with each other. It did not take long to discover we shared an interest in how processes work, including how KPI systems work.
- Genetic heritage — I’ll leave this one unexplored.
- Time — You can’t see it directly in the OODA loop diagram, but the Maneuver Warfare framework greatly values creating time for yourself to maneuver, while denying opponents time to do so. My work was winding down, because I was on my way to leave the project. This meant I had time to do a deep dive into the KPI system.
The managers, on the other hand, were very busy with their daily work, and this distracted them from considering things that were more important mid to long term.
It is worth noting that what made them so busy, was all the multi-tasking and WIP build-up in the project, the very problems that were hidden by the poor design of the KPI system.
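Before summing up, here is the promised sketch of the single data point problem. The numbers are invented: a perfectly stable, noisy process. A single data point gauge turns that noise into drama, while even a simple summary over many weeks shows there is nothing to react to.

```python
import random
import statistics

# Invented weekly lead times (days) from a stable but noisy process.
random.seed(3)
lead_times = [round(random.gauss(mu=30, sigma=6), 1) for _ in range(26)]

# What a single data point gauge shows: whatever happened most recently.
print("This week's KPI:", lead_times[-1])   # looks like a change...
print("Last week's KPI:", lead_times[-2])   # ...but it is just noise

# What you actually need: the level and the spread over many points.
mean = statistics.mean(lead_times)
sd = statistics.stdev(lead_times)
print(f"Typical lead time: {mean:.1f} +/- {sd:.1f} days over 26 weeks")

# A crude stand-in for proper control limits: only react to points far
# outside normal variation. With this stable data, nothing should trip it.
for week, value in enumerate(lead_times, start=1):
    if abs(value - mean) > 3 * sd:
        print(f"Week {week}: {value} days looks like a real signal, investigate")
```

A proper process behavior chart does this more carefully, but even this crude version beats staring at one number.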
Bring all this together, and it is pretty obvious that I reached different conclusions from everyone else because I had a different perspective, or rather, multiple different perspectives, from the project managers, and, at the time, much less busywork.
Maneuver Warfare itself is a synthesis of multiple perspectives on strategy and warfare. So is just about any other worthwhile invention, or solution to a complex problem.
Once again: It pays off to have people with many different perspectives available. Most companies are very bad at that. Diversity programs can help a bit, but most diversity programs are not nearly diverse enough.
You should be aware that what I have used in this section is a very small fraction of the Maneuver Warfare framework. I haven’t even shown you the most important bits, just some of the things directly applicable to the example I used. I do recommend you explore further.
Politics
Bad politics arise from a combination of factors, in individuals, or in-groups.
A long time ago, when I was still a Systems Architect/developer/technical writer, my friends and I sat working in the company barracks. It’s not as strange as it sounds. We were working at a large company, but they had hired people a bit faster than they could find, or build, new accommodations, so my team had been moved to temporary barracks.
This was a move that suited us very well, because it meant we could work without much oversight, and some of our methods were both unconventional, and quite loud… They were effective though, which our line manager appreciated.
On this particular day, our line manager ventured to the barracks, opened the door to the office where we sat, and said:
“Hi, guys! Are you getting anywhere with the project?”
We all shook our heads. There were a couple of murmured “No!”
“Don’t worry about it,” our manager said, “it’s not your fault! It’s me they want to get!”
Then, he left.
That was the moment I first understood the importance, and impact, of office politics.
Though the word “politics” has a lot of bad connotations, politics is not a bad thing in and of itself. Let’s look at what politics is, before we explore how politics can wreck lives, projects, companies, countries, and the world.
We will start by looking at definitions. There are lots of different definitions of the word, but I have picked four that are useful for the purpose of exploring bad management decisions:
Politics (from Ancient Greek πολιτικά (politiká) 'affairs of the cities') is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.
— Politics, Wikipedia article
The other three definitions are from the same source:
- the art or science of government
- the art or science concerned with guiding or influencing governmental policy
- the art or science concerned with winning and holding control over a government
— Merriam-Webster
There is nothing intrinsically bad in the above definitions. The third of the Merriam-Webster definitions gives us a hint about what can go wrong though:
c. the art or science concerned with winning and holding control over a government
Suppose a person puts his or her personal goals above the goals of the organization they work in, or the organization itself has nefarious goals. Then politics becomes a vehicle for achieving those goals, without consideration for the well-being of others.
An example:
I once worked in a project where the project manager and the chief architect collaborated to inflate the cost and duration of a project:
- The project manager wanted a line management job that was pretty high in the corporate hierarchy. In order to do this, she needed to show that she could manage a large group of people. Her solution was to artificially inflate the number of people needed in the project.
- The chief architect was also the owner of a consulting company that supplied personnel to the project. By inflating and prolonging the project, he could make more money.
Together, the project manager and the chief architect reworked project schedules to inflate durations, and blocked solutions to various problems in the project.
It was astoundingly effective. The project was about ten times as large as it ought to have been, in terms of manpower. Progress was slow, and each partial delivery actually reduced the profitability of the company.
It worked out well for them both. The project manager got the line management job, and the chief architect got the extra money.
At least four different managers at the company knew of the scheme, and wanted to stop it. All four had left the company within a year, for various reasons.
A very common bad faith political tactic is to overestimate the benefits of a project, and underestimate the difficulties, in order to get a go decision for a project that should not be executed at all, or turn a low priority project into a high priority project.
If an individual, or in-group, has goals that conflict with and override the goals of the larger system they are a part of, and if the individual or in-group has a high degree of political skill, we get bad politics, that is, decisions that are bad for the system as a whole.
What can we do about it?
In a political environment, you have to counter bad politics with good politics. I won’t go into too much detail, but I’ll give you a small scale example of good politics, and why it helps to have a strategic framework to guide your actions.
In spite of the inferiority of your force, deliberately make your defensive line defenseless in order to confuse the enemy. In situations when the enemies are many and you are few, this tactic seems all the more intriguing.
— The Stratagem of the Open City Gates, The Thirty-Six Stratagems (ca 300 BC)
I had been contracted to lead a change project. Eight teams were to transition to an Agile way of working. The organization had already had a team of Scrum coaches working on transitioning the teams to using Scrum, with mixed success.
Before I got the job, I told management that I did not intend to force the teams to work according to a common framework, or methodology. Instead, I wanted to work with the teams to analyze what their specific problems were, and help them work out their own solutions.
The management agreed to this. They understood very well that the teams worked with different things in different contexts, and had different problems. The teams also consisted of people with different personalities, and different skill sets. Therefore, it was logical that different solutions would be appropriate.
The managers warned me that some of the teams, one in particular, had had a bad experience with the Scrum coaches that had worked on transitioning the teams to using Scrum. The managers made clear that they really liked the team, and the people in it, but they were worried that the reactions to a new Agile Coach could be…harsh.
I began by doing a walkabout to visit the teams, introduce myself, and tell them what I was there to do, and how I intended to go about it.
When I got to the team the managers had warned about, and told them I was the new Agile coach, one of the team members rose up, walked towards me, and said:
“I hate Scrum!”
Very clear and concise. No beating about the bush. I liked that! I answered:
“Well, I am not really fond of Scrum either.”
There was a pause, and then:
“Can you say that? You are the Agile coach. Aren’t you supposed to get us to work with Scrum?”
That gave me the opening I needed, and I told them I didn’t want to force them to do anything, but if they were interested, I would do my very best to help them identify the problems they had, dig down to root causes, work out solutions, and help them implement the solutions.
I told them that as I am way too lazy to work hard, they would need to do all of the difficult work, and make all of the difficult decisions. (Well, I lied about not working hard, but I did intend to let them make the difficult decisions.)
I told them we would work with methods they had probably never heard of before, and that if they liked them, they could go ahead and steal them.
I also told them that if the whole thing went sideways, they would have ample opportunity to tease me about it.
In other words, I showed them that they would be in control, and that I was utterly defenseless, as per the Stratagem of the Open City Gates.
They talked it over, and quickly agreed the possibility of seeing me fall flat on my face was too fun an opportunity to pass up. Besides, my approach was the opposite of what they had expected, and was so weird it might actually work.
As it turned out, the team was a blast to work with! All of the teams were really good, and did great work. This particular team, though, stood out: they had a contrarian streak, and were not afraid to let it show, but they were also exceptionally skilled, knew the entire company very well, and were really nice people.
I worked with them to do deep analysis and synthesis, well beyond the borders of their team, and department. We also crash tested ideas together, to find and fix flaws before the ideas were implemented.
The team was also excellent at generating solutions, and trying them out.
That commission stands out as one of the most fun and interesting in my whole career. There were many more things that were really great about the company and the people I worked with, but that team was a very important part of it.
So yes, politics can be used to achieve good things, as well as bad.
Some general recommendations:
If you want to change something, it is often better to think in terms of a political campaign than a company roll-out. Be wary of one-size-fits-all change programs.
Try to understand what the goals of the organization are, or depending on the situation, what they ought to be. Conversations, pen, and paper, are quite powerful tools for this. Somewhat more formally, I also use Goal Maps, from The Logical Thinking Tools. I often combine Goal Maps with Crawford Slip brainstorming, if I work with a group of people.
Also, pay attention to key people, and various in-groups that can be allies, or enemies. Don’t forget that if you can turn an opposing force into an ally, you have both reduced the friction, and gained power.
Battle Maps can be useful to get the lay of the land, identify key people, and groups, and understand power structures and goals of various parts of the organization. Relationship Maps can do pretty much the same thing.
Strategic frameworks, like Maneuver Warfare, 36 Stratagems, and the Art of War, can be used as guides and idea generators.
Finally, before you get so enthusiastic about politics that you get into trouble: if you intend to get deep into a corporate political game, do keep your CV up to date, and have a route for retreat prepared. As the Thirty-Six Stratagems say:
To avoid combat with a powerful enemy, the whole army should retreat and wait for the right time to advance again. This is not inconsistent with normal military principles.
— Sometimes Running Away Is the Best Strategy, The Thirty-Six Stratagems
Oh, and do remember that as a manager, or other employee, in a company, you have way more skin in the game than the consultant who advises you. If you take advice from a consultant like me, make sure the consultant is a bit worried for your sake.
That is all I have to say about politics this time around. Next, we will have a look at your brain, and mine.
Neuroscience
We are really bad at understanding how our own brains work, how we think, and what motivates us. However, over the past few decades, we have gained a powerful new tool that can peer inside our brains while they are working, and detect what’s going on in there: Neuroscience.
Neuroscience is a multidisciplinary science that aims to understand how the human brain and nervous system work. It combines physiology, anatomy, molecular biology, developmental biology, cytology, psychology, physics, computer science, chemistry, medicine, statistics, and mathematical modeling.
Neuroscience uses a lot of different tools. Some of the most interesting ones enable direct studies of individual neurons in human brains while the subjects of the study perform different tasks.
Neuroscience has a lot of interesting things to say about decision-making, but for the purposes of this article, we’ll focus on the following:
- The human brain is basically an enormously sophisticated pattern matching engine. We can think logically, but it is difficult for us to do so. Pattern matching is what we excel at.
- The human brain is designed to save energy as much as possible, and, most of the time, produce very quick results.
The result of these two properties is that the brain uses a lazy evaluation pattern matching algorithm. This means that when it finds a pattern that matches, it usually stops searching for other patterns that also match.
This has been an evolutionary advantage throughout most of our history. For example, it has enabled us to detect and evade predators, like sabre-tooth tigers, based on very small clues, like something vaguely ear-like sticking up behind a bush, or in tall grass.
It is a good survival tactic to act quickly, even if we do not know for certain it really is a sabre-tooth tiger. More thoughtful people, who went closer to get more information, tended to get eaten.
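If you prefer to see the idea in code, here is a minimal sketch in Python. The rules and the observation are entirely made up for illustration; the only point is the difference between stopping at the first match and weighing all the matches.

```python
# A toy "pattern matching" model. The rules and observation are invented
# purely to illustrate first-match (lazy) versus exhaustive matching.

RULES = [
    ("predator", lambda obs: "ear-like shape" in obs),
    ("harmless bush", lambda obs: "bush" in obs),
    ("wind in the grass", lambda obs: "tall grass" in obs),
]

def lazy_match(observation):
    """Return the first pattern that matches, then stop looking."""
    for label, matches in RULES:
        if matches(observation):
            return label
    return "no match"

def exhaustive_match(observation):
    """Return every pattern that matches, so they can be weighed against each other."""
    return [label for label, matches in RULES if matches(observation)]

observation = "something ear-like shape sticking up behind a bush in tall grass"
print(lazy_match(observation))        # 'predator' - fast, cheap, keeps you alive
print(exhaustive_match(observation))  # ['predator', 'harmless bush', 'wind in the grass']
```

The lazy version is what the brain defaults to. The exhaustive version is what we have to make a deliberate effort to do.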
In the modern world, we have a greater need for evaluating more than one pattern before making a decision. The lazy evaluation pattern matching in our heads causes a lot of problems, from anti-vaccine movements, racism, and authoritarianism, to beliefs in Cost Accounting and OKRs.
In case you believe I am fibbing about the latter two:
Cost Accounting maintains that cost centers cost money, but do not generate profits. Easy to test: close down the cost centers in your organization, and see what happens to the profits.
When you put it that way, it is easy to see there is something profoundly wrong with Cost Accounting. Critics have pointed problems like this out since at least 1921. The lazy pattern matching algorithm in our brains tends to ignore the dependencies “profit centers” have on “cost centers”. Treating them separately makes it easier for us, so that is what we do, even if it often leads to the wrong results, and bad decisions. Sometimes, decisions bad enough to wreck the company.
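Here is a toy version of the test, in Python, with numbers I have invented purely for illustration:

```python
# Invented numbers, for illustration only: a "profit center" that cannot
# deliver without the support functions booked as a "cost center".

revenue = 10_000_000        # sales booked by the profit center
direct_costs = 6_000_000    # costs attributed to the profit center
support_costs = 2_000_000   # the cost center: IT, logistics, QA, ...

profit_today = revenue - direct_costs - support_costs        # 2,000,000

# What the cost-accounting view predicts if the cost center is closed:
profit_on_paper = profit_today + support_costs               # 4,000,000

# What happens if, say, 40% of deliveries fail without that support (assumption):
lost_revenue_share = 0.4
profit_in_reality = revenue * (1 - lost_revenue_share) - direct_costs  # 0

print(profit_today, profit_on_paper, profit_in_reality)
```

On paper, the cut looks brilliant. In reality, the dependency wipes out the very profit the cut was supposed to increase.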
OKR systems usually compare data from one quarter with data from the previous quarter. If you do that with the performance data in this diagram, it looks like you performed well in four quarters, and badly in two. In reality, there is no change in performance: all the change is due to statistical variation. The average performance of the system does not change at all. Most OKR systems would mislead you.
OKRs? I’m glad you asked! OKRs (Objectives and Key Results) are often mislabeled as a goal-setting framework. In reality, they are a target setting framework, with numerical targets, which is very different.
It is considered good practice to set quarterly OKRs so that the success rate is 70%. OKRs should be set so high they do not represent business as usual.
Let’s think a bit about what this means:
- The OKRs are subject to statistical variation. In order to set an OKR so that there is a 70% success rate, you need to understand the probability distribution of results over quite some time. There are techniques for doing this, like Monte Carlo Simulation and Reference Class Forecasting (see the sketch after this list), but such tools are not a part of the OKR framework.
- Even if you have tools you can use to set targets with a 70% success rate, those tools work only if you have a stable system. This directly contradicts the idea that OKRs should not represent business as usual.
- There is nothing in the OKR framework that provides even a clue about how to improve system performance. Without such guidance, people can basically do two things:
- Work harder. This is unsustainable, because the next quarter, targets will be higher, and people will have to work even harder, until they leave, or burn out.
- Cheat. This will eventually create a culture where cheating and manipulating metrics, and other numbers, is the norm.
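To make the statistics concrete, here is a small Monte Carlo sketch in Python, with invented numbers: it assumes a stable process, estimates where a target with a roughly 70% success rate would sit (around the 30th percentile of the outcome distribution), and then compares six quarters drawn from that same unchanging process.

```python
# Monte Carlo sketch with invented numbers: quarterly results from a stable process.
import random
import statistics

random.seed(1)

# Assume quarterly output is roughly normal: mean 100 units, standard deviation 15.
simulated = [random.gauss(100, 15) for _ in range(10_000)]

# A target you hit about 70% of the time sits near the 30th percentile.
target_70 = statistics.quantiles(simulated, n=100)[29]
print(f"Target with ~70% success rate: {target_70:.1f}")

# Six quarters drawn from the very same stable process, compared quarter over quarter.
quarters = [random.gauss(100, 15) for _ in range(6)]
for i in range(1, len(quarters)):
    direction = "up" if quarters[i] > quarters[i - 1] else "down"
    print(f"Q{i} -> Q{i + 1}: {quarters[i - 1]:.0f} -> {quarters[i]:.0f} ({direction})")
# Every "up" and "down" here is pure noise; average performance never changed.
```

The framework itself, as the list above notes, does not ask for this kind of analysis before a target is set.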
You might want to check out Goodhart’s Law, usually expressed as:
When a measure becomes a target, it ceases to be a good measure.
Charles Goodhart was an economist, and he actually expressed himself a little bit differently:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
— Problems of Monetary Management: The UK Experience, 1975, by Charles Goodhart
To top it off, I have not been able to find any research that says OKRs lead to improved performance of the things measured by the OKRs.
A lot has been written about OKRs, but everything I have found begins with the assumption that OKRs work, without any evidence, and proceeds from there to describe how to implement them.
But, I digress. The point is that the lazy evaluation pattern matching view of how we think provides yet another way to understand why we, too often, make poor decisions.
It also provides some insight into how we can make better decisions:
- We need to encourage divergent and creative thinking in organizations. To do that, we need people with different backgrounds and experiences, but we also need people with different skill sets, and different ways of processing information.
- We need methods and tools that help us think better! This means we need to learn, and train, to use those methods. Different tools lead us down different paths of thought and action, so we need a diverse set of tools that enable us to use different paradigms of thought.
Note that I am not suggesting you should use AI. AI can make your thinking worse, not better. I am not saying you should not use AI either. I am saying you should be very cautious about how and when you use it. Try asking your favorite AI the following:
“Can relying on AI make humans less intelligent?”
Here are the answers I got:
- Google’s Gemini: Yes, excessive reliance on AI can diminish human intelligence by leading to cognitive offloading, where individuals stop practicing and developing their skills in critical thinking, memory, and problem-solving. Research suggests that this over-reliance can cause skill atrophy, reduce cognitive effort, and decrease neural activity in brain regions associated with creativity and attention.
- Microsoft’s Copilot: It can—but only if we let it dull our minds instead of sharpen them. The Risk: Mental Muscle Atrophy. When we outsource thinking to AI without engaging critically, we risk:
- Reduced problem-solving skills: If AI always gives the answer, we stop wrestling with the question.
- Weaker memory: Like relying on GPS for directions, we lose spatial awareness and recall.
- Diminished creativity: If we only accept AI’s suggestions, we stop generating our own ideas.
- We need more collective decision-making! For example, one company I worked with had a system with two managers for one department. The two managers were very different from each other, and consequently could provide very different perspectives on how to organize work and solve problems.
Very good agile software development teams use practices like pair-programming, or troikas, to improve decision-making.
Companies that are good at strategy, use collaborative processes that engage the entire organization. Toyota’s strategy deployment method, Hoshin Kanri, is a well known example.
Armies using Maneuver Warfare, for example the US Marine Corps and the Swedish army, use Auftragstaktik, a way of making tactical decisions that involves both superior officers and subordinates in decision-making.
…and then I took a deep breath
When I got as far as that, I had to stop and take a breath, and my friend took the opportunity to tell me she had an urgent meeting she had to go to.
After she had fled, I thought through what we had talked about, and decided the topic could be worth writing a very brief blog post about…and here we are 11,759 words later.