Content provided by Phil McKinney. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Phil McKinney or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.

In August 2025, Polish researchers tested something nobody had thought to check: what happens to doctors' skills after they rely on AI assistance? The AI worked perfectly—catching problems during colonoscopies, flagging abnormalities faster than human eyes could. But when researchers pulled the AI away, the doctors' detection rates had dropped. They'd become less skilled at spotting problems on their own.

We're all making decisions like this right now. A solution fixes the immediate problem—but creates a second-order consequence that's harder to see and often more damaging than what we started with.

Research from Gartner shows that poor operational decisions cost companies upward of 3% of their annual profits. A company with $5 billion in revenue loses $150 million every year because managers solved first-order problems and created second-order disasters.

You see this pattern everywhere. A retail chain closes underperforming stores to cut costs—and ends up losing more money when loyal customers abandon the brand entirely. A daycare introduces a late pickup fee to discourage tardiness—and late pickups skyrocket because parents now feel they've paid for the privilege.

The skill that separates wise decision-makers from everyone else isn't speed. It's the ability to ask one simple question repeatedly: “And then what?”

What Second-Order Thinking Actually Means

First-order thinking asks: “What happens if I do this?”

Second-order thinking asks: “And then what? And then what after that?”

Most people stop at the first question. They see the immediate consequence and act. But every action creates a cascade of effects, and the second- and third-order consequences are often the opposite of what we intended.

Think about social media platforms. First-order? They connect people across distances. Second-order? They fragment attention spans and fuel polarization.

The difference isn't about being cautious—it's about being thorough. In a world where business decisions come faster and with higher stakes than ever before, the ability to trace consequences forward through multiple levels isn't optional anymore.

Let me show you how.

How To Think in Consequences

Before we get into the specific strategies, here's what you need to understand: Second-order thinking isn't about predicting the future with certainty. It's about systematically considering possibilities that most people ignore.

The reason most people fail at this isn't lack of intelligence—it's that our brains evolved to focus on immediate threats and rewards. First-order thinking kept our ancestors alive. But in complex modern systems—businesses, markets, organizations—first-order thinking gets you killed.

The good news? This is a learnable skill. You don't need special training or advanced degrees. You need two things: a framework for mapping consequences, and a method for forcing yourself to actually use it.

Two strategies will stop your solutions from creating bigger problems:

Map How People Will Actually Respond – trace your decision through stakeholders, understand what you're actually incentivizing, and predict how the system adapts.

Run the “And Then What?” Drill – force yourself to see three moves ahead before you act, using a simple three-round questioning method.

Let's break down each one.

Strategy 1: Map How People Will Actually Respond

Here's the fundamental insight that separates good decision-makers from everyone else: People respond to what you reward, not what you intend.

When you make a decision, you're not just choosing an action—you're sending signals into a complex system of human beings who will interpret those signals, adapt their behavior, and create consequences you never imagined. Your job is to trace those adaptations before they happen.

This strategy has three components that work together:

First: Identify ALL Your Stakeholders

When considering a decision, list everyone it will affect directly and indirectly. Don't just think about your immediate team—think about:

  • Your customers (current and potential)
  • Your competitors (how will they respond?)
  • Your suppliers and partners
  • Your employees at different levels
  • Your investors or board
  • Regulatory bodies or industry watchdogs
  • Adjacent markets or ecosystems

Most executives stop after listing two or three obvious groups. The consequences you miss come from the stakeholders you forgot to consider.

Here's what research shows: Wharton professor Philip Tetlock spent two decades studying how well experts predict future events. His landmark finding? Even highly credentialed experts' predictions were only slightly better than random chance—barely better than a dart-throwing chimp.

But the real insight came when Tetlock discovered that certain people can forecast with exceptional accuracy. These “superforecasters” share one key trait: they relentlessly ask “And then what?” before making predictions. They don't just see the immediate effect. They trace the decision through the entire system.

The people making million-dollar decisions are operating blind beyond the first consequence. Our job is to see what they're missing.

Second: Understand What You're Actually Rewarding

This is where most decisions go wrong. You think you're incentivizing one behavior, but you're actually rewarding something completely different.

Here's the test: For each stakeholder, ask yourself: “What does this decision make easier, more profitable, or less risky for them?”

Quick example: Remember the daycare that introduced a late pickup fee to discourage tardiness? They thought they were incentivizing on-time pickup. But here's what they actually rewarded: guilt-free lateness. Parents who felt terrible about being late now had a clear price for that guilt. The fee didn't discourage the behavior—it legitimized it. Late pickups skyrocketed.

The daycare asked the wrong question. They asked: “What punishment will discourage lateness?” Instead, they should have asked: “What does a $5 fee actually incentivize?”

Another example: You add a performance metric to improve efficiency. First-order thinking says: “People will work more efficiently.” But what are you actually rewarding? Optimizing for the metric—often at the expense of things you didn't measure but actually matter more.

Sales quotas reward closing deals, not necessarily solving customer problems. Employee of the month awards reward visibility, not necessarily the best work. Quarterly earnings targets reward short-term thinking, not building long-term value.

When you rush a hiring decision to fill a role quickly, you're rewarding speed over quality. The second-order effect? Your team learns that urgency matters more than fit, and future hiring suffers.

The pattern: People don't follow the spirit of your policy—they follow the incentives. And they're incredibly creative at finding ways to game systems when the incentives misalign with the goals.

Third: Trace Each Response Forward

Now that you know who's affected and what you're incentivizing, trace how they'll respond—and then how the system responds to THEIR response.

This is where the stakeholder analysis and incentives analysis combine into real predictive power.

Example: When ride-sharing apps added surge pricing to solve driver shortages, here's how it played out:

First-order: More drivers show up when prices surge. Problem solved, right?

Second-order stakeholder responses:

  • Customers started waiting out surge periods, meaning fewer overall rides
  • Drivers started gaming the system—turning off their apps to create artificial shortages that triggered surges
  • Competitors without surge pricing captured price-sensitive customers
  • Media coverage made “surge pricing” synonymous with price gouging, damaging brand trust

Third-order systemic effects:

  • The solution trained customers to use the service less frequently
  • It taught drivers to manipulate the platform rather than respond to genuine demand
  • It created a PR vulnerability that regulators could exploit
  • The very mechanism designed to solve shortages created new shortages through gaming behavior

The original problem (driver shortages during peak times) was real. The first-order solution (higher prices attract more drivers) was economically sound. But nobody mapped how customers and drivers would actually respond to the incentives created by surge pricing.

The key insight: Complex systems don't just accept your decisions—they adapt to them. And those adaptations often work directly against your original intent.

Try it now: Pause this video for 30 seconds. Think of one decision your company made in the last year. Who were the stakeholders? How did they actually respond? Was it what you expected?

If their response surprised you—you just found a second-order effect you missed.

Strategy 2: Run the “And Then What?” Drill

Now you have a framework for thinking about consequences. But frameworks don't change behavior—practice does.

This is your daily practice method. Before any significant decision, literally ask yourself “And then what?” at least three times. Out loud. Make it awkward. Make it unavoidable.

Here's why this works: Your brain will naturally stop at the first answer. The question forces you to keep going. It's a cognitive override—a way to fight your brain's preference for first-order thinking.

The Three Rounds:

Round 1: Immediate Consequence State the obvious first-order effect. This should come easily.

“We'll discount our product by 20%.”

And then what?

“We'll attract more customers and gain market share.”

Round 2: Response and Adaptation Now apply Strategy 1. How will stakeholders respond? What are we actually incentivizing?

And then what?

“Competitors will match our discount to protect their market share. And customers will start expecting permanently lower prices—we've trained them that our regular price was inflated. Early adopters who paid full price feel cheated.”

Round 3: Systemic Effects Trace the second-order responses forward. What happens when multiple stakeholders adapt simultaneously?

And then what?

“We're now in a price war. Our margins erode across the entire product line. We can't fund innovation or customer service improvements. Competitors with deeper pockets can outlast us. We've commoditized our own product and destroyed the brand value that justified our original pricing. We're stuck in a race to the bottom.”

The pattern you're looking for: Are the third-order effects consistent with your goals, or do they undermine them?

Most people never get past Round 1. By forcing yourself to Round 3, you'll see patterns others miss.

Try it now: Think of a decision you're facing right now—any decision. Say out loud what happens first. Now say out loud: “And then what?” Answer it. Now say it again: “And then what?”

[5-second pause built into video]

Did Round 3 surprise you? If yes—you just found your blind spot.

Let Me Show You How This Actually Works

Let me walk you through a decision I faced as CTO at HP. We were under pressure to cut R&D spending by 15% to hit quarterly earnings targets.

Round 1: Immediate consequence. “We hit our quarterly numbers. Wall Street is happy. Stock price stays stable. The board is pleased.”

Round 2: Response and adaptation. And then what? “Our best researchers—the ones working on breakthrough projects with 3-5 year horizons—see the writing on the wall. They start looking at competitors who aren't cutting R&D. Meanwhile, the teams that survive shift focus to incremental improvements with shorter payback periods because that's what won't get cut next quarter.”

Round 3: Systemic effects. And then what? “Eighteen months later, our innovation pipeline is empty. We're selling the same products with minor tweaks while competitors who maintained R&D investment launch breakthrough products. We lose market leadership. Now we need to spend 3X what we saved just to catch up—but our best people are already gone.”

We fought that cut. We protected the long-term R&D. Some of those projects became billion-dollar product lines. But I watched other companies make that first-order decision and destroy their innovation capability.

That conversation took maybe five minutes. But it saved HP from years of playing catch-up.

Put This Into Practice Right Now

Take a decision you're facing this week—any decision with financial or operational implications.

Write down the decision at the top of a page. Be specific.

List three immediate consequences. These should come easily.

Take each consequence and ask “And then what?” twice. Write down both second-order and third-order effects.

Find which effect you hadn't considered. That's your blind spot.

Do this for one decision this week, and you'll start seeing consequences others don't. Make it a habit, and it becomes automatic—like a chess player who sees five moves ahead.

The Unfair Advantage

Right now, in your company, there are people who seem to always be one step ahead. They don't work longer hours. They're not more talented. But somehow, they avoid the disasters others walk into. They see opportunities others miss. They get promoted while others are fixing problems.

Here's their secret: While everyone else celebrates the first-order win, they're already managing the second-order consequences. While you're implementing the solution, they've already anticipated what breaks next.

That gap—between first-order thinking and second-order thinking—is the difference between running in place and actually advancing.

Your challenge: For the next 30 days, before every significant decision, ask “And then what?” three times out loud. Not in your head. Out loud. Make it awkward. Make it unavoidable.

Because the ones who rise aren't the fastest problem-solvers; they're the ones who solve problems that stay solved.

So… start asking the question. Three times. Every decision.

The question isn't whether we have time to think this way. It's whether we can afford to keep making decisions that create bigger problems than they solve.

Your Thinking 101 Journey

The Thinking 101 series teaches how to think clearly in a world designed to confuse everyone—here's our journey so far:

In Episode 1, we exposed the thinking crisis—AI dependency is creating cognitive debt, and independent thinking has become the most valuable skill in the modern world.

In Episode 2, we learned to distinguish deductive certainty from inductive probability and stop treating patterns as proven facts.

In Episode 3, we discovered how to distinguish true causation from mere correlation—saving ourselves from solving the wrong problem perfectly.

In Episode 4, we learned how to harness the power of analogies while avoiding their traps—generating useful comparisons systematically and spotting false analogies that manipulate thinking.

In Episode 5, we mastered probabilistic thinking—how to make decisions with incomplete information and act wisely when nothing is guaranteed.

Today, in Episode 6, we learned how to stop our decisions from creating bigger problems—mapping how people actually respond to our decisions, understanding what we are truly incentivizing, and asking “And then what?” until we see patterns others miss.

Up next—Episode 7: “Proportional & Numerical Thinking—Understanding Scale and Magnitude.” We will learn how to think in terms of scale, ratios, and relative magnitude—understanding when numbers matter and when they don't, spotting statistical tricks used to mislead, and developing intuition about large numbers that most people lack.

Hit that subscribe button so you don't miss future episodes. Also—hit the like and notification bell. It helps with the algorithm so others see our content. Why not share this video with a colleague who you think would benefit from it?

Because right now, while you've been watching this, someone just made a decision that solves today's problem perfectly—and just created three bigger problems for next quarter. The only question is: will you be the one who sees them coming?

To learn more about second-order thinking, listen to this week's show: Second-Order Thinking: How to Stop Your Decisions From Creating Bigger Problems.

Get the tools to fuel your innovation journey → Innovation.Tools https://innovation.tools

RELATED: Subscribe To The Newsletter and Killer Innovations Podcast


SOURCES CITED IN THIS EPISODE

  1. Cost of Poor Operational Decisions
    Rathindran, R. (2018, December 20). Gartner Says Bad Financial Decisions by Managers Cost Firms More Than 3 Percent of Profits. Gartner Press Release.
    https://www.gartner.com/en/newsroom/press-releases/2018-12-20-gartner-says-bad-financial-decisions-by-managers-cost-firms-more-than-3-percent-of-profits
  2. Expert Forecasting Accuracy and Second-Order Thinking
    Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown Publishers.
  3. AI Impact on Medical Diagnostic Skills
    Romańczyk, M., et al. (2025). Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study. Lancet Gastroenterology & Hepatology. As reported by NPR Health News, August 19, 2025.
    https://www.npr.org/sections/shots-health-news/2025/08/19/nx-s1-5506292/doctors-ai-artificial-intelligence-dependent-colonoscopy
  4. Unintended Consequences of Incentive Systems
    Merton, R. K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894-904.
  5. Second-Order Effects in Economics
    Henderson, D. R. (2018). Unintended consequences. In The Concise Encyclopedia of Economics. Library of Economics and Liberty.
    https://www.econlib.org/library/Enc/UnintendedConsequences.html

ADDITIONAL READING

On Second-Order Thinking and Decision-Making

Marks, H. (2011). The Most Important Thing: Uncommon Sense for the Thoughtful Investor. Columbia University Press.

Dalio, R. (2017). Principles: Life and Work. Simon & Schuster.

Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown Publishers.

On Systems Thinking and Consequences

Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.

Senge, P. M. (1990). The Fifth Discipline: The Art & Practice of The Learning Organization. Currency.

On Incentives and Unintended Effects

Levitt, S. D., & Dubner, S. J. (2005). Freakonomics: A Rogue Economist Explores the Hidden Side of Everything. William Morrow.

Munger, C. T. (1995). The Psychology of Human Misjudgment. Speech presented at Harvard Law School.

Note: All sources cited in this episode have been accessed and verified as of November 2025.
