Issue #5 · April 2026 · Expanded and revised from the original 2025 short essay.

On Second Thought

Feed the Right Algorithm

The patterns you repeat are training your team on exactly what to show you — and what to hide.

Key Takeaways

  • Teams don't hide information maliciously — they learn over time what you reward and what you punish.
  • The better your team gets at reading you, the more filtered your reality becomes.
  • A leader's behavioral patterns are the algorithm; the team's information-sharing is the output.
  • Resetting the algorithm requires changing the inputs, not just asking for honesty.

Think about how a social media feed works. The platform watches what you pause on, what you click, what you skip. It doesn't ask what you want. It observes what you reward with your attention — and it feeds you more of that.

Over time, your feed becomes a precise reflection of your existing preferences, assumptions, and blind spots. Not reality. A curated version of it.
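The feed dynamic described above can be sketched as a toy simulation. Everything here is invented for illustration — the topics, numbers, and update rule are not any real platform's ranking system — but the feedback loop is the point: the system never asks what you want, it only reinforces what you engage with.

```python
import random

def run_feed(rounds=1000, seed=0):
    """Toy engagement-driven feed: show what's weighted highest,
    boost whatever gets clicked. Illustrative numbers only."""
    rng = random.Random(seed)
    topics = ["news", "sports", "memes"]
    weights = {t: 1.0 for t in topics}        # the feed's learned preferences
    user_taste = {"news": 0.2, "sports": 0.3, "memes": 0.8}  # click probability

    for _ in range(rounds):
        total = sum(weights.values())
        # Show a topic in proportion to its current weight.
        shown = rng.choices(topics, [weights[t] / total for t in topics])[0]
        # The feed doesn't ask what you want; it watches what you click.
        if rng.random() < user_taste[shown]:
            weights[shown] *= 1.05            # engagement means more of the same
    return weights
```

Run it and the topic you click most ends up dominating the weights — each click makes that topic more likely to be shown, which produces more clicks. The feed converges on a mirror of your existing taste, not on reality.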

Your team is doing the same thing.

Not deliberately. Not cynically. But systematically. They are watching what happens when they bring you bad news versus good news. They are learning whether candor gets rewarded or penalized. They are noticing which questions you ask, which answers you linger on, and which ones visibly frustrate you. And they are quietly, continuously adjusting — not what they think, but what they tell you.

This is how filter bubbles form in organizations. Not through dishonesty. Through optimization.


How the Algorithm Gets Trained

Most leaders believe they want honesty. They say so. They often mean it. But what they communicate through behavior is frequently something more complicated.

When a leader reacts to unexpected problems with visible frustration, the team learns: surface problems only when you also have solutions. When a leader consistently gravitates toward confident voices and away from ambiguous ones, the team learns: project certainty, even when you don't feel it. When a leader repeatedly rewards people who validate existing plans and deprioritizes those who challenge them, the team learns: agreement is the price of access.

None of this requires a single dishonest person on your team. It only requires people who are paying attention — which is to say, everyone.
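The same learning loop can be sketched from the team's side. This is a deliberately crude model — the thresholds and decay rates are invented for illustration — but it shows how a handful of punished reports is enough to train durable silence, while rewarded candor compounds.

```python
def simulate_candor(leader_punishes_bad_news, interactions=50):
    """Toy model: a team member decides whether to raise a problem
    based on how past reports were received. Illustrative numbers only."""
    willingness = 0.9                 # initial probability of surfacing a problem
    reports = 0
    for _ in range(interactions):
        raised = willingness > 0.5    # deterministic threshold, for clarity
        if raised:
            reports += 1
            if leader_punishes_bad_news:
                willingness *= 0.9    # each punished report trains more silence
            else:
                willingness = min(1.0, willingness * 1.02)  # rewarded candor compounds
    return willingness, reports
```

With these numbers, a leader who punishes bad news hears only a few reports before the channel goes quiet for good; a leader who rewards it keeps hearing problems for all fifty interactions. No one in the model is dishonest — everyone is just paying attention.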

The better your team gets at reading you, the narrower your view of reality becomes. This isn't a failure of their candor. It's a consequence of your patterns.

Over time, the team's algorithm becomes extraordinarily accurate. They know which initiatives you'll champion and which you'll quietly deprioritize. They know how to frame a risk so it doesn't alarm you. They know which data to lead with and which to bury in an appendix.

And you, receiving information that has been pre-filtered to match your known preferences, feel increasingly confident that you have a clear picture of what's happening. You don't. You have a highly personalized one.

The consequences are predictable, and costly.

What Gets Lost in the Filter

Strategic surprises arrive without warning because early signals were softened before they reached you.

Dissenting perspectives disappear from meetings because people have learned that challenge costs more than it contributes.

Creative tension flattens because the team self-edits toward consensus before the conversation begins.

Talented people who are most committed to honesty gradually disengage — or leave — because the organization's implicit incentives no longer reward what they have to offer.

What makes this especially difficult to diagnose is that everything can look fine on the surface. Meetings run smoothly. People are aligned. Decisions get made. The organization functions.

But functioning is not the same as thinking clearly. A team optimized for your comfort is not a team optimized for your effectiveness.

If everyone in the room consistently agrees with you, the most important question isn't whether you're right. It's what the room has learned about the cost of disagreeing.


Signs Your Algorithm Needs a Reset

Before looking at how to change the inputs, it helps to recognize the pattern. A few diagnostic signals:

  • You're rarely surprised by outcomes, but frequently surprised by problems that turn out to have existed for a while.
  • The same voices dominate in meetings, and the same voices stay quiet.
  • You ask for candid feedback and receive careful feedback.
  • New ideas tend to arrive pre-validated rather than genuinely exploratory.
  • When something goes wrong, the post-mortem reveals that several people saw it coming — but didn't say so.

These aren't signs of a dishonest team. They're signs of a well-trained algorithm.


Resetting the Algorithm

The algorithm changes when the inputs change. Here are four behavioral shifts that begin to reset it.

Behavioral Shifts

1. Reward the Messenger: Make honesty the path of least resistance

When someone surfaces a problem, a risk, or an uncomfortable truth, your response in that moment is training the entire room — not just the person speaking. Thank them explicitly. Ask follow-up questions that signal genuine curiosity rather than damage control. If the news is bad, resist the urge to immediately pivot to solutions. Sitting with the problem, even briefly, signals that reality is welcome here.

2. Ask for What You're Not Hearing: Name the filter out loud

In one-on-ones and team discussions, build a habit of asking: "What are we not talking about that we should be?" or "What's the version of this that I'm probably not seeing?" These questions signal that you know the algorithm exists — and that you're actively working against it. Over time, they create permission for people to surface what they've been editing out.

3. Seek Out Disconfirming Voices: Diversify the inputs

Notice whose perspective is missing from the room. Actively create access for people who are most likely to hold a different view — whether because of their role, their background, or their track record of dissent. Don't wait for them to speak up. The algorithm has usually taught them not to.

4. Examine Your Own Reactions as Data: Watch what you're rewarding

After difficult conversations, ask yourself: did I make it easier or harder for that person to be honest with me again? Not whether you handled it gracefully — whether you made honesty feel safe. The most important feedback you'll receive about your algorithm isn't in what your team says. It's in the pattern of what they've stopped saying.

The goal isn't a team that tells you what you want to hear. It's a team that trusts you enough to tell you what you need to hear. That trust isn't built through a single conversation. It's built through a consistent pattern of rewarding it — one interaction at a time.

A First Step

This week, after one meeting or one-on-one, ask a single question before the conversation ends: "What's something relevant to this that you haven't said yet?"

Then stay quiet. Don't explain the question. Don't pre-answer it. Just wait.

What comes next — or doesn't — is your first signal about the algorithm you've been running.

The next reflection builds on this one. If our teams are learning to filter information based on our reactions — what happens when that same filtering dynamic applies not just to information, but to ethics? That's the question Issue #6 takes up.
