Stop Building an AI Strategy
Start Building the Room
The meeting looked fine. The alignment was not. What leaders see in the room is rarely the whole story — the real conversation moved somewhere else a long time ago.
You've read 47 AI posts this week. Ever wondered why you can't stop?
It's not just you. We're all doing it. Scanning what other people are thinking, saying, seeing. Some doom and gloom. Some optimistic. Most somewhere in between. Many reading about it. Some even writing about it (hey there).
There's a name for this, and it's older than you'd think.
Festinger, 1954
In 1954, a psychologist named Leon Festinger published a theory that's quietly been describing human behavior ever since. He called it social comparison theory, and the core idea is simple. When we're uncertain about what to think, feel, or do, we look to other people as our yardstick.
The more uncertain the situation, the harder we look. We scan the room. We read faces. We check what our friends are posting. We bring it up at dinner to see how it lands. We don't usually notice we're doing it. It's just how we locate ourselves.
Right now, with AI, that dial is turned all the way up. And it's not just happening in your own head. It's happening on every team in your organization, all at once.
Teams are their own little ecosystems
Here's what most leaders miss about this moment.
Social comparison doesn't only happen person to person. It happens team to team. It happens inside teams. And the comparison inside your teams is happening in a language you may not fully speak, even if you're the one leading them.
Every team has its own context. They know how their work fits together. They know how their team fits into the org. They've built up shared shortcuts, shared language, shared assumptions about what's safe to say out loud and what isn't. That's a feature of good teams. It's the operating system that lets them move fast without re-explaining everything every week.
It's also why the AI conversation inside your team might look calm on the surface while a lot is actually happening underneath.
People are asking each other things like: Are you using it? How much? Did your manager say anything? Do you think this changes our roles? Is it weird that I like it? Is it weird that I don't? Most of these questions never reach you. They get answered inside the team, by the team, using whoever happens to be the loudest or most confident reference point.
That's social comparison doing its work. Your job as a leader isn't to stop it. You couldn't if you tried. Your job is to notice what's happening and shape the conditions it's happening under.
A couple of patterns are worth watching for right now.
Pattern one: the champions
Some people on your team are already in it. They've got an AI tab open while they work. They're experimenting on the weekends. They've built a workflow that saves them two hours a week and haven't told anyone because they're not sure if they're supposed to.
These are your champions. Your innovators. Your trendsetters. Every team has them. Sometimes they're who you'd expect; often they're not. The quiet analyst who's been prototyping prompts. The coordinator who's automated half their inbox. The person three seats away from the org chart's spotlight who turns out to be three months ahead of everyone else.
Find them. Talk to them. Ask what they're using, what's working, what isn't. Not because you need to copy their setup, but because they are already doing the social comparison work for your team. They are a reference point whether you've named them that way or not. And right now, the unnamed reference points are the ones carrying the most weight.
Once you surface your champions, two things happen. One, the rest of the team gets a legitimate, in-context example to compare against (instead of some LinkedIn influencer's take or a vendor demo). Two, the champions themselves get permission to keep going. Most of them have been waiting for it.
Pattern two: the indispensability play
Here's one that existed long before AI, but AI is going to make it more visible.
When people feel threatened, some of them respond by making their work look harder. More steps. More process. More meetings to explain the steps. More artifacts that prove the work happened. Not because the work requires it, but because looking indispensable feels safer than being efficient.
You've probably seen this before. It's the person who turns a twenty-minute task into a three-day project. The one whose process document grows a new section every quarter. The one who copies you on everything. This is usually not bad faith. It's a very human reaction to a very real fear.
AI is going to amplify this, because AI makes efficiency visible in a way that's uncomfortable for anyone whose perceived value is tied to effort rather than output. If a task used to take a day and now takes an hour, the person who previously "owned" that task has to reckon with who they are now that the work no longer defines them.
Some people will navigate that transition gracefully. Others will respond by making the hour-long task look like a day again, wrapping it in process, reviews, and elaborate documentation to preserve the shape of the old job.
Watch for excessive process. Watch for work that seems overdone. Watch for people who suddenly have a lot more to explain in their status updates. None of this is a character flaw. It's a signal. And it's worth addressing before it becomes the team's new normal, not by calling it out, but by making the underlying fear less scary. Which brings us to the work.
The leadership move: space and forward motion
The temptation when a transformation is happening is to pick one of two extremes. Mandate it from the top (everyone must use AI, here's the training, here's the deadline). Or leave it alone (smart people will figure it out, let's not over-engineer this).
Both approaches lose.
The mandate loses because it skips the social comparison work. People aren't looking for instructions. They're looking for how to feel about it, and a top-down directive doesn't answer that question. It just adds compliance stress on top of the uncertainty that was already there.
The hands-off approach loses because silence isn't neutral. When leaders don't create a forum, people comparison-shop with their peers, and the loudest voice (usually the most anxious one) becomes the default reference point. Inaction looks like "we don't know either," which reads as "we're not going to protect you through this."
The move in the middle is to give people space to talk about it and something real to build with.
Give them space to talk. Not a webinar. Not a Slack channel that turns into a graveyard after week two. Actual time, on the calendar, in the room, where the conversation is allowed to be messy. What are people seeing? What are they trying? What are they afraid of? What's working? What isn't? The fears don't go away because you don't mention them. They just go underground, where they're harder to address and easier to spread.
Start with your champions. Let them share what's working, not as a presentation but as a conversation. The second this turns into mandatory training, you've lost the thread. The point isn't to transfer a technique. The point is to shift the reference point from "no one knows what to do with this" to "people on our team are figuring it out, and it's okay to be one of them."
Give people a sandbox. A low-stakes space where experimenting is the point, not the exception. People learn by doing, and they learn faster when the doing isn't being graded. If the only time someone touches an AI tool is in a live client deliverable, they'll either avoid it or misuse it. Neither gets you where you want to go.
You can require usage. Don't require methods. Requiring AI adoption is a legitimate business decision, and sometimes it's the right one. But require the usage, not the specific way it's used. Let people find their own path inside their own work. Let them captain the ship, or at least navigate. The people closest to the work are the ones who'll figure out how AI actually fits into it, not the people two levels up drawing workflow diagrams for someone else's job.
This is the core of change adoption, and it hasn't changed in decades. People adopt what they help design. They resist what gets done to them. AI doesn't get an exemption from that rule just because it's new.
Why I use LEGO for this
The reason I've built my practice around LEGO Serious Play isn't because I think bricks are magic. It's because change adoption gets stuck in two specific places, and LSP unsticks both of them fast.
The first stuck point. People can't safely say what they're actually afraid of. The fear is real, but naming it out loud feels risky, especially in front of the people whose opinions matter most. So it stays inside. It leaks sideways. It shows up as resistance, or disengagement, or that indispensability play we talked about earlier.
The second stuck point. Even when people can name the fear, they can't see a path to what's next. They know the ground is moving. They don't know what to build on it.
LSP handles both, together, in a compressed time frame. The bricks give people something to point at instead of having to put their fears into words that feel risky to say out loud. You can build a model of what's scaring you. You can build a model of what you wish would happen. You can build a model of where your team is now and where it needs to be, and suddenly a conversation that would have taken weeks of 1:1s is happening in an afternoon, with everyone in the same room, building together.
Then, and this is the part that matters most, the same bricks become the material for building forward. The fears go on the table, the team sees them for what they are, and the work shifts from what are we worried about to what do we do next. You compress weeks of hallway conversations into a couple of hours of shared, visible, built-together work.
That's the whole game with AI adoption right now. The teams that move fastest aren't the ones with the best tools. They're the ones that create space for the real conversation and then give people something real to build with.
The question underneath
Your team is comparing. They're watching each other, watching other teams, watching you. They're trying to locate themselves in a landscape that's moving faster than anyone's ready for. Underneath most of it is the same question that's been the loudest one in every major technology shift. What will happen to my job?
You can't answer that question for them. Nobody can, honestly. Anyone telling you they know exactly what AI means for your industry in three years is selling something.
But you can do the thing that actually matters, which is notice what your team is asking each other, and give them somewhere to work it out together. The leaders who do well in the next couple of years won't be the ones with the slickest AI strategy deck. They'll be the ones whose teams felt steady enough to keep building while the ground was shifting.
That's not a tool problem. It's a room problem.
Make the room.