Managers are using AI as a ‘sounding board’ for tough calls

According to Fast Company, AI is rapidly evolving from handling rote tasks to acting as a “thinking partner” for bigger-picture managerial decisions. Stacy Spikes, CEO of MoviePass, explicitly compares AI platforms to a “chief of staff or a senior adviser” and uses them as a second set of eyes. He treats the technology as a sounding board for delicate situations, like approaching vendors or handling tricky interpersonal dynamics, so he can understand them better before he acts. Spikes emphasizes that he doesn’t let AI make the final decision, but he says the tool has even made him “a little bit kinder” in his approach. The shift promises the efficiency gains companies are chasing, but it also brings fresh risks that organizations are only starting to navigate. The core tension lies in balancing speed and precision against the need for clear guardrails, so that human judgment doesn’t slip into autopilot.

The human-AI copilot dynamic

Here’s the thing: what Spikes is describing isn’t automation. It’s augmentation. He’s not outsourcing a decision to a black box; he’s using AI to stress-test his own thinking, to see an issue from another angle before he walks into a room. That’s a fundamentally different use case than, say, an AI sorting resumes. It’s about preparation and perspective. And honestly, it sounds incredibly useful. How many bad meetings or strained negotiations have happened because we only had our own, emotionally charged viewpoint? Having a dispassionate, data-informed “advisor” to run things by could be a game-changer for leadership soft skills.

Where the guardrails need to be

But this is where it gets tricky. Spikes is a CEO with a clear, strong framework: AI is a tool, not the decider. That mindset is crucial. The risk for many organizations is that this “sounding board” function slowly morphs into a crutch. If the AI always suggests a slightly more diplomatic email, do employees eventually lose the ability to craft one themselves? If it models every difficult conversation, does our own empathy muscle atrophy? The article nails it by pointing to the danger of judgment going on “autopilot.” The guardrails aren’t just about preventing bias or errors in the AI’s output. They’re about preventing the erosion of human judgment and emotional intelligence. Companies diving in headfirst need to ask: are we training the AI, or is the AI training us?

The unseen stakeholder impact

So what does this mean for everyone else? For employees, this could be a great leveler. Junior managers might get access to senior-level strategic counsel instantly. For the companies selling these AI platforms, the market is shifting from productivity tools to executive coaching and strategy aids. That’s a huge shift. But let’s think about the people on the other side of these AI-assisted decisions—the vendors, the employees receiving feedback. Will interactions start to feel strangely uniform, like everyone is reading from the same AI-politeness script? There’s a potential uncanny valley in communication here. The real test will be whether AI makes managers not just *seem* kinder, but actually helps them develop deeper, more genuine understanding. That part probably still requires a human heart.
