Regulators Are Finally Looking at AI’s Antitrust Risks

According to PYMNTS.com, a recent analysis by the law firm Steptoe warns that the expanded use of AI is creating a wider range of antitrust risks for companies, moving beyond the established debate over algorithmic price-fixing. The firm points to recent enforcement actions as early signals, including the U.S. Department of Justice’s case against RealPage for its AI-driven rent-setting software. In Europe, guidance suggests that even parallel use of a common pricing algorithm could breach competition law without any direct communication between firms. Steptoe argues these cases are just the beginning, with new risks emerging from AI systems that learn to avoid competition, engage in personalized pricing, or execute predatory strategies. The analysis concludes that as AI becomes standard, companies will face closer scrutiny on how their algorithms are trained and what data they use, requiring proactive compliance assessments.

The new anticompetitive playbook

For years, the big antitrust fear with algorithms was simple, almost crude: price-fixing. Could companies use shared tools to illegally coordinate and raise prices? That’s what the DOJ’s RealPage case is about. But here’s the thing—that’s just the most obvious problem. AI is way more sophisticated. It can learn to do things humans would get busted for, without ever being explicitly told. Think about an AI that learns not to compete in a certain neighborhood or for a certain type of customer because it sees a rival is there. That’s market allocation, a classic antitrust no-no, but there’s no smoke-filled room or secret handshake. The algorithm just… figures it out. That’s a regulatory nightmare waiting to happen.
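
To make that concrete, here’s a deliberately toy sketch in Python. It is not RealPage’s system or anyone else’s (those internals aren’t public); every name and number below is invented. Two independent epsilon-greedy learners each pick a neighborhood to sell in, and a price war cuts the payoff whenever they collide:

```python
import random

# Two independent sellers, two neighborhoods. Each round a seller picks a
# neighborhood to compete in; if both pick the same one, a price war cuts
# the payoff. The agents never communicate or see each other's code.
MONOPOLY_PROFIT = 10.0   # payoff when alone in a neighborhood
CONTESTED_PROFIT = 4.0   # payoff when both sellers collide

class BanditSeller:
    """Epsilon-greedy learner over two actions: neighborhood 0 or 1."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = [0.0, 0.0]  # running average payoff per neighborhood
        self.count = [0, 0]

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(2)  # explore
        return 0 if self.value[0] >= self.value[1] else 1  # exploit

    def update(self, action, reward):
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

random.seed(0)
a, b = BanditSeller(), BanditSeller()
for _ in range(5000):
    ca, cb = a.choose(), b.choose()
    reward = CONTESTED_PROFIT if ca == cb else MONOPOLY_PROFIT
    a.update(ca, reward)
    b.update(cb, reward)

print("Seller A settled on neighborhood", a.value.index(max(a.value)))
print("Seller B settled on neighborhood", b.value.index(max(b.value)))
```

Run it and the two learners typically end up in different neighborhoods. Nobody told them to divide the market; avoiding the collision penalty was simply the most profitable thing to learn.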

Personalized pricing and predatory playbooks

Then there’s personalized pricing. We all know dynamic pricing for flights or rideshares. But what if a dominant company’s AI identifies exactly how much you are willing to pay, based on a thousand data points, and charges you the maximum? In Europe, that could be an “abuse of dominance.” In the U.S., it might fall under unfair competition laws. It feels exploitative, right? Even trickier is AI-enabled predatory pricing. An advanced system could pinpoint just the customers who are likely to switch to a new competitor and offer them crazy-low prices, while keeping prices high for everyone else. It’s a targeted, surgical strike against competition. The old legal tests about selling “below cost” don’t really fit when an algorithm can do this profitably across a vast portfolio. How do you even regulate that?
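
Here’s a hypothetical sketch of that surgical strike, again with invented features, weights, and prices: a toy churn model scores each customer’s risk of defecting to the entrant, and only the flight risks get the aggressive quote.

```python
import math

# Toy targeted-discounting logic. The model, weights, and prices are all
# made up for illustration; no real pricing system is being described.
LIST_PRICE = 100.0
RETENTION_PRICE = 62.0  # aggressive, but still above this seller's unit cost

def switch_probability(rival_nearby: bool, tenure_years: float) -> float:
    """Toy logistic churn score: newer customers near a rival are flight risks."""
    z = (1.5 if rival_nearby else 0.0) - 0.8 * tenure_years + 0.5
    return 1 / (1 + math.exp(-z))

def quoted_price(rival_nearby: bool, tenure_years: float) -> float:
    # Undercut only where the entrant could plausibly win the customer;
    # everyone else keeps paying full freight.
    if switch_probability(rival_nearby, tenure_years) > 0.5:
        return RETENTION_PRICE
    return LIST_PRICE

for label, rival, tenure in [("sticky, no rival nearby", False, 6.0),
                             ("new customer, rival nearby", True, 0.5)]:
    print(f"{label}: ${quoted_price(rival, tenure):.2f}")
```

Notice that every quote, including the aggressive one, can sit above the seller’s own cost. That is exactly why the classic below-cost test struggles to catch it.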

The data moat gets deeper

Beyond pricing tactics, the analysis flags something more foundational: data. AI doesn’t just use data; it craves it. The companies with the largest, most proprietary datasets—your Googles, your Metas—can train vastly superior models. This creates a vicious cycle: better AI attracts more users, which generates more data, which makes the AI even better. It raises barriers to entry sky-high before any obvious consumer harm, like higher prices, even appears. By the time regulators see the harm, the market might be locked up. It’s a pre-emptive monopoly built on data scale.
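
A minimal simulation makes the flywheel visible. Everything below is a made-up toy model (quality grows as the square root of accumulated data, and users flock disproportionately to the better product), not a claim about any real company or market:

```python
# Toy data-flywheel dynamics: users favor the higher-quality model, the
# winner collects more training data, and the data lifts its quality further.
MARKET = 1000.0   # users choosing a product each period
SHARPNESS = 3.0   # how strongly users favor the higher-quality model

def quality(data: float) -> float:
    return data ** 0.5  # toy assumption: diminishing returns to data

def simulate(data_a: float, data_b: float, periods: int = 20):
    for _ in range(periods):
        qa, qb = quality(data_a), quality(data_b)
        share_a = qa**SHARPNESS / (qa**SHARPNESS + qb**SHARPNESS)
        data_a += MARKET * share_a        # each user contributes data
        data_b += MARKET * (1 - share_a)
    return quality(data_a), quality(data_b)

qa, qb = simulate(200.0, 100.0)  # incumbent starts with just 2x the data
print(f"incumbent quality {qa:.1f} vs. entrant quality {qb:.1f}")
```

A modest head start compounds into a durable quality gap, and no consumer-facing price ever moved.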

So what are regulators going to do?

Don’t expect brand new “AI Antitrust Laws” anytime soon. The Steptoe analysis suggests enforcers will adapt existing doctrines. That means more scrutiny on how algorithms are built. What data trains them? What are their objectives? Essentially, companies will need to audit their own AI for competitive bias. The message is clear: if you’re deploying AI at scale, you need to think like a regulator before you flip the switch. Because once that algorithm starts learning and acting, it might be carving up markets or squeezing consumers in ways that bring the authorities knocking. The era of treating AI as a neutral “tool” is over. It’s now a potential accomplice.
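
What might that self-audit look like? Here is one hypothetical, minimal version; the log format, red-flag patterns, and thresholds are all invented for the sketch. The idea is simply to replay the algorithm’s pricing decisions and flag the patterns an enforcer would probe first:

```python
from statistics import mean

# Replay logged pricing decisions and flag enforcer-bait patterns:
# systematically softer prices only where a rival competes (possible
# targeting), or quotes that closely track a rival's (possible coordination).
quote_log = [
    # (our_price, rival_present, rival_price_if_known)
    (100.0, False, None),
    (101.0, False, None),
    (62.0,  True,  65.0),
    (63.0,  True,  64.0),
]

def audit(log, targeting_gap=0.20, mirror_tolerance=0.05):
    flags = []
    contested = [p for p, rival, _ in log if rival]
    uncontested = [p for p, rival, _ in log if not rival]
    if contested and uncontested:
        gap = 1 - mean(contested) / mean(uncontested)
        if gap > targeting_gap:
            flags.append(f"prices average {gap:.0%} lower where a rival competes")
    mirrored = [abs(p - rp) / rp <= mirror_tolerance
                for p, rival, rp in log if rival and rp]
    if mirrored and all(mirrored):
        flags.append("quotes consistently track the rival's price")
    return flags

for flag in audit(quote_log):
    print("REVIEW:", flag)
```

A real compliance review would go much deeper, but even a crude replay forces the right question: what has the algorithm actually learned to do?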
