August 1, 2025
Article
Can AI Help Us Make Better Business Decisions?
We'd all like to think we make rational, objective decisions at work. But research has consistently shown that cognitive biases—those mental shortcuts our brains take—often lead us astray. A groundbreaking new study from SAP researchers reveals just how pervasive these biases are in business decision-making, and offers an unexpected solution: AI language models.
The Bias Problem Is Bigger Than We Think
The research team surveyed 61 senior business leaders across North America, Europe, and Africa, and the results were striking: 82% of decision-makers reported experiencing three or more cognitive biases "often" or "very often." This isn't just an academic concern—these leaders were making decisions with economic impacts ranging from $10,000 to over $1 million.
The most common culprits? Conformity bias topped the list at 74%, where people align their opinions with the group to avoid disapproval. Close behind were availability bias (67%), where recent or memorable information weighs too heavily, and anchoring bias (64%), where initial numbers disproportionately influence our judgment.
Can AI Do Better?
Here's where it gets interesting. The researchers created 40 realistic business scenarios across finance, marketing, HR, IT, and procurement—each designed to test whether participants would fall for a specific cognitive bias. They presented these scenarios to both human decision-makers and three state-of-the-art AI models: GPT-4o, Claude 3.5 Sonnet, and Meta's LLaMA3-70B.
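The paper doesn't publish its test materials, but it helps to picture each of the 40 scenarios as a small structured record pairing a realistic business prompt with the bias it is meant to trigger. The sketch below is purely illustrative: the BiasScenario class, its field names, and the anchoring example are our own assumptions, not the study's actual data.

    from dataclasses import dataclass

    @dataclass
    class BiasScenario:
        """One bias-probing test case (illustrative structure, not the study's data)."""
        domain: str           # e.g. finance, marketing, HR, IT, procurement
        bias: str             # the cognitive bias the scenario is designed to trigger
        prompt: str           # the business situation presented to the decision-maker
        biased_answer: str    # the choice a biased respondent tends to pick
        rational_answer: str  # the choice careful, deliberate reasoning supports

    # A made-up anchoring-bias example in the spirit of the 40 scenarios
    example = BiasScenario(
        domain="procurement",
        bias="anchoring",
        prompt=(
            "A vendor opens negotiations at $950,000 for a software license. "
            "Comparable licenses have recently sold for $400,000-$500,000. "
            "What counteroffer should the procurement team make?"
        ),
        biased_answer="Counter near $800,000, staying close to the opening number.",
        rational_answer="Counter in the $400,000-$500,000 range supported by market data.",
    )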
The results were remarkable:
AI models avoided biases 81% of the time without any special prompting
With a simple hint to watch for biases, success rates jumped to 92%
Human participants, by contrast, avoided biases only 60% of the time
Claude 3.5 Sonnet performed best, achieving 95% accuracy when prompted to consider potential biases
Even more telling: AI models provided significantly more thorough reasoning, averaging over 2,000 characters with 5-8 supporting arguments, compared to humans' 250 characters and fewer than 2 arguments on average.
Why Do AI Models Perform Better?
The researchers attribute this success to reinforcement learning with human feedback (RLHF), which trains AI models to engage in more deliberate, System 2 thinking—the slow, analytical reasoning that counteracts the fast, intuitive System 1 thinking where biases typically emerge.
Different biases proved more or less challenging for AI:
100% accuracy: Sunk cost fallacy, anchoring bias, and conformity bias (when cued)
Most challenging: "Affect as information" (emotional reasoning) at 75% accuracy—though still better than human performance
Practical Implications for Business
This research isn't suggesting we hand over decision-making to AI. Instead, it points to a powerful new role for AI as a decision support tool—a thinking partner that can help us recognize and overcome our blind spots.
The researchers offer specific design recommendations for building bias-mitigating systems:
For conformity bias: Design collaborative tools that encourage individual opinions before revealing group perspectives
For availability bias: Present a broad, representative set of options rather than just what's top-of-mind
For anchoring bias: Provide multiple reference points to contextualize initial numbers
For sunk cost fallacy: Create visual decision trees that make it easy to revisit and question earlier choices
For emotional bias: Detect emotionally charged language and automatically rephrase requests in neutral terms
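The last recommendation lends itself to a short illustration. The sketch below is an assumption of ours rather than anything described in the paper or shipped by SAP: it uses the OpenAI Python SDK and GPT-4o to rewrite an emotionally charged request in neutral terms before it reaches the decision-maker.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def neutralize(request: str) -> str:
        """Rephrase an emotionally charged business request in neutral, factual terms."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Rewrite the user's request so that it states the same facts and "
                        "question without emotionally loaded language. Return only the rewrite."
                    ),
                },
                {"role": "user", "content": request},
            ],
        )
        return response.choices[0].message.content

    charged = (
        "Our incompetent supplier has failed us yet again -- should we finally "
        "dump them for the new vendor everyone is raving about?"
    )
    print(neutralize(charged))
    # A neutral rewrite might read: "Our supplier missed the last delivery deadline.
    # Should we switch to the alternative vendor, and on what criteria?"

In practice such a rewriter would sit in front of the decision-support tool, so that recommendations are grounded in the facts of a request rather than its tone.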
The Bottom Line
We're at an inflection point where AI systems are becoming integral to business decision-making. This research shows that current-generation LLMs can effectively recognize and help us avoid cognitive biases that have plagued human judgment for millennia.
The key insight? A simple prompt asking the AI to watch for cognitive biases dramatically improves performance. This suggests that thoughtfully designed decision support systems could help improve business decisions worth billions of dollars.
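To make that "simple prompt" concrete, here is a minimal sketch of the two experimental conditions, assuming the OpenAI Python SDK and GPT-4o. The hint wording, helper function, and scenario are illustrative assumptions, not the study's actual prompts.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    BIAS_HINT = (
        "Before answering, check whether the scenario invites a common cognitive bias "
        "(e.g. anchoring, conformity, availability, sunk cost) and reason past it."
    )

    def decide(scenario: str, warn_about_bias: bool = False) -> str:
        """Ask the model for a recommendation, optionally adding the debiasing hint."""
        system = "You are a senior business advisor. Recommend a decision and justify it briefly."
        if warn_about_bias:
            system += " " + BIAS_HINT
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": scenario},
            ],
        )
        return response.choices[0].message.content

    scenario = (
        "A vendor opens negotiations at $950,000 for a software license; comparable "
        "licenses recently sold for $400,000-$500,000. What should we counteroffer?"
    )
    baseline = decide(scenario)                        # unprompted condition
    debiased = decide(scenario, warn_about_bias=True)  # with the bias hint

The only difference between the two calls is one extra sentence in the system prompt, which is the kind of lightweight intervention the researchers found lifted bias avoidance from 81% to 92%.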
As we integrate AI more deeply into our workflows, the question isn't whether to trust AI over human judgment—it's how to design systems that combine the best of both: human contextual wisdom with AI's ability to spot patterns and maintain objectivity.
The future of business decision-making may not be human versus machine, but human with machine—each compensating for the other's weaknesses to reach better outcomes together.
Based on "Mediating Cognitive Biases in Business Decisions using LLMs" by Natalie Friedman, Marcus Krug, and S. Joy Mountford, SAP Research and Innovation

