Why We're Getting AI Wrong in 2025: The Hidden Biases Clouding Our Judgment
As AI reshapes our world in 2025, we're often wrong about its future because of hidden biases like overconfidence and anchoring. These mental shortcuts distort our predictions, fueling both hype and fear. This report shows how recognizing them helps us adapt wisely, pairing technology with human judgment.

Understanding human behavior has never been more important than it is today. As artificial intelligence reshapes our world at lightning speed, we're making critical errors in predicting its future impact. The problem isn't with the technology itself—it's with how our brains process information about complex, rapidly changing systems. This report explores why humans struggle to make accurate predictions about AI and how centuries-old cognitive biases are leading us astray in the digital age.
The Human Prediction Problem
Humans are notoriously bad at predicting the future, and research shows we're wrong about technology more often than we're right[1][2][3]. Studies reveal that when people claim to be 99% certain about something, they're actually wrong about 40% of the time[4]. This isn't a character flaw—it's how our brains are wired.
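The gap between stated confidence and actual accuracy is easy to see with a toy simulation. The sketch below is illustrative only (the 60% accuracy figure is a hypothetical, not a number from the cited studies): it models a forecaster who claims 99% certainty on every call but is right far less often, then measures the calibration gap.

```python
import random

random.seed(0)

def simulate_forecaster(stated_confidence, true_accuracy, n=10_000):
    """Simulate n predictions, each claimed at `stated_confidence`
    but actually correct only with probability `true_accuracy`."""
    hits = sum(random.random() < true_accuracy for _ in range(n))
    return hits / n

# A forecaster who says "99% certain" but is right about 60% of the time:
hit_rate = simulate_forecaster(stated_confidence=0.99, true_accuracy=0.60)
calibration_gap = 0.99 - hit_rate
print(f"claimed: 99%, observed: {hit_rate:.0%}, gap: {calibration_gap:.0%}")
```

The gap of roughly 40 percentage points is the kind of miscalibration the research above describes: confidence is self-reported, but accuracy has to be measured.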
The issue becomes particularly pronounced with artificial intelligence because AI represents something fundamentally different from previous technologies. Unlike cars or smartphones, which we can physically understand, AI operates in ways that are largely invisible to us. This invisibility creates perfect conditions for our cognitive biases to run wild.
The AI Hype Cycle: Where Promise Meets Reality
The current AI landscape perfectly illustrates our prediction failures. In 2025, we're witnessing what experts call the peak of the AI hype cycle[5][6][7]. The technology has moved past its initial innovation trigger, through early hype, and is now at the peak of inflated expectations.
Recent data shows a stark reality check: 42% of businesses are scrapping the majority of their AI initiatives, up from just 17% six months earlier[1]. Gartner forecasts a 30% abandonment rate for AI projects by the end of 2025[8]. Meanwhile, 69% of enterprise AI initiatives stall before being operationalized, and 46% of proof-of-concept projects are abandoned entirely[8].
This pattern isn't unique to AI—it's happened with every major technology from the internet to blockchain. But our brains don't learn from these historical patterns because they're clouded by specific cognitive biases.
The Cognitive Bias Trap
Availability Heuristic: Media Shapes Reality
The availability heuristic causes us to judge the probability of events based on how easily we can recall similar examples[9][10]. In the AI world, this means we overestimate both the promises and threats of AI based on whatever stories dominate the news cycle.
When media outlets constantly report on AI breakthroughs—from ChatGPT writing poetry to AI solving complex scientific problems—our brains interpret this as evidence that AI is advancing faster and more comprehensively than it actually is[11][12]. Conversely, when we see reports of AI failures, like chatbots giving dangerous medical advice or self-driving cars causing accidents, we may overestimate these risks[13].
The problem is that media coverage doesn't reflect statistical reality. Dramatic AI stories get clicks, while incremental progress or mundane limitations don't. This creates a distorted mental picture of what AI can and cannot do.
Anchoring Bias: Stuck on First Impressions
Anchoring bias causes us to rely heavily on the first piece of information we receive about a topic[14][15]. In AI discussions, this often means people anchor on either utopian or dystopian scenarios they encounter early in their AI education.
If someone's first exposure to AI is through science fiction movies depicting robot overlords, they may permanently anchor on existential risk scenarios, even when presented with more nuanced information. Alternatively, if they first encounter AI through impressive demonstrations of large language models, they might anchor on the belief that artificial general intelligence is just around the corner[16].
This anchoring effect explains why AI debates often feel like people are talking past each other—they're anchored on fundamentally different reference points.
Overconfidence Bias: The Dunning-Kruger Effect in AI
Overconfidence bias is particularly dangerous in the AI domain because AI systems themselves can present confident-sounding answers even when they're wrong[4][17][18]. This creates a feedback loop where humans become overconfident about AI capabilities, leading to poor decision-making.
The Dunning-Kruger effect manifests in AI discussions when people with limited technical knowledge make bold predictions about AI timelines or capabilities[17][18]. They may overestimate their understanding of machine learning principles while underestimating the complexity of creating truly intelligent systems.
Interestingly, this bias works both ways. AI experts sometimes exhibit "curse of knowledge" bias, assuming that others understand AI's limitations as clearly as they do, leading to insufficient communication about realistic expectations[19].
Confirmation Bias: Seeing What We Want to See
Confirmation bias leads us to seek information that confirms our existing beliefs while ignoring contradictory evidence[20][21]. In AI discussions, this creates echo chambers where optimists only read about AI successes while pessimists focus solely on failures.
This bias is particularly problematic because AI is genuinely transformative in some areas while completely ineffective in others. Someone who wants to believe AI will solve climate change might focus on breakthrough materials discovery while ignoring AI's massive energy consumption. Conversely, AI skeptics might emphasize every chatbot mistake while dismissing genuine medical diagnosis improvements.
Sunk Cost Fallacy: Trapped by Past Investments
The sunk cost fallacy causes organizations to continue investing in failing AI projects because they've already spent significant resources[22][23][24]. This is particularly common in enterprise AI deployments where companies have invested millions in AI infrastructure that isn't delivering promised returns.
Rather than objectively evaluating whether to pivot or abandon ineffective AI initiatives, decision-makers often double down because admitting failure would mean acknowledging wasted investment. This leads to "zombie AI projects" that consume resources without delivering value.
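The antidote to the sunk cost fallacy can be stated as a simple decision rule: only future costs and benefits matter. The sketch below uses hypothetical dollar figures and a made-up `should_continue` helper to show the rule in code; the key design choice is that the function accepts the sunk cost but deliberately never uses it.

```python
# Sunk-cost-free decision rule: compare expected future value against
# remaining cost only. All numbers are hypothetical placeholders.
def should_continue(expected_future_value, remaining_cost, sunk_cost=None):
    """Decide purely on forward-looking value. `sunk_cost` is accepted
    but intentionally ignored: it is unrecoverable either way."""
    return expected_future_value > remaining_cost

# Suppose $3M is already spent (sunk), $2M more is needed to finish,
# and only $1M of expected value remains.
decision = should_continue(expected_future_value=1.0,
                           remaining_cost=2.0,
                           sunk_cost=3.0)
print("continue" if decision else "stop")  # prints "stop"
```

Psychologically, the $3M already spent looms large; arithmetically, it changes nothing. A "zombie AI project" is one where the implicit decision function still takes `sunk_cost` as a real input.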
Why Prediction Is Fundamentally Hard
Beyond cognitive biases, prediction about AI is inherently difficult for several reasons:
Exponential vs. Linear Thinking
Human brains evolved to think linearly, but AI development often follows exponential patterns[25][3]. We struggle to internalize what exponential growth really means, leading to both underestimation of near-term capabilities and overestimation of long-term timelines.
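A quick numerical comparison makes the point concrete. The sketch below uses illustrative numbers (not a real AI benchmark): one quantity grows by a fixed step each period, the other doubles each period.

```python
# Compare a linear trend with a quantity that doubles every period.
# Starting values and steps are illustrative only.
def linear(start, step, periods):
    return [start + step * t for t in range(periods + 1)]

def exponential(start, periods, doubling_every=1):
    return [start * 2 ** (t / doubling_every) for t in range(periods + 1)]

lin = linear(start=1, step=1, periods=10)   # grows by 1 each period
exp = exponential(start=1, periods=10)      # doubles each period

for t in (2, 5, 10):
    print(f"t={t:2d}  linear={lin[t]:4.0f}  exponential={exp[t]:5.0f}")
```

For the first couple of periods the two curves are nearly indistinguishable, which is why linear intuition feels adequate; by t=10 the exponential value (1024) is nearly a hundred times the linear one (11). Our intuitions are trained on the left side of that table.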
Complex System Interactions
AI systems don't exist in isolation—they interact with human behavior, economic systems, regulatory frameworks, and social structures in unpredictable ways[26][16]. These interactions create emergent properties that are nearly impossible to forecast.
The Black Box Problem
Most modern AI systems, particularly large language models, operate as "black boxes" where we can observe inputs and outputs but not the internal reasoning process[13][19]. This opacity makes it difficult to predict when and how these systems will fail or succeed.
Real-World Consequences
These prediction failures have serious real-world impacts:
Business Decisions: Companies make poor strategic decisions based on inflated AI expectations, leading to wasted resources and missed opportunities[1][13].
Policy Making: Governments struggle to create appropriate AI regulations because they can't accurately predict AI development trajectories[27][28].
Individual Choices: People make suboptimal personal and professional decisions based on unrealistic AI forecasts, from career choices to investment decisions[29][21].
Social Preparation: Society fails to adequately prepare for AI's actual impacts—both positive and negative—because public discourse is dominated by extreme scenarios rather than realistic assessments[26][30].
The Feedback Loop Problem
Perhaps most concerning is how AI systems themselves amplify human biases. When AI models are trained on human-generated data, they learn and potentially amplify our cognitive biases[27][26]. This creates a feedback loop where biased humans create biased AI, which then influences human thinking in increasingly biased ways.
Research shows that when humans repeatedly interact with biased AI systems, they become more biased themselves[26]. This suggests that our prediction problems about AI may actually get worse over time, not better.
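This amplification dynamic can be sketched as a toy model. The code below is a simplification, not the model from the cited research: a human holds a biased belief, an AI trained on that belief slightly amplifies it, and the human then drifts partway toward the AI's output on each interaction. The `amplification` and `influence` parameters are hypothetical.

```python
# Toy model of a human-AI bias feedback loop (illustrative only).
def feedback_loop(human_bias, amplification=1.1, influence=0.3, rounds=10):
    """Each round: the AI amplifies the human's bias, and the human
    shifts their own belief a fraction of the way toward the AI's."""
    trajectory = [human_bias]
    for _ in range(rounds):
        ai_bias = min(1.0, human_bias * amplification)      # AI amplifies
        human_bias += influence * (ai_bias - human_bias)    # human drifts
        trajectory.append(human_bias)
    return trajectory

traj = feedback_loop(human_bias=0.2)
print([round(b, 3) for b in traj])
```

No single step looks dramatic (each round moves the belief only ~3%), yet the bias compounds monotonically; that quiet compounding is what makes the feedback loop hard to notice from inside it.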
Moving Forward: Building Better Prediction Skills
While we can't eliminate cognitive biases entirely, we can develop strategies to make better predictions about AI:
Diversify Information Sources: Actively seek out perspectives that challenge your existing beliefs about AI[30][31].
Focus on Base Rates: When making predictions, start with historical data about technology adoption rather than dramatic individual examples[32][33].
Embrace Uncertainty: Accept that the future of AI is genuinely uncertain and avoid overconfident predictions in either direction[34][35].
Learn from History: Study how previous technologies actually developed compared to early predictions to calibrate expectations[36][37].
Question Timing: Be especially skeptical of specific timeline predictions, as these are where humans perform worst[38][2].
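The "focus on base rates" advice above can be sketched as a simple reference-class forecast: anchor on the historical outcome rate for comparable cases and shift only modestly toward your case-specific impression. All the numbers and the `reference_class_forecast` helper below are hypothetical illustrations, not figures from the cited sources.

```python
# Reference-class forecasting sketch: blend the outside view (base rate)
# with the inside view, keeping most of the weight on history.
def reference_class_forecast(base_rate, inside_view, weight_on_inside=0.25):
    """Weighted blend of the historical base rate and the
    case-specific estimate. A small inside weight keeps the
    forecast anchored on what usually happens."""
    return (1 - weight_on_inside) * base_rate + weight_on_inside * inside_view

# Suppose ~30% of comparable AI pilots historically reach production,
# while the team's own optimistic estimate for this project is 80%.
forecast = reference_class_forecast(base_rate=0.30, inside_view=0.80)
print(f"forecast: {forecast:.0%}")
```

The blended forecast lands around 43%: pulled toward the team's optimism, but only part of the way. The design choice to weight the outside view heavily is exactly the discipline the base-rate advice calls for.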
The challenge isn't to predict AI's future perfectly—that's impossible. Instead, it's to recognize our limitations and make decisions that are robust across multiple possible futures. By understanding how our biases shape our AI predictions, we can begin to see more clearly through the hype and fear to make better choices about this transformative technology.
The future of AI remains genuinely uncertain, but one thing is clear: our biggest obstacle to navigating that future wisely isn't artificial intelligence—it's our own all-too-human psychology.
Navigating AI-Driven Changes Through Coaching
As AI continues to transform our work and daily lives—automating routine tasks, shifting job requirements, and introducing new ways of collaborating—many people feel overwhelmed by the pace of change. This is where coaching comes in as a practical tool to help individuals and teams adapt. At its core, coaching provides personalized guidance to build resilience, clarify goals, and develop new skills in uncertain times[39][40][41]. It helps people move from feeling stuck to taking purposeful action, making it especially valuable for handling AI's disruptions.
In general, coaching supports adaptation by focusing on key areas:
- Building Self-Awareness and Resilience: Coaches help you understand your reactions to change, like anxiety about job security or excitement about new opportunities. Through one-on-one sessions, you can develop a growth mindset, viewing AI as a tool for enhancement rather than a threat. This builds emotional strength to handle shifts, such as learning to work alongside AI systems[42][43][44].
- Improving Communication and Decision-Making: With AI changing team dynamics, coaching sharpens skills for clear communication and better choices. It guides leaders to align personal goals with organizational shifts, reducing resistance and fostering collaboration[45][46][39].
- Setting Actionable Goals: Coaches break down big challenges into manageable steps, like upskilling for AI-integrated roles or rethinking career paths. This structured approach turns uncertainty into progress, with real-time feedback to adjust as needed[47][48][49].
Particularly effective in this context is Wingwave coaching, a targeted method that combines rapid eye movement techniques (similar to what happens in dream sleep) with muscle feedback testing to quickly reduce stress and boost mental fitness[50][51][52]. Developed for high-performance settings like sports and business, Wingwave helps process emotional blocks from change—such as fear of obsolescence in an AI world—often in just a few sessions.
Here's how Wingwave stands out for AI adaptation:
- Quick Stress Reduction: It identifies and clears subconscious stress triggers related to AI changes, like worry about job automation, leading to clearer thinking and increased creativity[50][51][53].
- Enhanced Mental Agility: By improving emotional balance, Wingwave boosts your ability to handle conflicts and adapt to new work environments, such as hybrid human-AI teams. Athletes using it report better problem-solving and well-being, which translates well to professional settings[51][52].
- Practical Integration: Combined with broader coaching, Wingwave helps you build conflict resolution skills and mental stamina, making it easier to embrace AI tools while maintaining human strengths like empathy and innovation[50][53].
While AI itself is creating adaptive coaching tools for personalized guidance[54][55][56], human-led coaching like Wingwave adds irreplaceable elements—genuine relationships and nuanced judgment that machines can't fully replicate[57]. Whether through general coaching or specialized methods like Wingwave, these approaches empower you to not just survive AI changes, but thrive by turning challenges into opportunities for growth. If you're facing these shifts, starting with a coaching conversation could be your first step toward a more confident future.
Sources
[1] Why AI Fails: The Untold Truths Behind 2025's Biggest Tech Letdowns https://www.techfunnel.com/fintech/ft-latest/why-ai-fails-2025-lessons/
[2] Research: Executives Who Used Gen AI Made Worse Predictions https://hbr.org/2025/07/research-executives-who-used-gen-ai-made-worse-predictions
[3] Why are humans so bad at predicting the future? - Quartz https://qz.com/1752106/why-are-humans-so-bad-at-predicting-the-future
[4] How Overconfidence Bias Sabotages Decision-Making - ATR Soft https://www.atrsoft.com/noni/blog/how-overconfidence-bias-sabotages-decision-making/
[5] Long Live the Metaverse: Identifying the Potential for Market Disruption and Future Research https://www.tandfonline.com/doi/full/10.1080/07421222.2025.2455770
[6] Promise versus Reality: Trends in Big Data and AI 2025 https://www.internationalaffairs.org.au/australianoutlook/promise-versus-reality-trends-in-big-data-and-ai-2025/
[7] Where Are We in the AI Cycle? From Hype to Reality: Mapping AI's ... https://aiworldjournal.com/where-are-we-in-the-ai-cycle-from-hype-to-reality-mapping-ais-next-turning-point/
[8] Strategic decision-making framework for evaluating and selecting GenAI use cases https://journalijsra.com/node/1464
[9] Availability heuristic - Wikipedia https://en.wikipedia.org/wiki/Availability_heuristic
[10] Availability Heuristic: Examples and Effects on Decisions https://www.verywellmind.com/availability-heuristic-2794824
[11] The Influence of Availability Heuristic on Shaping Decisions and Economic Outcomes https://www.ewadirect.com/proceedings/aemps/article/view/22634
[12] The Impact of the Availability Heuristic on Decision-Making and Risk Perception https://www.deanfrancispress.com/index.php/hc/article/view/2210
[13] AI Fail: 4 Root Causes & Real-life Examples in 2025 https://research.aimultiple.com/ai-fail/
[14] Anchoring effect - Wikipedia https://en.wikipedia.org/wiki/Anchoring_effect
[15] What Is Anchoring Bias? | Definition & Examples - Scribbr https://www.scribbr.com/research-bias/anchoring-bias/
[16] Towards an Interactionist Perspective on Human Cognitive Bias and ... https://arxiv.org/html/2504.18759v1
[17] The Dunning-Kruger Effect: Understanding Its Impact on Generative AI https://lanshore.com/knowledge_articles/the-dunning-kruger-effect-understanding-its-impact-on-generative-ai/
[18] The Dunning-Kruger Effect Within the AI Domain - LinkedIn https://www.linkedin.com/pulse/dunning-kruger-effect-within-ai-domain-neven-dujmovic-qr9mf
[19] How Cognitive Biases Affect XAI-assisted Decision-making: A Systematic Review https://dl.acm.org/doi/10.1145/3514094.3534164
[20] Navigating AI adoption: Top 5 biases related to AI-human interactions https://www.linkedin.com/pulse/navigating-ai-adoption-top-5-biases-related-ai-human-feivf
[21] Understanding Human Behavior in Finance: A Qualitative Study on Cognitive Biases and Decision-making in Investment Practices https://goldenratio.id/index.php/grfm/article/view/462
[22] Avoiding the sunk cost fallacy in App modernization - Upsun https://upsun.com/blog/avoid-the-sunk-cost-fallacy/
[23] The Sunk Cost Fallacy and Technological Progress - LinkedIn https://www.linkedin.com/posts/charlesdavidallen_the-sunk-cost-fallacy-and-technological-progress-activity-7183567156779913216-zb0T
[24] Understanding the Sunk Cost Fallacy in Digital Transformation https://www.automation.com/en-us/articles/october-2023/sunk-cost-fallacy-digital-transformation
[25] Spatiotemporal Transformer for Stock Movement Prediction https://arxiv.org/abs/2305.03835
[26] How human–AI feedback loops alter human perceptual, emotional ... https://www.nature.com/articles/s41562-024-02077-2
[27] Human cognitive biases present in Artificial Intelligence https://www.eusko-ikaskuntza.eus/en/riev/human-cognitive-biases-present-in-artificial-intelligence/rart-24782/
[28] Why This CEO Believes AI Failures Will Dominate The Headlines In ... https://www.forbes.com/sites/kolawolesamueladebayo/2025/04/03/why-this-ceo-believes-ai-failures-will-dominate-the-headlines-in-2025/
[29] How Cognitive Biases Impact AI Adoption: What Every Business ... https://hackernoon.com/how-cognitive-biases-impact-ai-adoption-what-every-business-leader-should-know
[30] Cognitive bias and how to improve sustainable decision making - PMC https://pmc.ncbi.nlm.nih.gov/articles/PMC10071311/
[31] The 5 Biggest Biases That Affect Decision-Making https://neuroleadership.com/your-brain-at-work/seeds-model-biases-affect-decision-making/
[32] 10 tips to eliminate forecast bias - Finance Alliance https://www.financealliance.io/10-tips-to-eliminate-forecast-bias/
[33] Evaluating the forecast accuracy and bias of alternative population ... https://pubmed.ncbi.nlm.nih.gov/12157868/
[34] We are bad at predicting the future. Ignore AI hype and fear - Big Think https://bigthink.com/thinking/bad-predicting-future-ai-hype-fear/
[35] Why are we so bad at predicting the future? - Neil Pasricha https://www.neil.blog/articles/why-are-we-so-bad-at-predicting-the-future
[36] What metrics should we use to measure commercial AI? https://dl.acm.org/doi/10.1145/3340470.3340479
[37] What is AI Winter? Definition, History and Timeline - TechTarget https://www.techtarget.com/searchenterpriseai/definition/AI-winter
[38] Where AI predictions go wrong - Vox https://www.vox.com/future-perfect/354157/ai-predictions-chatgpt-google-future
[39] Navigating Organizational Change: The Critical Role of ... https://www.coachhub.com/blog/navigating-organizational-change
[40] How Coaching Can Help Employees Navigate Change https://www.epiphanycoaches.com/post/how-coaching-can-help-employees-navigate-change
[41] Coaching for Change Management and Leadership https://www.innovationtraining.org/coaching-and-change-management/
[42] 5 Coaching Skills Every Leader Needs During Times Of ... https://peoplemanagingpeople.com/learning-development/coaching-skills-change/
[43] How Coaching Empowers Leaders in Navigating Major ... https://www.loebleadership.com/insights/coaching-empowers-leaders-navigating-major-changes
[44] How Does Executive Coaching Help Leaders Navigate ... https://www.the-leadership-coaches.com/post/how-does-executive-coaching-help-leaders-navigate-change
[45] Team coaching helps a leadership team drive cultural change at Caterpillar https://onlinelibrary.wiley.com/doi/10.1002/joe.20212
[46] Guiding Through Turbulent Times: Coaching During Merger and Acquisition https://www.tandfonline.com/doi/full/10.1080/14697017.2024.2361415
[47] Exploring the Role of Common Model of Cognition in Designing Adaptive Coaching Interactions for Health Behavior Change https://dl.acm.org/doi/10.1145/3375790
[48] Coaching entrepreneurs towards growth: an experimental design study of coaching effectiveness for business leaders’ psychological capabilities https://www.tandfonline.com/doi/full/10.1080/17521882.2024.2348449
[49] Coaching to Navigate Rapid Change https://www.executivecoachcollege.com/research-and-publications/coaching-to-navigate-rapid-change.php
[50] What is wingwave? https://wingwave.com/en/about-wingwave/what-is-wingwave
[51] Effects of wingwave® on athletes' wellbeing and fluidity of gaze ... https://pmc.ncbi.nlm.nih.gov/articles/PMC9995426/
[52] Wingwave Coaching - WhitneyBreer.com https://whitneybreer.com/en/organizational-development/coaching/wingwave-coaching/
[53] Wingwave Coaching - Janina Malyon Counselling https://www.maidstonecounselling.com/wingwave.html
[54] AI Coaches That Adapt to Coaching Style Preferences - Insight7 https://insight7.io/ai-coaches-that-adapt-to-coaching-style-preferences/
[55] The Future of Coaching: How Generative AI is Transforming ... https://www.odgers.com/en-au/insights/the-future-of-coaching-how-generative-ai-is-transforming-systemic-and-executive-coaching/
[56] AI in Coaching: Ethics, Innovation & Best Practices https://coachingfederation.org/resource/icf-artificial-intelligence-coaching-standards-a-practical-guide-to-integrating-ai-and-coaching/
[57] Why AI can't replace the human touch in coaching https://www.brookes.ac.uk/about-brookes/news/news-from-2024/12/why-ai-can-t-replace-the-human-touch-in-coaching-i