Date: By the end of 2024
Status: As of July 2025, not yet. Full self-driving still requires supervision.
Source: Wikipedia, July 7, 2025

Date: Current as of 2025
Status: Partially true; AI coding tools like GitHub Copilot are advanced but haven’t fully replaced coders.
Source: AfroTech, April 23, 2025

Date: By 2025–2026
Status: As of July 2025, not yet. While AI has advanced, it has not surpassed the smartest human in general intelligence.
Source: Reuters, April 8, 2024

Date: By 2025–2026
Status: As of July 2025, not yet. The singularity has not occurred.
Source: Wall Street Pit, August 7, 2024

Date: By 2026
Status: As of 2025, not yet. It’s a future prediction.
Source: TechRadar, June 11, 2025

Date: By October 2026–April 2027
Status: As of July 2025, not yet. It’s a future prediction.
Source: Engadget, April 30, 2025

Date: By 2027–2029
Status: As of 2025, not yet fully realized. AI is expanding, but spatial intelligence is still developing.
Source: Computer History Museum, September 23, 2024

Date: By 2028–2030
Status: As of 2025, not yet. It’s a future prediction.
Source: TechCrunch, January 23, 2025

Date: By 2028–2043
Status: As of 2025, not yet. It’s a future prediction.
Source: Yoshua Bengio’s blog, August 12, 2023

Date: By 2029
Status: As of 2025, not yet. AGI is still in development, with significant progress but not yet at human-level intelligence.
Source: LifeArchitect.ai

Date: By 2030–2035
Status: As of 2025, not yet. AGI has not been achieved.
Source: Daily Mail, April 24, 2025

Date: By 2030
Status: As of 2025, not yet. It’s a future prediction.
Source: Forbes, June 4, 2025

Date: By 2030
Status: As of 2025, not yet. AI in education is growing but not at scale.
Source: General industry statements, 2023–2025

Date: By 2033
Status: As of 2025, not yet. It’s a future prediction.
Source: Podcast interview, 2023

Date: By 2032–2042
Status: As of 2025, not yet. It’s a future prediction.
Source: 80,000 Hours podcast, July 1, 2022

Date: By 2035
Status: As of 2025, not yet. AI is used in medicine and education but has not replaced professionals.
Source: CNBC, March 26, 2025

Date: By 2054
Status: As of 2025, not yet. It’s a future prediction with ongoing debates about AI risks.
Source: The Guardian, December 28, 2024

Date: By 2121–2221
Status: As of 2025, not yet. It’s a long-term prediction.
Source: 60 Minutes, October 31, 2021

Date: No specific timeline; general warning
Status: Ongoing; AI-generated deepfakes are a concern but haven’t destroyed civilization.
Source: The Atlantic, May 17, 2023

Date: In the coming years
Status: As of 2025, not yet. It’s a future creative project.
Source: TIME, September 7, 2023

Date: No specific timeline; general warning
Status: Ongoing; AI is developing with potential risks.
Source: CTV News, July 18, 2023

Date: No specific timeline; ongoing trend
Status: Ongoing; NVIDIA’s AI chips are driving advancements.
Source: General industry statements, 2023–2025

Date: No specific timeline; far off
Status: Ongoing; as of 2025, human-level AI has not been achieved.
Source: Artificial Intelligence: A Guide for Thinking Humans, 2019

Date: Someday; no specific timeline
Status: Ongoing; AI is not yet conscious.
Source: ABC News, May 7, 2023

Date: No specific timeline; near future
Status: Ongoing; AI tools like Copilot are in use but not fully transformative yet.
Source: General industry statements, 2023–2025

Date: No specific timeline; near future
Status: Ongoing; AI agents are emerging but haven’t fully disrupted SaaS.
Source: Outlook Business, January 10, 2025

Date: No specific timeline; unlikely to happen
Status: Ongoing debate; as of 2025, human-level AI has not been achieved.
Source: Harvard Gazette, February 14, 2023

Date: No specific timeline; over time
Status: Ongoing; Apple Intelligence is being integrated but hasn’t fully reinvented products.
Source: Entrepreneur, December 4, 2024

Date: No specific timeline; general warning
Status: Ongoing; AI is developing with both benefits and risks.
Source: CNBC, May 4, 2024
Does your problem really need AI? Many teams feel pressure to add LLMs to every workflow, but using AI without a clear reason can increase cost, complexity, and risk. A better approach is to use AI with intent: break the workflow into its smallest tasks, then ask of each one whether the decision depth is low or high. That simple question helps you pick the right tool and the right level of oversight.
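As a minimal sketch of that tagging step, the snippet below breaks a hypothetical booking workflow into tasks and labels each with a decision depth, which then determines how it is handled (all names here are illustrative, not a prescribed API):

```python
from dataclasses import dataclass
from enum import Enum

class DecisionDepth(Enum):
    LOW = "low"    # objective, data-driven, one clear answer
    HIGH = "high"  # subjective, contextual, costly to get wrong

@dataclass
class Task:
    name: str
    depth: DecisionDepth

# A booking workflow broken into its smallest tasks, each tagged by depth.
workflow = [
    Task("parse travel dates from user message", DecisionDepth.LOW),
    Task("query flight availability", DecisionDepth.LOW),
    Task("recommend a seat for comfort and budget", DecisionDepth.HIGH),
    Task("confirm booking with the user", DecisionDepth.HIGH),
]

for task in workflow:
    handler = "automate via API" if task.depth is DecisionDepth.LOW else "LLM + human confirmation"
    print(f"{task.name} -> {handler}")
```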
Low-depth decisions are objective, data-driven tasks with a clear answer and a predictable pattern. For example, “Find flights from New York to Los Angeles next Friday at 10:00,” or “Check if the supermarket nearby is open tomorrow and whether it will rain on Saturday.” These can usually be handled with structured data calls. An LLM may still help by converting natural language to API inputs, reconciling messy queries, and summarizing results, but a direct API integration could be cheaper, faster, and easier to maintain. When stakes are low and your data is solid, go ahead and automate these fully.
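To illustrate the low-depth path, here is a minimal sketch. The llm_extract_params helper is a hypothetical stand-in for a structured-output LLM call, and search_flights is a stubbed integration; the point is that the LLM only translates the messy query, while a deterministic API answers it:

```python
import json

def llm_extract_params(query: str) -> dict:
    """Stand-in for an LLM call that turns natural language into API inputs.
    A real system would use a single structured-output prompt here."""
    # Hypothetical output for the example query below.
    return {"origin": "JFK", "dest": "LAX",
            "date": "2025-07-11", "depart_after": "10:00"}

def search_flights(params: dict) -> list:
    """Deterministic, structured data call: the flight API does the real work."""
    # Stubbed response; a real integration would call the provider's API.
    return [{"flight": "AA123", "depart": "10:05", "price_usd": 289}]

query = "Find flights from New York to Los Angeles next Friday at 10:00"
params = llm_extract_params(query)   # the LLM only reconciles the messy query
results = search_flights(params)     # the decision itself is data-driven
print(json.dumps(results, indent=2))
```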
High-depth decisions are subjective, require context, and have bigger consequences if you get them wrong. Take “Book the best flight seat for someone with back pain who is travelling with a toddler.” Here, comfort, seat layout, aisle access, budget, and even connection times matter. There is no single correct answer. In these cases, an LLM is most useful as a conversational assistant that gathers preferences, explains options, highlights what’s uncertain, and helps narrow down choices, while the user confirms the final action and a rules engine double-checks hard constraints.
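A sketch of that high-depth flow might look like the following, with a hypothetical propose_seats standing in for a conversational LLM that explains trade-offs, and a small rules check as the deterministic guardrail; the user, not the model, makes the final call:

```python
def propose_seats(preferences: dict) -> list:
    """Stand-in for an LLM that gathers preferences and explains trade-offs
    instead of deciding on its own."""
    return [
        {"seat": "12C", "why": "aisle access eases back pain; close to lavatory"},
        {"seat": "1A",  "why": "bulkhead legroom, but the window limits mobility"},
    ]

def rules_check(seat: str, preferences: dict) -> bool:
    """Deterministic guardrail: reject options that violate hard constraints."""
    exit_rows = {"14", "15"}  # illustrative exit-row numbers
    row = seat.rstrip("ABCDEF")
    if preferences.get("toddler") and row in exit_rows:
        return False  # passengers with infants may not sit in exit rows
    return True

preferences = {"back_pain": True, "toddler": True, "budget_usd": 400}
for option in propose_seats(preferences):
    print(f"{option['seat']}: {option['why']}")

choice = "12C"  # in a real flow the user picks; hard-coded for this sketch
if rules_check(choice, preferences):
    print(f"Seat {choice} passes the rules check; ask the user to confirm.")
```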
To sum up, start every AI project by breaking down the problem, tagging each task by decision depth, and matching the right method to each one. Use deterministic services or carefully structured LLM prompts for low-depth tasks; guide high-depth tasks through human-in-the-loop conversations and clear user confirmation. Capture feedback so the system can learn where users override suggestions, then move repeatable patterns closer to full automation over time. Use AI intentionally and transparently, and you will avoid wasted effort, reduce errors, and build trust that lasts.
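To make that feedback loop concrete, here is one possible way (purely illustrative names and thresholds) to log where users override suggestions and flag tasks whose patterns have become repeatable enough for fuller automation:

```python
from collections import Counter

# Hypothetical log of (task, what was suggested, what the user actually did).
feedback_log = [
    ("seat_recommendation", "12C", "12C"),
    ("seat_recommendation", "1A", "12C"),   # user overrode the suggestion
    ("flight_search", "AA123", "AA123"),
    ("flight_search", "AA123", "AA123"),
]

overrides = Counter()
totals = Counter()
for task, suggested, chosen in feedback_log:
    totals[task] += 1
    if suggested != chosen:
        overrides[task] += 1

for task, total in totals.items():
    rate = overrides[task] / total
    # A low override rate flags a repeatable pattern ready for fuller automation.
    verdict = "candidate for automation" if rate < 0.1 else "keep human in the loop"
    print(f"{task}: {rate:.0%} overridden -> {verdict}")
```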
The human ability to create entirely new mental images, stories, or scenarios not directly tied to data is fundamentally different from AI's pattern-based generation.
Humans sometimes reach decisions or insights without conscious reasoning, drawing on subtle, subconscious pattern recognition developed over years of experience. AI lacks a subconscious.
While AI can generate jokes, it does not “get” them or experience amusement. Humour is built on cultural context, timing, and personal perspective.
Humans contemplate meaning, mortality, and purpose in deeply personal, sometimes painful, sometimes uplifting ways. AI cannot worry about why it exists.
Humans weigh complex ethical principles, social contracts, and values that go beyond simple rule-following or pattern recognition. AI cannot truly grapple with moral dilemmas in a human way.
Beyond knowledge, wisdom involves life experience, judgment, perspective, and emotional maturity, something no algorithm can simulate.
Humans experience agency: the sense of choosing freely among alternatives. AI systems follow programmed or learned instructions, lacking volition.
Humans possess an internal narrative, a "sense of self" over time. AI has no subjective "I" that is aware of itself as a conscious being.
Hope, the feeling of expectation and desire for a certain thing to happen, is a forward-looking emotion tied to personal goals and a sense of self. An AI does not have personal desires or a subjective future to anticipate.
Despair, the complete loss or absence of hope, is a deeply painful human experience. An AI can process information about negative outcomes, but it does not feel the crushing weight and personal anguish of despair.
Nostalgia, a sentimental longing or wistful affection for the past, is tied to personal memories and the emotions associated with them. While an AI can access and process historical data, it does not have a personal past to feel nostalgic about.
Pride, derived from one's own achievements or qualities, is linked to self-esteem and a personal sense of identity. An AI does not have a self to feel proud of or a personal stake in its accomplishments.
Shame, the painful feeling of humiliation or distress caused by the consciousness of wrong or foolish behavior, is a complex social emotion. An AI can identify errors in its processes, but it does not experience the personal and social discomfort of shame.
Jealousy, often a mix of insecurity, fear, and anger over a perceived threat to a relationship or possession, is deeply rooted in personal attachments and a sense of ownership, which an AI lacks.
Awe, the feeling of reverential respect mixed with fear or wonder, often in response to something vast or sublime, is a subjective experience that transcends mere data processing. An AI can recognize the scale of the Grand Canyon, for example, but it cannot be moved by its beauty.
Grief, the deep sorrow experienced especially at someone's death, is a multifaceted and profoundly personal emotional response to loss. An AI can be programmed to provide supportive responses, but it does not undergo the internal, emotional process of grieving.
Spirituality, the subjective feeling of connection to something larger, whether through religion, meditation, or transcendence, is deeply personal and inaccessible to machines.
Humans learn through the physical body: sensations, movements, muscle memory, pain, and pleasure. AI has no body to integrate sensory-motor learning in a lived way.
Love, a complex emotion characterized by deep affection, attachment, and intimacy, is rooted in biological and social evolution. It involves a subjective, personal experience that an AI cannot replicate.
While an AI can be programmed to recognize and respond to signs of distress, it does not genuinely feel empathy or a desire to alleviate another's suffering. This is a profound emotional connection that is, for now, uniquely human.
The point where human and machine trajectories may converge, but never become one.