2024 will be remembered as the year artificial intelligence (AI) moved beyond the hype to demonstrate a wide range of practical applications.
From Apple’s groundbreaking entry into generative AI to quantum computing breakthroughs, from specialized AI agents to landmark regulations, the year marked the transition from theoretical possibilities to practical realities that changed how we work, create, and solve problems.
However, no achievement comes without obstacles. In this article, we explore the key AI breakthroughs of 2024 that left their mark on the history of AI development, along with the controversies they stirred.
Key Takeaways
- AI architecture saw dramatic advancement, with specialized agents replacing generic models.
- Hardware capabilities expanded exponentially through quantum computing and new chip designs.
- The EU AI Act established the first comprehensive AI governance.
- Search technology underwent fundamental transformation through AI-powered innovations.
- Privacy-first approaches gained prominence, setting new standards for responsible AI development.
Top 10 AI Developments of 2024
1. Apple Intelligence Launch
Apple’s entry into generative AI with Apple Intelligence was highly anticipated in 2024. While competitors rushed to market, Apple took its time crafting a privacy-focused framework that seamlessly integrated across devices. From enhanced Siri capabilities to Genmoji creation, Apple showed us how AI could enhance user experience while keeping data protection at its core.
However, the launch quickly revealed the challenges of balancing innovation with reliability.
Within weeks, Apple Intelligence generated several high-profile misinformation incidents, including false headlines wrongly attributed to news outlets such as BBC News. These errors prompted Reporters Without Borders to call for the feature’s removal, stating:
“This accident illustrates that generative AI services are still too immature to produce reliable information for the public and should not be allowed on the market for such uses.”
Major news organizations like the BBC then formally complained about false attributions threatening their credibility, which amplified the controversy.
Meanwhile, several technical challenges emerged. For example, many iPhone users couldn’t access the promised features. Apple also struggled to implement the system in China under local regulations, and the company’s characteristic silence on these issues only intensified the debate about AI’s readiness for mainstream news summarization.
2. Nvidia’s Blackwell Chip
If 2024 had a hardware hero, it was Nvidia’s Blackwell chip. It was a complete reimagining of what’s possible in AI processing architecture.
The numbers tell quite a story:
- Blackwell operates with 208 billion transistors
- Delivers up to 2.5 petaFLOPS of performance
- Networks over 100,000 chips together
But what really matters isn’t the specs – it’s what they enable. The chip’s ability to handle data center-scale generative AI workflows while consuming 25x less energy than its predecessor represents a quantum leap in efficiency.
However, this breakthrough didn’t come without drama.
Early production challenges led to what Nvidia’s CEO Jensen Huang candidly admitted was “100% Nvidia’s fault,” causing initial delays.
Overheating issues in server racks and supply constraints also left demand “well above supply,” showing what happens when new technology meets real-world implementation challenges.
Yet despite these hurdles, Blackwell’s impact rippled throughout the industry. Major tech companies like Microsoft (MSFT) and Meta (META) rushed to secure their share of these chips.
At the same time, data centers began retooling their infrastructure to accommodate the new technology’s cooling requirements. Due to the chip’s success, Nvidia’s market capitalization rocketed to over $3 trillion, cementing its position among tech’s elite.
3. Claude 3.5 Sonnet Development
When Anthropic launched Claude 3.5 Sonnet, we got a model that was not just faster or more accurate but fundamentally more thoughtful in how it approached problems, reasoning through them with a sophistication that made previous models feel primitive in comparison.
Claude 3.5 Sonnet cracked 64% of complex coding challenges in internal evaluations, leaving its predecessor’s 38% success rate in the dust.
But raw performance wasn’t the whole story.
Anthropic also released its Artifacts feature, which transformed how we collaborate with AI. It enabled real-time document generation and updates that felt more like working with a skilled colleague than a traditional chatbot.
Another big release was the experimental computer use capability, which allowed Claude to actually control desktop environments.
All of this showed us a future where AI could handle complex, multi-step tasks across applications.
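For developers, the most direct way to try the model was Anthropic’s Messages API. Below is a minimal sketch based on Anthropic’s published Python SDK; the model identifier is the one documented for the October 2024 release, and the prompt is purely illustrative.

```python
# Minimal sketch of calling Claude 3.5 Sonnet through Anthropic's Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # model ID documented for the October 2024 release
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Refactor this function and explain each change."}
    ],
)

# The reply is returned as a list of content blocks; text blocks hold the answer.
print(response.content[0].text)
```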
In his essay Machines of Loving Grace, Anthropic CEO Dario Amodei predicts that AI could deliver 50-100 years of biological progress within 5-10 years.
According to Amodei, “powerful AI,” by which he means AGI, will not only be equal to the human intellect but will be “smarter than a Nobel prize winner across most relevant fields,” including “biology, programming, math, engineering, writing, etc.”
4. EU AI Act Implementation
The EU AI Act implementation in 2024 marked a watershed moment – the world’s first comprehensive attempt to regulate AI development and deployment.
Think of it as GDPR for artificial intelligence but with broader implications for building and deploying AI systems.
The Act’s risk-based classification system fundamentally changed how we look at AI usage. Since AI systems were now classified into unacceptable, high-risk, limited-risk, or minimal-risk categories, companies were forced to rethink their entire approach to AI development. Suddenly, features that were once rushed to market required rigorous conformity assessments and documentation.
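As a rough orientation (not legal guidance), the tiered structure can be sketched as a simple mapping from risk category to the broad class of obligation each carries. The category names follow the Act; the code itself is purely illustrative.

```python
# Illustrative summary of the EU AI Act's risk tiers and the broad obligations
# attached to each. A simplification for orientation only, not legal advice.
RISK_TIERS = {
    "unacceptable": "Prohibited outright (e.g., social scoring by public authorities)",
    "high": "Conformity assessment, technical documentation, human oversight, logging",
    "limited": "Transparency obligations (e.g., telling users they are interacting with AI)",
    "minimal": "No additional obligations beyond existing law",
}

def obligations_for(tier: str) -> str:
    """Return the broad obligation class for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), "Unknown tier; classify the use case first")

print(obligations_for("high"))
```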
James White, chief technology officer at Calypso AI, told Techopedia:
“The impact of organizations operating in the EU will likely depend on how the company is using an AI model and what risk category that use case falls under, as identified by the Act.
“The categories — Prohibited, High Risk, and Low or No Risk — are described rather than defined and remain a bit fuzzy for cases on the edge. But this hierarchy is the core of the Act and dictates the level of regulatory scrutiny that will be applied and the compliance requirements that must be met.”
However, this regulatory framework also drew controversy over a potential loophole in Article 6(3), which critics warned could allow developers to exempt their own systems from high-risk obligations.
The Act also had broad exemptions for national security, which raised alarms about potential government overreach. And its impact on migrants and vulnerable populations sparked a heated debate about digital rights and surveillance.
The industry’s adaptation challenges were significant: with penalties ranging from €7.5 million to €35 million (or up to 7% of global turnover), companies had to begin aligning their AI development with the new requirements.
The regulation’s influence extended far beyond Europe’s borders, effectively setting global standards for AI development and forcing companies worldwide to reconsider their AI strategies.
5. OpenAI’s o1 Model
When OpenAI launched the o1 model in September 2024, it introduced a fundamentally new approach to AI reasoning. While previous models focused on quick responses, o1 introduced a “chain of thought” approach that allowed it to think through problems step-by-step before providing answers.
The model showed particular prowess in science, programming, and mathematics, performing at a PhD student level on challenging tasks.
OpenAI also introduced a “reasoning_effort” API parameter, which gives developers more control over how long the model thinks, while the full o1 model used 60% fewer reasoning tokens than its preview version.
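As a minimal sketch of how this looks in practice, the snippet below uses the `openai` Python SDK with an o-series model that accepts the parameter; the accepted values (“low”, “medium”, “high”) follow OpenAI’s API documentation, and the prompt is illustrative.

```python
# Minimal sketch of OpenAI's `reasoning_effort` parameter with an o-series model.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    reasoning_effort="high",  # trade latency and token spend for deeper reasoning
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(response.choices[0].message.content)
```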
This approach also came with significant trade-offs: the extended reasoning process made the model weaker at creative tasks and slower to respond. More concerning were its still-high hallucination rates, and debates emerged about whether such advanced reasoning capabilities could lead to unintended consequences or misaligned goals.
6. Google’s Gemini 2.0
Gemini 2.0 is Google’s bold reimagining of how AI should process our world. Powered by the massive Trillium infrastructure (a network of 100,000+ specialized chips), it introduced a unified approach to processing text, images, audio, and video that made previous multimodal attempts look primitive.
The technical architecture truly broke new ground. Rather than treating different data types as separate streams, Gemini 2.0 processed everything simultaneously through a unified embedding space, and its native image and audio generation capabilities set new standards for AI comprehension.
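A minimal sketch of that unified, multimodal interface, using Google’s `google-generativeai` Python SDK, looks like the following; the model identifier is the experimental Gemini 2.0 Flash ID from December 2024 and may have changed since, and the file name is a placeholder.

```python
# Minimal sketch of a multimodal request to Gemini 2.0 via the
# `google-generativeai` Python SDK. Text and image parts go in one call.
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash-exp")  # experimental ID from December 2024

image = Image.open("chart.png")  # placeholder file name
response = model.generate_content(["Summarize what this chart shows.", image])

print(response.text)
```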
Yet Google’s most interesting move wasn’t just technical – it was strategic. Introducing specialized agents like Jules for code development and Project Mariner for web navigation signaled a shift away from one-size-fits-all AI.
Google bet on specialized excellence while its competitors raced to build bigger models.
7. The Rise of Perplexity AI
Sometimes, the most significant breakthroughs come from unexpected places. Perplexity AI grew from a $520 million startup into a $9 billion powerhouse, and by reimagining how we interact with information, it fundamentally changed user expectations for search.
The growth numbers are impressive:
- From 4 million monthly users in late 2023 to 15 million by early 2024
- From 2.5 million daily queries to 20 million
This success also brought intense debate about content rights and attribution, issues that have become increasingly important with today’s AI systems.
Major publishers, including Forbes and News Corp, took legal action, alleging the “theft of a massive volume of copyrighted material.”
The controversy peaked with a legal battle between Perplexity and News Corp. Perplexity responded by introducing a revenue-sharing program with publishers, an attempt to balance innovation with content rights.
8. Agentic Workflows & AI Agents
If 2023 was about AI chatbots, 2024 marked the emergence of autonomous AI agents. The transformation was remarkable: Salesforce’s Agentforce 2.0, SAP’s Joule, CrewAI, and Google’s Project Astra showed us how AI could move beyond simple responses to actually completing complex tasks autonomously.
The enterprise world eagerly embraced this shift. Agentforce 2.0 demonstrated how AI could enhance reasoning and integration across CRM systems, while SAP’s decision to power Joule with open-source LLMs showed a new approach to customizable enterprise AI. These were digital colleagues capable of understanding context and executing multi-step workflows.
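To make the idea of a multi-step agentic workflow concrete, here is a minimal, framework-agnostic sketch. The `call_llm` stub and the two tools are hypothetical placeholders, not any vendor’s actual API; a real agent would call a live model and add validation, permissions, and guardrails around each step.

```python
# Framework-agnostic sketch of an agentic loop: the model picks an action,
# a tool runs it, and the result is fed back until the task is complete.
from typing import Callable

def search_crm(query: str) -> str:
    """Hypothetical tool: pretend to search a CRM."""
    return f"3 open tickets match '{query}'"

def send_email(to: str, body: str) -> str:
    """Hypothetical tool: pretend to queue an email."""
    return f"email queued to {to}"

TOOLS: dict[str, Callable[..., str]] = {"search_crm": search_crm, "send_email": send_email}

def call_llm(history: list) -> dict:
    """Stubbed model call: first requests a CRM search, then finishes.
    A real agent would send `history` to an LLM API here."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "search_crm", "args": {"query": "overdue renewals"}}
    return {"final": True, "content": "Found 3 open tickets; summary drafted."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(history)              # model chooses the next action
        if decision.get("final"):                 # task complete: return the answer
            return decision["content"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append({"role": "tool", "content": result})  # feed the result back
    return "stopped: step limit reached"

print(run_agent("Summarize overdue renewals for the sales team"))
```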
Because these systems can act autonomously with broad access to data and tools, their rise raised serious questions about control and safety.
As these agents became more capable, the line between assistance and automation grew increasingly blurry, forcing organizations to rethink their AI implementation and governance approach.
9. Google’s Willow Quantum Chip
Remember when quantum computing felt more like science fiction than reality? Google’s Willow chip changed that narrative entirely. With 105 connected superconducting qubits operating at temperatures just above absolute zero, Willow achieved what quantum researchers have been chasing for nearly three decades.
The technical achievements were mind-bending: performing five-minute calculations that would take today’s fastest supercomputers 10 septillion years to complete.
But what really set Willow apart was its breakthrough in error correction. Using larger error-correcting codes, the system could keep a single logical qubit stable for an hour – a vast improvement over previous setups that failed every few seconds.
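The underlying principle can be summarized with the standard below-threshold scaling relation for surface codes; this is a textbook relation rather than Google’s exact reported figures, with ε_d the logical error rate at code distance d and Λ the error-suppression factor gained each time the distance grows by two.

```latex
% Below-threshold scaling for surface-code error correction (textbook form):
% when Lambda > 1, growing the code distance d suppresses errors geometrically.
\varepsilon_d \approx \frac{\varepsilon_0}{\Lambda^{(d+1)/2}}, \qquad \Lambda > 1
```

Willow’s headline result was demonstrating this regime in hardware: adding more physical qubits made the logical qubit more reliable rather than noisier.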
Yet the path to practical quantum computing remains challenging. While Willow shows immense promise, practical industrial applications will still require systems with millions of qubits.
The extreme cooling requirements and the difficulty of maintaining quantum states pose serious obstacles to scaling.
10. Google’s Veo 2
When Google DeepMind introduced Veo 2 in December 2024, it raised the bar for AI video generation.
While competitors were still struggling with basic animations, Veo 2 created 4K resolution videos exceeding two minutes, complete with sophisticated camera techniques and cinematic effects.
The technical achievements were impressive: Veo 2 showed improved physics modeling, more nuanced human expressions, and better handling of motion and lighting.
In head-to-head comparisons with other leading models, human raters consistently ranked Veo 2’s outputs as more realistic and closer to their intended prompts.
However, Google’s cautious rollout showed the challenges of deploying such powerful AI video technology. Access remained limited to US users over 18 through the experimental VideoFX tool, which initially constrained outputs to 720p resolution and 8-second clips. Every generated frame from Veo 2 also included SynthID’s invisible watermark, an increasingly important safeguard for identifying synthetic media.
The limitations were equally telling. Despite its impressive capabilities, Veo 2 struggled with complex scenes and fast motion sequences.
Combined with Google’s measured approach to expansion, these limitations show what it takes to balance technological innovation with responsible deployment.
The Bottom Line
The AI developments of 2024 were among the most consequential to date.
While Apple and Google focused on consumer AI, and Nvidia offered the hardware that powers it all, the real story was the shift from generic AI to specialized, thoughtful implementations.
The introduction of the EU AI Act and the controversies faced by companies like Apple and Perplexity showed us that innovation must be balanced with responsibility.
As quantum computing edged closer to practicality and AI agents became more autonomous, we learned that the future isn’t just about building more powerful AI – it’s about building more reliable, specialized, and accountable AI.
References
- Apple Intelligence on iPhone in 5 minutes (YouTube)
- Apple Intelligence (Apple)
- Apple urged to scrap AI feature after it creates false headline (BBC)
- RSF urges Apple to remove its new generative AI feature after it wrongly attributes false information to the BBC, threatening reliable journalism (RSF)
- Blackwell Architecture for Generative AI (NVIDIA)
- Nvidia’s design flaw with Blackwell AI chips now fixed, CEO says (Reuters)
- Claude | Computer use for coding (YouTube)
- Claude 3.5 Sonnet (Anthropic)
- Machines of Loving Grace (Dario Amodei)
- EU Artificial Intelligence Act: Up-to-date developments and analyses of the EU AI Act (ArtificialIntelligenceAct.eu)
- EU legislators must close dangerous loophole in AI Act (Amnesty International)
- OpenAI o1 Hub (OpenAI)
- Project Mariner: Solving complex tasks with an AI agent in the Chrome browser (YouTube)
- Gemini (Google DeepMind)
- Project Mariner (Google DeepMind)
- Perplexity AI Triples Valuation to $9B With Latest Funding: Report (Business Insider)
- United States District Court, Southern District of New York (CourtListener)
- Agentforce: Create Powerful AI Agents (Salesforce)
- The AI Copilot Joule (SAP)
- CrewAI (CrewAI)
- Project Astra (Google DeepMind)
- Meet Willow, our state-of-the-art quantum chip (Google Blog)
- Veo 2 (Google DeepMind)