Trust us, if we tried to create a full rundown of all the AI news since January 2025, this wouldn’t be a list — it would be a book.
We’ve lived a lifetime of AI news as the industry advances at breakneck pace. To whittle it down, we’ve focused on the major policies, features, and official announcements from the companies shaping the generative AI era.
So, let’s dive into the biggest AI announcements of the year (so far).
The top new models of 2025
The top AI companies are locked in an arms race, and we’re getting major new models on a near-monthly basis. New models released in 2025 include:
The Stargate Project’s $500 billion infrastructure plan
Two days after he was inaugurated, President Donald Trump underscored his administration’s focus on AI innovation with a massive infrastructure project. The Stargate Project is a $500 billion venture led by OpenAI and SoftBank, along with Microsoft, Nvidia, and Oracle to build AI supercomputers in the United States.
Not everyone was optimistic about the $500 billion investment, though. “They don’t have the money,” posted Elon Musk, an OpenAI co-founder who is suing the company for attempting to change its corporate structure. (More on that later.)
DeepSeek R1 made its mark on the AI industry
While the U.S. announced plans to pour hundreds of billions of dollars into AI infrastructure, a Chinese company called DeepSeek claimed to have built its R1 model for a mere $6 million. The true hardware cost is estimated to be much more (possibly over $500 million), since DeepSeek only reported the rental price of its Nvidia GPUs. But the fact that DeepSeek was able to create a reasoning model as good as OpenAI’s models, despite restricted access to GPUs, was enough to shock the AI industry.
Tech stocks took a hit, and Trump declared the moment a “wake-up call” for U.S. tech companies, as the Chinese competitor set a new precedent for the global AI arms race.
Trump’s executive order puts AI education in K-12 schools
Promoting AI innovation has been a major theme of the Trump presidency. And in April, Trump made AI education in schools an official priority with an executive order. The order directs federal agencies to promote AI literacy and proficiency in K-12 schools and to support upskilling programs for educators and relevant professionals.
The executive order aims to equip future generations with the skills needed for an increasingly AI-centric world. Meanwhile, schools are struggling to navigate the use of AI tools like ChatGPT in the classroom, which has led to a rampant cheating problem. That’s all to say, AI’s ability to boost productivity and give the U.S. a competitive edge, while also hindering learning and critical thinking, is a tricky dichotomy that has taken root in the education system.
OpenAI’s corporate structure U-turn
OpenAI was a capped-profit company governed by a nonprofit board. Then it tried to convert to a fully for-profit corporation, which raised alarms among AI leaders like Geoffrey Hinton and former OpenAI employees, who warned of the consequences in an open letter.
The proposed restructuring “would eliminate essential safeguards,” they explained, “effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritize shareholder returns.”
Ultimately, OpenAI reversed course… kind of. The ChatGPT maker announced in May that it would remain governed by a nonprofit board but would convert its for-profit subsidiary into a Public Benefit Corporation (PBC), a for-profit structure that legally requires the company to “consider the interests of both shareholders and the mission,” the announcement said.
However, the new plan was criticized by the same group and others, who said the revised structure still allows OpenAI to put profit before its altruistic mission, since the nonprofit board would now become a shareholder with a vested interest in the company’s success.
Pope Leo XIV has much to say about AI’s impact on humanity
Days after Pope Leo XIV was chosen as the leader of the Catholic Church, he called out the AI industry. In his first address to the College of Cardinals, the new pope spoke about “developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor,” conveying a powerful message about his priorities. His name choice even pays tribute to a previous pope, Leo XIII, who advocated for social justice and labor reform during the Industrial Revolution.
Pope Leo XIV has continued to talk about AI’s harms. “It must not be forgotten that artificial intelligence functions as a tool for the good of human beings – not to diminish them, not to replace them,” he said during a June conference on AI governance and ethics in Rome. Tech and religion don’t always coincide, but Leo XIV has made it clear that AI’s impact is a spiritual issue, too.
The AI copyright report made a “pre-publication” impact
One day after the U.S. Copyright Office released a “pre-publication version” of its highly anticipated report on the use of copyrighted works for training AI models, its director, Shira Perlmutter, was fired by President Trump. Perlmutter’s abrupt dismissal immediately prompted speculation, with people wondering whether she knew she was getting fired and rushed out a version of the report, whether she was fired because she published it, or whether the two events were entirely unrelated.
We don’t know what happened, but what’s clear is that the report was generally favorable to copyright holders. “[M]aking commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” the report said. That conclusion doesn’t match the wishes of tech companies like Meta and OpenAI, which have been lobbying hard for AI model training to be universally considered fair use.
The AI deepfake porn bill became federal law
AI deepfake porn is now a federal crime. The Take It Down Act was signed into law on May 19, making it a criminal act to publish, or threaten to publish, nonconsensual intimate imagery (NCII), a category that increasingly includes AI-generated deepfakes. The bill moved through Congress quickly, with bipartisan support. The widespread availability of generative AI has made the creation of deepfakes for nefarious purposes disturbingly easy, which eventually caught lawmakers’ attention.
But digital rights groups criticized the bill as overly broad and prone to false positives. “Services will rely on automated filters, which are infamously blunt tools. They frequently flag legal content, from fair-use commentary to news reporting,” said the Electronic Frontier Foundation (EFF), which added that the bill may have good intentions but shouldn’t “invent new takedown regimes that are ripe for abuse.”
Google’s AI Mode marks a new era of search
The evolution of Google Search from a list of blue links to an AI-powered search engine has been in the making for a while now. But at this year’s Google I/O, the tech giant made it official with the public launch of AI Mode. Google’s new search tool is a chatbot interface that’s marketed as an alternative to the traditional search homepage (now teeming with AI-generated overviews and summaries of related queries).
Google dominates the search engine market, so its introduction of AI Mode represents a fundamental shift in the way people find information online. Users were already turning to ChatGPT and the AI search engine Perplexity, even as the quality of Google’s search results got worse. Google’s solution was to lean into AI-powered search features and compete more directly, despite known hallucination issues and the risk of alienating publishers, who say the new AI search features are tanking their traffic.
OpenAI and Jony Ive team up to build an AI companion
The future of AI is screenless, according to Sam Altman and Jony Ive. In May, OpenAI announced the acquisition of Jony Ive’s company and plans to develop an AI device together. OpenAI will try to succeed where others have failed: creating a device that moves beyond phone and computer screens, experiences the world as you do, and becomes the ultimate AI companion.
Details are still scant, but a leaked recording of an internal meeting describes it as a “third core device a person would put on a desk after a MacBook Pro and an iPhone.” More recently, all mention of Jony Ive’s startup io was scrubbed from the OpenAI site after a trademark lawsuit was filed by iyO, a maker of AI-powered earbuds. But OpenAI says the partnership is still on.
Mark Zuckerberg goes on a shopping spree
Recently, the New York Times reported that Meta CEO Mark Zuckerberg is offering contracts worth up to $100 million to poach key talent from OpenAI and other competitors. Per the Times, Zuckerberg is chasing “godlike technology” and superintelligent AI. The Facebook founder knows that Meta lags behind its rivals in the AI race, and he’s determined to build an AI supergroup.
Disney enters the AI copyright battle
Many of the copyright lawsuits against AI companies have been filed by journalists and artists. Recently, both Meta and Anthropic won copyright suits brought by authors. But this summer, a new and fearsome combatant entered the AI copyright legal battle: the House of Mouse. Disney has sued AI image generator Midjourney in one of dozens of lawsuits focused on AI and copyright law. The Disney suit calls Midjourney a “bottomless pit of plagiarism.”
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.