An ideologically wide range of news outlets now stand to make some money off Meta’s obsession with AI. CNN, Fox News, USA Today, The Daily Caller, People, Le Monde, and others have signed on to bring “real-time content on Meta AI.”
Partnering means paying: Meta plans to compensate those publishers an undisclosed amount, Axios media reporter Sara Fischer confirms. It’s the latest in a series of moves by the operators of AI services to pay sites for access to their content.
A tracker of AI deals maintained by Columbia Journalism School’s Tow Center for Digital Journalism lists 128 such arrangements between AI operators and news publishers since July 2023, including such high-profile tie-ups as OpenAI’s deal with the Financial Times and Perplexity paying the Washington Post, the Los Angeles Times, and other publishers for inclusion in its Comet browser’s premium service.
(Tow’s tracker also counts 21 lawsuits filed by publishers against AI providers in that time, including the lawsuit PCMag’s parent company Ziff Davis filed against OpenAI in April 2025 alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
But all of these deals, plus similar ones with non-news sites like the content-licensing contracts Google and OpenAI inked with Reddit in 2024, have one unfortunate thing in common: They leave out smaller sites that can’t afford lawyers to negotiate with the likes of Google and Meta.
And small and large sites seem equally exposed to the risk that AI-enhanced search results will give web users enough information that they never click through to the source. In a study published this summer, the Pew Research Center found that Google’s AI Overviews cut the clickthrough rate among survey respondents from 15% to 8%.
Google has repeatedly said that it’s not seeing an overall drop in clickthrough traffic and that AI Overviews send sites slightly more “high-quality” clicks, meaning ones that result in more time spent at the site. It has yet to publish numbers documenting that second claim.
Court rulings have not yielded a legal consensus about how much an AI platform should be able to reuse the work of humans.
In February, one federal judge ruled that a now-defunct AI startup infringed Thomson Reuters’ copyrights when it leveraged content from that firm’s Westlaw reference to create a competing service. In June, another ruled that Anthropic buying books and scanning them to train its Claude AI platform met fair-use criteria, but Anthropic downloading copies of books from a trove of pirated works did not.
The crawlers that read sites to provide data for training AI models can also impose bandwidth costs on those sites. In April, Wikipedia warned that an onslaught of these AI bots—largely “automated programs that scrape the Wikimedia Commons image catalog of openly licensed images to feed images to AI models”—was eating into its server costs and capacity.
And the automated results of all this AI crawling and scraping can wind up harming both online creators and their former readers. A Nov. 25 Bloomberg story recounted how AI summaries of recipes often leave readers with incorrect instructions while doing enough damage to the traffic of food bloggers that one lamented that “I’m going to have to find something else to do.”
Breaking the Fundamental Business Model of the Internet
In July, the internet-services company Cloudflare, which already lets sites using its services (even the free tier) block AI-crawler bots, announced a new “pay per crawl” feature, which lets site owners grant access only to AI crawlers whose operators pay for that access.
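Cloudflare’s announcement framed pay per crawl around the long-reserved HTTP 402 “Payment Required” status code. As a rough illustration of the crawler-side logic only, here is a minimal sketch; the “crawler-price” header name and the decision rules are hypothetical stand-ins, not Cloudflare’s actual private-beta API:

```python
def decide_crawl_action(status: int, headers: dict, max_price_usd: float) -> str:
    """Decide how a paying AI crawler might respond to a pay-per-crawl reply.

    Returns "use" (content delivered), "pay-and-retry" (quoted price is
    acceptable), or "skip" (no quote, or too expensive). The
    "crawler-price" header name is illustrative, not Cloudflare's API.
    """
    if status == 200:
        return "use"  # site served the page; no payment gate hit
    if status == 402:  # HTTP 402 Payment Required: site wants payment
        quoted = headers.get("crawler-price")
        if quoted is not None and float(quoted) <= max_price_usd:
            return "pay-and-retry"
        return "skip"
    return "skip"  # any other refusal (403, etc.): move on
```

The point of the sketch is the handshake shape: the site, not the crawler, names a price, and a well-behaved bot either pays it or walks away.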
In a panel at the Web Summit conference in Lisbon in November, Cloudflare CEO Matthew Prince called it a badly needed response to an existential threat to the internet we’ve known. “If these new AI tools aren’t generating traffic, then the fundamental business model of the internet is going to break down,” Prince told his onstage interviewer, Fortune executive editor Jim Edwards, who replied that Fortune has seen AI do just that: “It’s reducing readership, certainly, it’s making revenue harder.”
Prince, however, said he’d seen a recognition among most AI developers that they can’t only take: “When we have conversations with the AI companies, with one very notable exception, they are all saying we have to pay for this content.”
You can probably guess the exception.
Calling this one company both “the great patron of the internet for the last 27 years” and “the great villain of the internet today,” Prince said Google makes it impossible for sites to permit its essential web indexing but block its AI crawling using standard robots.txt files, because the same bot does both tasks.
“They need to play by the same rules as everyone else and split their crawler so that search and AI are two separate things,” he said.
Prince then suggested that Google was open to that idea: “I guarantee you that immediately after I get offstage, I will be having this conversation with senior Google executives.”
Google declined to provide a comment on Prince’s talk. The company does allow site owners to block Google from using their content to train its Gemini AI platform, but that does not affect AI Overviews. A separate “nosnippet” option blocks Google from displaying a brief text preview of a page’s content, but it affects both Google’s traditional search results and its AI Overviews.
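For reference, Google documents a robots.txt user-agent token, Google-Extended, for opting out of Gemini training, and OpenAI documents GPTBot for its crawler. A sketch of what a site owner can and cannot express today illustrates the limitation Prince describes, since no token separates Googlebot’s search crawl from its AI Overviews use:

```
# Opt out of Gemini model training (Google-documented token);
# this does not affect AI Overviews, which draw on the main search crawl
User-agent: Google-Extended
Disallow: /

# Block OpenAI's training crawler (OpenAI-documented token)
User-agent: GPTBot
Disallow: /

# Googlebot handles both search indexing and AI Overviews, so no
# robots.txt rule can permit one while refusing the other
User-agent: Googlebot
Allow: /
```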
Cloudflare did not name any AI companies now making payments to site owners via Pay Per Crawl, citing this feature’s private-beta status.
An executive with a trade group for small online newsrooms couldn’t offer any details about member uptake of this option.
“I do not know—and can’t get clarity on—which if any are using the anti-crawling tool,” emailed Chris Krewson, executive director of LION Publishers (the abbreviation is short for “local independent online news”). He did note that Cloudflare had tried to sell LION on adopting it, which he took as evidence of limited early adoption.
Another possibility for smaller sites and solo creators could be the Really Simple Licensing standard now backed by a coalition of larger online properties including Reddit, Yahoo, and Ziff Davis, which would let sites post terms for AI use of their content—and which could work with Cloudflare’s AI bot blocking or a similar screen acting as an enforcer.
Toward the end of his Web Summit panel, Prince suggested that even AI developers wary of being leapfrogged by rivals should welcome being required to pay for access, because that could let them stand out by buying better content.
“What’s going to differentiate them?” he asked and then shared his own answer: “Do they have access to original and unique content?”
About Our Expert

Rob Pegoraro writes about interesting problems and possibilities in computers, gadgets, apps, services, telecom, and other things that beep or blink. He’s covered such developments as the evolution of the cell phone from 1G to 5G, the fall and rise of Apple, Google’s growth from obscure Yahoo rival to verb status, and the transformation of social media from CompuServe forums to Facebook’s billions of users. Pegoraro has met most of the founders of the internet and once received a single-word email reply from Steve Jobs.
