The Interface Wars: OpenAI takes on Google

As the fight for dominance between AI-based web browsers and traditional search engines heats up, we examine the issues and implications.

Written by James Richards | Edited by Peter Franks

For two decades, Google defined how we see the internet. But as OpenAI launches a browser that doesn’t just index the web but interprets it, a new struggle for the digital interface begins. What happens when search becomes conversation, and discovery happens inside a machine’s mind rather than the open web?

In the beginning, there was the box. A simple white rectangle, centred on a plain page, inviting the world to ask questions. When Google appeared in the late 1990s, it offered something that felt almost civic, an organising principle for the chaos of the early web. Its PageRank algorithm turned curiosity itself into a commodity. Within a decade, the act of searching had become synonymous with knowing.

Google’s genius was to build the nervous system of the modern internet. By crawling billions of pages and ranking them by relevance, it transformed information into infrastructure. Every click fed a data loop; every query became a signal. The company monetised those signals with advertising so precise that it made traditional media look blindfolded. Search became not only a public utility but a trillion-dollar industry.
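
The ranking idea behind this is worth pausing on. PageRank treats a link as a vote, and a vote from an important page counts for more; the sketch below is an illustrative power-iteration toy over an invented four-page graph, not Google’s production algorithm.

```python
def pagerank(links, damping=0.85, iterations=100):
    """Toy PageRank. links maps each page to the pages it links to."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # Every page keeps a baseline share (the 'random surfer' jump)...
        new = {p: (1 - damping) / n for p in pages}
        # ...and passes the rest of its rank along its outgoing links.
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:  # dangling page: redistribute its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

# Hypothetical mini-web: three pages all link to "c", so "c" ends up on top.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
```

Run on this graph, the most linked-to page accumulates the highest score, which is the whole point: relevance inferred from the link structure itself rather than from the page’s own claims.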

That success came with a paradox. Google’s interface felt open in the sense that anyone could search anything, yet the results page soon became its own gatekeeper. The first few links drew almost all attention: page one became a prize, page two oblivion. For publishers, referral traffic from Google was oxygen. For Google, those publishers were an expendable ecosystem that kept the ad machine humming.

Now, with its Gemini model fused into Search, the familiar results page is under strain. Rather than simply pointing users to websites, Google increasingly supplies answers itself. What once functioned as a gateway now risks becoming a wall. The irony is unmistakable: the company that once broke monopolies by opening up the web may be building one by enclosing it.

The Challenger Arrives

OpenAI’s move into browsing might sound incremental, but the ambition is existential. Its Atlas browser, released in October 2025, doesn’t merely point to the web - it reads it. Ask a question, and the system searches, synthesises and interprets in a single act. What appears on screen is not a list of links but a reasoned response: part essay, part briefing note.

Users describe the experience as talking to a researcher. Rather than typing discrete queries, they converse by asking follow-ups, probing contradictions, refining tone. The rhythm of information gathering becomes iterative, like a loop of dialogue rather than a sequence of searches.

In comparison, Google’s Gemini interface feels like a retrofit: AI grafted onto a legacy engine. OpenAI’s browser feels native to the new medium. Where Google offers answers, OpenAI offers understanding. The distinction is subtle yet profound: it shifts the centre of gravity from retrieval to reasoning.

This is not just a battle of technology stacks. It is a fight over the interface itself, the space where human intention meets machine cognition. And it raises an unsettling question: if knowledge can be interpreted rather than indexed, who decides what that interpretation means?

AI overviews and browsers pose a direct challenge to the Google search-based internet.

The Conversation Paradox

At first glance, conversational AI looks like the great leveller. Type ‘causes of global warming’ into Google and you will meet a phalanx of SEO specialists. Ask the same question of ChatGPT, and you might get a nuanced synthesis that cites academic papers, NGO reports and obscure climate blogs. It feels more inclusive, like a chorus of hidden voices given form.

Yet this apparent plurality hides a paradox. The user receives one answer. The web’s cacophony is distilled into a single paragraph, coherent and convenient but shorn of texture. The hyperlinks that once allowed users to cross-reference claims or explore dissenting views are often replaced by citations folded invisibly into prose.

It is almost like a librarian who has read the entire collection explaining a concept to you without showing you the shelves.

Some see this as the inevitable trade-off of abundance. When information becomes infinite, curation becomes essential. But the risk is epistemic: a single interpretive layer now mediates between reader and source. We no longer browse; we are briefed.

From Walled Garden to Curated Web

Much commentary describes this shift as the birth of a new walled garden. The analogy feels only half-right. What OpenAI is building is less an enclosure than a curated web, a space filtered not by popularity but by contextual relevance.

Google’s internet has long been a vast department store with a single gleaming shopfront. In theory, there is a warehouse beneath containing everything. But in practice, most customers see only the top shelf. Visibility is purchased through backlinks and search-engine optimisation. Relevance is defined by scale.

Conversational AI changes that dynamic. Because users engage in multi-step exchanges, the system can surface specialist material as context evolves. A small travel firm offering bespoke Turkish itineraries may emerge naturally within a dialogue where Google would have buried it beneath Booking.com and Expedia’s corporate listings. To develop the metaphor of the librarian, LLMs almost act like discerning advisors or connoisseurs, capable of recommending the esoteric amid the obvious.

Yet curation cuts both ways. With Google, the act of clicking through to a site allowed us to judge its credibility. With a language model, that evaluation happens inside the black box. When we ask for news, analysis or explanation, we no longer see the raw pages, only the distillation. The result may be more personalised, but it also becomes harder to audit.

Good curation is certainly desirable in a world drowning in content. But when one company’s model becomes the primary curator of knowledge, the line between helpful synthesis and subtle control begins to blur.


The Economic Fallout

In one sense, Google’s pivot towards AI summaries is puzzling, almost self-defeating: why introduce summaries that divert users from the very ad-supported web on which its empire is built? One answer is strategic survival. Faced with the rapid rise of conversational AI and assistants such as ChatGPT, Google may be repositioning itself before the interface slips away. In an official blog post from May 2024, Google’s VP of Search, Elizabeth Reid, said: “With our custom Gemini model people use Search more, and are more satisfied with their results”. This suggests that in Google’s current logic, weaker ad clicks today may be offset by stronger user retention tomorrow.

The move also defends relevance. If people are going to get their answers from an AI, Google would rather that AI live inside its search than outside it: better to cannibalise yourself than be eaten whole. The strategy trades short-term revenue for long-term control of the discovery layer. The danger, of course, is that in protecting the interface, Google undermines the ecosystem that made it valuable in the first place.

Falling Ad Revenues

The first casualties of this new interface war are already visible. In a membership survey published in August by the trade organisation Digital Content Next (DCN), premium publishers reported median year-on-year referral traffic from Google Search down 10 percent or more in just eight weeks, with non-news brands down 14 percent and news brands down 7 percent. Some specialist publishers in the survey reported declines of up to 50 percent following the introduction of AI summaries. This tallies with a recent Pew Research Center report, which found that web users are less likely to click through to the source page when an AI summary appears in search results. For newsrooms that depend on click-through advertising, this is an existential threat: why should anyone visit the Financial Times if Google can sum up its reporting in a single sentence?

Advertising networks are also feeling the squeeze. The old economy of impressions (eyeballs converted to ad spend) does not map neatly onto conversational AI. There are no banner ads in a chat window. Licensing data to train the models offers a temporary revenue stream, but it is dwarfed by the losses in traffic and brand visibility. Search engines themselves face an existential reckoning too: if users get satisfactory answers from an AI interlocutor, why would they return to a results page at all?

Not everyone loses, however. Small and medium-sized enterprises may, paradoxically, fare better. Because large language models integrate user sentiment and product quality into their reasoning, they are less beholden to pure scale. A well-reviewed niche brand could surface more easily in an AI recommendation than in Google’s SEO arms race. For the first time in years, authenticity might trump advertising budget.

The real pressure falls on incumbents, the mid-tier e-commerce giants that thrived on keyword dominance but offered little differentiation. As AI begins to synthesise product comparisons, those undistinguished middlemen may simply disappear from view.


Referral traffic decline after AI summaries

Eight-week snapshot (May-June 2025). Values show change in referral clicks.

Category 8-week change
Premium publishers −10%
Non-news brands −14%
News brands −7%
Specialist publishers −50%

Source: DCN

Categories are based on membership lists compiled by DCN. ‘Premium publishers’ denotes major subscription-oriented editorial brands (e.g. FT, NYT). ‘News brands’ refers to high-reach general-news outlets (e.g. AP, NPR, Washington Post). ‘Non-news brands’ are service- or lifestyle-led content providers (e.g. WebMD, AARP). ‘Specialist publishers’ cover niche or regional expert outlets (e.g. Texas Tribune, Voice of San Diego). Classification is illustrative: DCN does not publicly map each member into these categories.

Regulating the truth?

For regulators, the challenge becomes more abstract. Antitrust frameworks were built to police markets, yet here the commodity is information itself. When discovery, interpretation and delivery are vertically integrated within a single model, what exactly is there to regulate: the data, the algorithm, or the interface?

The point is that when models such as Atlas or Gemini become the default gateway to knowledge, their design decisions - how they weigh evidence, which sources they cite, how they handle bias - take on regulatory significance. Traditional competition law assumes that consumers can see the product they are buying and that rivals can enter the market. In the age of AI browsers, both assumptions begin to collapse. The user no longer encounters competing suppliers of information; they encounter a single synthesis. The model’s internal decisions are opaque, proprietary and constantly changing.

Europe has tried to get ahead of this problem with legislation such as the Digital Markets Act and the AI Act, which demand greater transparency in algorithmic decision-making. The US, by contrast, still treats generative AI as a largely unregulated innovation space, relying on antitrust suits against dominant platforms rather than defining what digital monopoly power now means. China takes a different route, regulating content more than competition in the interests of political stability rather than market openness.

None of these models yet fits the new reality. The risk is not simply monopoly in the economic sense but something closer to epistemic capture: a world in which one or two companies mediate what billions of people can know. For policymakers, the battle for fair competition may soon become indistinguishable from the battle for objective truth.

In a world where AI models mediate access to knowledge, the open web becomes a filtered doorway.

The Death and Rebirth of SEO

For all the talk of antitrust and ad revenues, the real upheaval may come in how visibility itself is earned. For two decades, search-engine optimisation was the hidden grammar of the web. It shaped how brands wrote, designed and even thought. To be seen was to be ranked; to be ranked was to obey Google’s algorithmic commandments. Entire industries emerged to decode them. ‘Content strategy’ often meant gaming the system politely.

But if search is no longer a list of links, what becomes of SEO? In a world where users talk to an AI browser instead of typing keywords, there is no page one. There is only one answer, or perhaps one conversation. Traditional optimisation tactics, from backlinks to keyword density, lose some of their gravitational pull.

However, most large language models, including those behind OpenAI’s Atlas, rely on retrieval-augmented generation (RAG) - a technique that allows the model to query the live web or connected databases for up-to-date information. In other words, the machine still ‘searches’, but its retrieval process is embedded within the act of generation.
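
That retrieve-then-generate loop can be caricatured in a few lines. Everything below is a stand-in: the mini-corpus is invented, retrieval is crude keyword overlap rather than neural embeddings, and ‘generation’ is a template instead of an LLM - but the shape of the pipeline is the one RAG systems share.

```python
DOCS = {  # hypothetical mini-corpus standing in for the live web
    "turkey-travel": "Bespoke Turkish itineraries from a small Istanbul firm.",
    "glaze-chem": "A ceramicist's notes on glaze chemistry and kiln temperature.",
    "inflation": "Central banks raise interest rates to curb inflation.",
}

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query; return the top k texts."""
    q = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def generate(query, docs):
    """'Generate' an answer grounded in the retrieved passages (a template
    here; a real system would hand the context to an LLM)."""
    context = " ".join(retrieve(query, docs))
    return f"Q: {query}\nGrounded on: {context}"

answer = generate("why do banks raise interest rates", DOCS)
```

The point of the sketch is the ordering: the system fetches fresh material first and only then composes a response around it, which is why up-to-date pages can still influence what the model says.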

Therefore, reputable backlinks, consistent brand presence, and credible online proof remain crucial signals for these systems. Arguably, SEO is not dying; it is evolving into AI optimisation - a discipline focused on making information discoverable and trustworthy to both humans and machines.

Atlas and similar AI browsers do not reward visibility in quite the same way. They prize authority, coherence and helpfulness as interpreted by their models, not by Google’s index. What matters is whether the information feeds the AI’s reasoning loop. A brand’s content might still influence responses, but indirectly - through training data, verified APIs or retrieval plug-ins - rather than through meta-tags and keyword engineering.

That marks a philosophical shift from optimising for discovery to optimising for inclusion. The question is no longer “How do we rank higher?” but “How do we become part of the model’s understanding of our field?”


Towards Language-Model Optimisation

Some agencies have begun speaking of language-model optimisation (LMO), an emerging discipline aimed at ensuring that brand information is accessible to AI systems, cited accurately in summaries, and reinforced through high-trust sources. Instead of building backlinks for their own sake, marketers may cultivate semantic relevance: producing material that models recognise as consistent, expert and widely corroborated.

Backlinks still matter - not for their technical mechanics, but for what they signify. A mention or feature in a trusted publication such as The Times carries reputational weight that a crawler or model can detect, even without a “dofollow” link. In the AI era, what matters most isn’t whether a reputable source links to you, but whether it talks about you in a credible context. The association itself becomes the signal of authority.

Attribution, too, will not vanish overnight. Conversion (the measure of how many visitors become customers) remains a core marketing metric. Large language models are already driving traffic via cited sources and retrieval outputs; the challenge is measuring those flows. Click-through rates and bounce times may fade as indicators, but engagement and conversion will endure as the key proofs of effectiveness.

Paradoxically, the evolution of SEO may bring it closer to its original purpose. Stripped of the mechanical theatre of link farms and keyword inflation, visibility will depend once again on producing genuinely useful material. AI systems are trained to value clarity, credibility and depth. In that sense, the age of Atlas may finally reward what the web always claimed to prize: quality over quantity.

The New Map of Knowledge

With the introduction of AI-enabled browsers, both Google and OpenAI are reshaping how information is organised and understood. What used to be called a search result is becoming something else entirely.

Google still works by ranking content: authority emerges from popularity, using the logic of the hyperlink. OpenAI works by generating interpretations: authority comes from probability, using the logic of the model.

In Google’s world, knowledge lives in visible networks of sources. In OpenAI’s world, it lives inside the weighted connections of a neural system trained on billions of words. The two approaches overlap, but they are philosophically distinct. One displays its sources; the other absorbs them.

This makes authority harder to check. A model’s answer may be coherent, but the reasoning behind it is not always clear. When an AI explains inflation or climate policy, it is not pointing to a chain of links. It is producing a synthesis built from statistical patterns the user cannot see. The chain of custody for the underlying knowledge becomes obscured.

For users, this is a profound shift. Context becomes something manufactured rather than something explored. Knowledge arrives pre-packaged as an experience you are given, not a set of sources you navigate.

Google draws on knowledge from authoritative sources; AI creates a response
drawn from the sum of its training data (supplemented by RAG), based on probability.

The Future of Discovery

If conversation replaces search, will we still browse? Or will browsing become something only the machines do on our behalf?

The likely answer is hybrid. LLMs will handle the first sweep, the reconnaissance phase, while humans dip into the details when curiosity strikes. Yet the logic of convenience is hard to resist. The more capable the interface, the less incentive there is to step outside it.

For small creators, the new regime could be unexpectedly liberating. Training data and retrieval plug-ins can surface niche expertise that SEO never rewarded. A ceramicist’s blog post about glaze chemistry, once buried on page 17 of Google, might now inform a model’s synthesis seen by millions. Visibility will come not through linking but through learning.

The open web, however, risks receding into the background. Once data is absorbed into model weights, it becomes part of the informational substrate, which is mined but rarely visited. Future generations may encounter the web as a kind of living archive, accessible only through the mediation of AI.

Then comes the question of money. Advertising sustained the old web; what sustains the new one? Conversational platforms depend on trust and the illusion of impartiality. Yet monetisation inevitably seeks leverage. Sponsored knowledge capsules? Paid prominence within summaries? Subtle weighting of product mentions? However it manifests, the appearance of neutrality will be fragile.

Coda: The Irony of Access

Two decades ago, Google promised to make the world’s information universally accessible and useful. It succeeded so completely that to google became a verb. Now, as the company confronts its first credible rival in interface design, that same ambition turns back on itself. Accessibility becomes mediation; openness becomes curation.

OpenAI’s challenge is not merely to dethrone Google but to decide what replaces the open web as our collective memory. The risk is not that information will disappear, but that it will become something we consume rather than explore. The web will remain infinite, but will start to become invisible.

The Interface Wars are not about who owns the servers or the data. They are about who controls the act of curiosity itself, and whether, in the age of intelligent browsers, curiosity remains a human privilege at all.


James Richards

Lead Writer, No Latency

James is a professional writer and editor with a background in journalism and publishing, specialising in clear, structured writing on complex technical and commercial subjects.

He has over fifteen years’ experience working across journalism, publishing and professional writing, producing content for both B2B and B2C audiences. His work spans technology, finance and professional services, combining narrative discipline with a deep respect for accuracy and tone.


Peter Franks

Founder & Editor, No Latency

Peter writes long-form analysis on technology, gaming and artificial intelligence - focusing on the systems, incentives and strategic decisions shaping the modern software economy.

He has spent 20+ years working with software and games companies across Europe, advising founders, executives and investors on leadership and organisational design. He is also the founder of Neon River, a specialist executive search firm.