If you’re thinking of letting your employees use AI browsers like Comet and Atlas, think again.
That’s the advice in a recent report from influential global technology advisory firm Gartner.
“Agentic browsers, or what many call AI browsers, have the potential to transform how users interact with websites and automate transactions while introducing critical cybersecurity risks,” explained the report written by Gartner analysts Dennis Xu, Evgeny Mirolyubov, and John Watts.
“CISOs must block all AI browsers in the foreseeable future to minimize risk exposure,” they wrote.
MJ Kaufmann, an author and instructor with O’Reilly Media, an operator of a learning platform for technology professionals, in Boston, noted that AI browsers create risk by hoovering a user’s data.
“AI browsers create a security problem because their sidebars can unintentionally capture whatever is visible in an employee’s open tabs, sending sensitive data like internal tools, credentials, or confidential documents to an external AI back-end without the user realizing it,” she told TechNewsWorld.
An AI browser has a unique understanding of what you’re doing in a way that very few platforms do, added Alex Lisle, CTO of Reality Defender, a developer of AI-powered tools to detect deepfakes and synthetic media, in New York City.
“When you think about websites, they are siloed by each browser tab,” he told TechNewsWorld. “That’s not the case with an AI browser. It understands all the tabs that are open, it understands all the data in them, and uses this to create a better context for you. It’s trying to make your life easier, but at the same time it’s slurping up that vast quantity of data.”
Dan Pinto, CEO and co-founder of Fingerprint, a device intelligence and browser fingerprinting company in Chicago, also pointed out that with AI browsers, the AI assistant becomes part of the browsing experience, interpreting pages and acting on hidden instructions, even if they’re malicious, because that’s what it was designed to do.
“The danger is that the AI assistant may take action on a user’s behalf,” he told TechNewsWorld. “This can include clicking on malicious links, filling out forms, and sending valuable personal information — all without the user being aware.”
Breaking Security Assumptions
The concern Gartner raised about AI browsers sending information such as active web content, open tabs, and even browsing history to a cloud back-end is a real security issue, agreed Chris Anderson, CEO of ByteNova, a developer of edge AI technologies, in San Francisco.
“Most people don’t fully grasp how much sensitive information sits in their browser at any moment,” he told TechNewsWorld. “That data isn’t always something you can just ‘reset’ if it leaks. Internal dashboards, financial portals, patient records, you name it. Once it’s out, it’s out.”
AI browsers are moving beyond passive assistance into autonomous action, putting traditional browser security models under strain.
As organizations rapidly adopt agentic AI, the Model Context Protocol (MCP) and autonomous browsing capabilities, a pattern is emerging, observed Randolph Barr, CISO of Cequence Security, a global API security and bot management company.
“AI-native browsers are introducing system-level behaviors that traditional browsers have intentionally restricted for decades,” he told TechNewsWorld. “That shift breaks long-standing assumptions about how secure a browser environment is supposed to be.”
He waved a red flag over another pattern. “The real exposure emerges when individuals install AI browsers on their personal devices,” he said. “We know from every technology adoption wave — cloud apps, messaging platforms, AI assistants — that employees first test these tools at home.”
“With AI browsers,” he continued, “curiosity will drive rapid experimentation. Once users become comfortable with these tools at home, those behaviors inevitably bleed into the workplace through BYOD access, browser sync features, or personal devices used for remote work.”
“What’s more concerning is how easy AI browsers are to detect and how quickly adversaries can scale that detection,” he added. “AI browsers introduce unique fingerprints in their APIs, extensions, DOM behavior, network patterns, and agentic actions. Attackers can identify them with a few lines of JavaScript or by probing for AI-specific behaviors that differ from traditional browsers.”
“With AI-driven classification models, bad actors can now fingerprint AI browsers across millions of sessions automatically,” he explained. “At scale, that enables targeted attacks against users running these higher-risk, agent-enabled environments.”
He warned that AI browsers are evolving faster than the guardrails that traditionally protect end users and corporate environments.
“Transparency around system-level capabilities, independent audits, and the ability to fully control or disable embedded extensions are table stakes if these browsers want to be considered for regulated or sensitive workflows,” he said. “We are approaching a future where the use of AI agents will outpace the readiness of security measures.”
“Advisories like Gartner’s help highlight the gaps and hopefully drive the industry toward more secure, transparent designs before these tools become deeply embedded in enterprise ecosystems,” he added.
Assess AI Back-End
Gartner also noted that it’s possible to mitigate AI browser risks by assessing the back-end AI services that power an AI browser to determine whether the security measures in place are acceptable to an organization.
“In practice, this advice is extremely challenging,” maintained Will Tran, vice president for research at Spin.AI, a developer of SaaS security solutions, in Palo Alto, Calif. “Proprietary AI models are ‘black boxes.’ The vendor will not allow customers to audit the model’s internal workings, its training data, or its specific prompt processing logic.”
“There are also articles indicating that the AI vendors themselves do not fully comprehend the black box they’ve created,” he told TechNewsWorld.
“While this advice makes sense, I don’t think it’s practical at all,” added Akhil Verghese, co-founder and CEO of Krazimo, a provider of curated artificial intelligence development and consulting services, in Dover, Del.
“AI browsers are pretty closed off about their back-ends or any processing that happens before the AI provider even looks at the data,” he told TechNewsWorld. “The terms of service of the models or the browser may change. Is it really practical to expect individuals to stay on top of that?”
Employee Training Isn’t Enough
Even if an organization believes an AI browser provider addresses its risk concerns, Gartner recommends that employees be educated that anything they are viewing could potentially be sent to the AI service back-end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions.
“Educating people about this is critical; however, you cannot stop at simply telling them once,” said Erich Kron, CISO advisor at KnowBe4, a security awareness training provider, in Clearwater, Fla.
“This is a message that will need to be repeated on a regular basis so that it is fresh in the minds of employees when they are using these browsers,” he told TechNewsWorld. “If we don’t continue to remind employees, they are simply going to get tied up in doing their work and forget the warning.”
Education, though, may not be enough to prevent employees from leaking data through AI browsers. “With so much potential for gaining efficiency by using AI to automate routine tasks, it may not be realistic to expect that employees will adjust their practices when they don’t see potential harm in the kinds of data that they are working with,” contended Chris Hutchins, founder and CEO of Hutchins Data Strategy Consultants, a healthcare-focused advisory firm, in Nashville, Tenn.
“This can be a shadow IT problem and create further problems when IT and info security have no visibility into what data is being used, how it is being used, or where it is going,” he told TechNewsWorld.
However, Lionel Litty, CISO and chief security architect at Menlo Security, a browser security provider in Mountain View, Calif., cautioned that even if an organization trusts its AI browser vendor and is comfortable with data sharing, it needs hard guardrails around how the browser operates.
“Limit the sites it can reach, apply strict DLP controls and scan anything it downloads,” he told TechNewsWorld. “And make sure you have a strategy to defend these browsers against vulnerabilities. They can be led astray to dark corners of the web, and URL filtering alone isn’t enough.”
