The Risks of AI Browsers: A Cautionary Insight
Understanding the Caution from Gartner
In a recent advisory, Gartner, a leading research and advisory company, raised significant concerns regarding the adoption of AI browsers by organizations. The 13-page report, authored by analysts Dennis Xu, Evgeny Mirolyubov, and John Watts, highlights the potential risks associated with these innovative tools. While AI technologies are designed to enhance user experience, Gartner warns that their autonomous capabilities could pose serious cybersecurity threats.
The Autonomous Nature of AI Browsers
AI browsers have the capacity to navigate online environments and execute transactions independently. This capability can bypass traditional security measures, which raises alarms for organizations concerned about data safety. The analysts pinpoint several risks, including:
- Sensitive Data Leakage: As AI browsers access and process web content, they can inadvertently expose confidential information to unauthorized parties.
- Erroneous Transactions: Autonomous actions taken by the AI can lead to significant errors. For instance, if the browser is misled, it might perform actions that compromise an organization’s assets.
- Credential Abuse: Trusting AI browsers to navigate credentialed sites can result in credential abuse if they are directed to phishing websites.
The Importance of Security Settings
Gartner stresses the need for organizations to scrutinize default settings of AI browsers. Often, these settings prioritize user experience over security, which can create vulnerabilities. Certain types of user data, such as browsing history and active web sessions, might be transmitted to cloud services, heightening the risk of exposure if adequate precautions aren’t established.
The recommendation from Gartner is clear: organizations should implement measures to block AI browsers for now, given the myriad of risks—some of which may still be undiscovered—associated with this emerging technology.
The Underestimated Risks of AI Sidebars
While the report touches on the security risks tied to AI sidebars integrated within these browsers, this aspect appears to receive less emphasis. The functionality powered by large language models (LLMs) is susceptible to indirect prompt injection attacks. Consequently, while LLM-driven features like summarization can be beneficial, they carry their own share of vulnerabilities. The analysts advise caution, noting the potential for sensitive data leakage through these sidebars.
Spotlight on Perplexity Comet and Similar Tools
Gartner analysts paid particular attention to Perplexity Comet as an example of an AI browser showcasing agentic transaction capabilities. Not all AI browsers function in this way; however, Comet and OpenAI’s ChatGPT Atlas stand out for their inherent functionalities.
During their evaluations, the analysts noted that Comet relies on data processed via its servers to fulfill user queries. This means that any sensitive material accessible on the browser could be inadvertently transmitted to the service’s back end, thus exposing the information to unnecessary risk. The concern here is that individuals might view highly confidential data, especially when using advanced features typical of AI browsers.
Educating Employees on Risks
Even if organizations allow the use of AI browsers, employee education becomes vital to mitigating risks. Users must understand that any sensitive content on their screens could potentially be transmitted to external AI services. This awareness becomes crucial when utilizing features like summarization or behavioral automation within these browsers.
Furthermore, there’s a risk that employees might misuse AI browsers for automating tasks, leading to erroneous transactions that can affect internal systems. Misjudgments made by the AI due to inaccurate reasoning can have significant ramifications.
Strategic Recommendations for Companies
Given the substantial risks outlined, Gartner provides several recommendations for organizations contemplating the use of AI browsers:
- Strict Access Controls: Organizations should prevent employees from accessing, downloading, or installing AI browsers through network security measures.
- Risk-Based Approach: For those with lower risk tolerance, blocking installs is essential; however, companies that are more willing to experiment might pursue tightly controlled testing scenarios.
- Data Management Practices: For any pilot projects, analysts recommend disabling features that retain user data within browsers. It is also suggested that employees actively and regularly delete sensitive information stored by AI browsers to minimize exposure.
In summary, while AI browsers represent a remarkable innovation, organizations should approach their adoption with caution, addressing both present and potential risks.


