• Home  
  • AI browsers are a cybersecurity time bomb
- Technology

AI browsers are a cybersecurity time bomb

Web browsers are getting awfully chatty. They got even chattier last week after OpenAI and Microsoft kicked the AI browser race into high gear with ChatGPT Atlas and a “Copilot Mode” for Edge. They can answer questions, summarize pages, and even take actions on your behalf. The experience is far from seamless yet, but it […]

Web browsers are getting awfully chatty. They got even chattier last week after OpenAI and Microsoft kicked the AI browser race into high gear with ChatGPT Atlas and a “Copilot Mode” for Edge. They can answer questions, summarize pages, and even take actions on your behalf. The experience is far from seamless yet, but it hints at a more convenient, hands-off future where your browser does lots of your thinking for you. That future could also be a minefield of new vulnerabilities and data leaks, cybersecurity experts warn. The signs are already here, and researchers tell The Verge the chaos is only just getting started.

Atlas and Copilot Mode are part of a broader land grab to control the gateway to the internet and to bake AI directly into the browser itself. That push is transforming what were once standalone chatbots on separate pages or apps into the very platform you use to navigate the web. They’re not alone. Established players are also in the race, such as Google, which is integrating its Gemini AI model into Chrome; Opera, which launched Neon; and The Browser Company, with Dia. Startups are also keen to stake a claim, such as AI startup Perplexity — best known for its AI-powered search engine, which made its AI-powered browser Comet freely available to everyone in early October — and Sweden’s Strawberry, which is still in beta and actively going after “disappointed Atlas users.”

In the past few weeks alone, researchers have uncovered vulnerabilities in Atlas allowing attackers to take advantage of ChatGPT’s “memory” to inject malicious code, grant themselves access privileges, or deploy malware. Flaws discovered in Comet could allow attackers to hijack the browser’s AI with hidden instructions. Perplexity, through a blog, and OpenAI’s chief information security officer, Dane Stuckey, acknowledged prompt injections as a big threat last week, though both described them as a “frontier” problem that has no firm solution.

“Despite some heavy guardrails being in place, there is a vast attack surface,” says Hamed Haddadi, professor of human-centered systems at Imperial College London and chief scientist at web browser company Brave. And what we’re seeing is just the tip of the iceberg.

With AI browsers, the threats are numerous. Foremost, they know far more about you and are “much more powerful than traditional browsers,” says Yash Vekaria, a computer science researcher at UC Davis. Even more than standard browsers, Vekaria says “there is an imminent risk from being tracked and profiled by the browser itself.” AI “memory” functions are designed to learn from everything a user does or shares, from browsing to emails to searches, as well as conversations with the built-in AI assistant. This means you’re probably sharing far more than you realise and the browser remembers it all. The result is “a more invasive profile than ever before,” Vekaria says. Hackers would quite like to get hold of that information, especially if coupled with stored credit card details and login credentials often found on browsers.

Another threat is inherent to the rollout of any new technology. No matter how careful developers are, there will inevitably be weaknesses hackers can exploit. This could range from bugs and coding errors that accidentally reveal sensitive data to major security flaws that could let hackers gain access to your system. “It’s early days, so expect risky vulnerabilities to emerge,” says Lukasz Olejnik, an independent cybersecurity researcher and visiting senior research fellow at King’s College London. He points to the “early Office macro abuses, malicious browser extensions, and mobiles prior to [the] introduction of permissions” as examples of previous security issues linked to the rollout of new technologies. “Here we go again.”

Some vulnerabilities are never found — sometimes leading to devastating zero-day attacks, named as there are zero days to fix the flaw — but thorough testing can slash the number of potential problems. With AI browsers, “the biggest immediate threat is the market rush,” Haddadi says. “These agentic browsers have not been thoroughly tested and validated.”

But AI browsers’ defining feature, AI, is where the worst threats are brewing. The biggest challenge comes with AI agents that act on behalf of the user. Like humans, they’re capable of visiting suspect websites, clicking on dodgy links, and inputting sensitive information into places sensitive information shouldn’t go, but unlike some humans, they lack the learned common sense that helps keep us safe online. Agents can also be misled, even hijacked, for nefarious purposes. All it takes is the right instructions. So-called prompt injections can range from glaringly obvious to subtle, effectively hidden in plain sight in things like images, screenshots, form fields, emails and attachments, and even something as simple as white text on a white background.

Worse yet, these attacks can be very difficult to anticipate and defend against. Automation means bad actors can try and try again until the agent does what they want, says Haddadi. “Interaction with agents allows endless ‘try and error’ configurations and explorations of methods to insert malicious prompts and commands.” There are simply far more chances for a hacker to break through when interacting with an agent, opening up a huge space for potential attacks. Shujun Li, a professor of cybersecurity at the University of Kent, says “zero-day vulnerabilities are exponentially increasing” as a result. Even worse: Li says as the flaw starts with an agent, detection will also be delayed, meaning potentially bigger breaches.

It’s not hard to imagine what might be in store. Olejnik sees scenarios where attackers use hidden instructions to get AI browsers to send out personal data or steal purchased goods by changing the saved address on a shopping site. To make things worse, Vekaria warns it’s “relatively easy to pull off attacks” given the current state of AI browsers, even with safeguards in place. “Browser vendors have a lot of work to do in order to make them more safe, secure, and private for the end users,” he says.

For some threats, experts say the only real way to keep safe using AI browsers is to simply avoid the marquee features entirely. Li suggests people save AI for “only when they absolutely need it” and know what they’re doing. Browsers should “operate in an AI-free mode by default,” he says. If you must use the AI agent features, Vekaria advises a degree of hand-holding. When setting a task, give the agent verified websites you know to be safe rather than letting it figure them out on its own. “It can end up suggesting and using a scam site,” he warns.

Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates.


First Appeared on
Source link

Leave a comment

Your email address will not be published. Required fields are marked *

isenews.com  @2024. All Rights Reserved.