Translated from the original by Patrice Bernard (LinkedIn)
The promise of intelligent agents—digital assistants capable of performing everyday online tasks on behalf of their human users—is beginning to take shape in some web browsers. Unfortunately, when security experts at Guardio tested their resilience against common fraud scenarios, the results were disastrous.
Agentic AI may not yet be fully mature, but it’s advancing quickly. Its growing capabilities make it highly appealing to consumers who dream of simplifying online interactions by asking their “internet companion” to handle complex tasks autonomously. But as Guardio’s experiment shows, the question of fraud protection is becoming critical.
For their test, researchers selected Comet by Perplexity, one of the first true agentic browsers. They designed three classic attack scenarios familiar to most internet users today: a fake e-commerce site, a phishing email, and a webpage infected with malicious code.
The conclusion is worrying but clear: these systems are designed to fulfill user requests at any cost, with little to no safeguard against fraud—even against threats most people now recognize instinctively. They rely only on the base-level protections of their underlying platforms (e.g., Google Safe Browsing flags), which are woefully insufficient.
Guardio’s researchers argue that developers must teach AI agents basic fraud-prevention rules to prevent such vulnerabilities. But they also warn of an inevitable arms race: fraudsters will use AI to refine and adapt their tactics as defenses evolve.
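To make the idea concrete, here is a minimal sketch of what such a "basic fraud-prevention rule" might look like in practice: a pre-action check the agent runs on a URL before entering credentials or payment details. Every heuristic, threshold, and name below is hypothetical and illustrative, not Guardio's or Perplexity's actual approach.

```python
import re
from urllib.parse import urlparse

# Hypothetical guardrail: crude heuristics an agent could apply before
# submitting sensitive data on a page. Thresholds are illustrative only.
SUSPICIOUS_TLDS = {"zip", "top", "xyz"}  # example list, not exhaustive

def looks_risky(url: str) -> bool:
    """Return True if the URL trips any simple fraud heuristic."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        return True  # no TLS: never enter sensitive data
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True  # raw IP address instead of a domain name
    if host.count("-") >= 3 or len(host) > 50:
        return True  # long hyphenated hosts are common in phishing kits
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        return True
    return False

print(looks_risky("http://paypal-secure-login-update.example.xyz/"))  # True (no HTTPS)
print(looks_risky("https://www.paypal.com/signin"))  # False
```

Real defenses would of course go far beyond static heuristics—reputation services, content analysis, user confirmation steps—but even a check this naive would have been more than the agents in Guardio's test applied.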
For financial institutions and e-commerce players—the perennial prime targets—the implications could be dramatic. With these new methods, the attack vector is no longer just individuals, each a potential isolated victim, but the AI agents themselves, which may soon serve millions of users. At that scale, the real casualty could be trust in financial and digital institutions.