They support our InsurTech market watch
Become a sponsor: sourcing@astorya.vc
Reports

The next wave of fraud is coming with AI

Security firm Guardio shows how AI-powered agents like Comet can be tricked into phishing, fake e-commerce, and malware attacks—raising new fraud risks for finance and e-commerce.

Original text here from Patrice Bernard (LinkedIn)

The promise of intelligent agents—digital assistants capable of performing everyday online tasks on behalf of their human users—is beginning to take shape in some web browsers. Unfortunately, when security experts at Guardio tested their resilience against common fraud scenarios, the results were disastrous.

Agentic AI may not yet be fully mature, but it’s advancing quickly. Its growing capabilities make it highly appealing to consumers who dream of simplifying online interactions by asking their “internet companion” to handle complex tasks autonomously. But as Guardio’s experiment shows, the question of fraud protection is becoming critical.

Subscribe to our newsletter:

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

For their test, researchers selected Comet by Perplexity, one of the first true agentic browsers. They designed three classic attack scenarios familiar to most internet users today: a fake e-commerce site, a phishing email, and a webpage infected with malicious code.

  • Fake Walmart store: The agent repeatedly attempted to purchase an Apple Watch—filling in payment and shipping details—sometimes raising an alert but still proceeding.
  • Phishing email: Faced with a spoofed bank message, the AI followed the link and asked for login credentials, without giving the user a chance to spot red flags.
  • Embedded malicious instructions: When hidden prompts were slipped into content, the AI executed them blindly, falling for the trap like a novice.

The conclusion is worrying but clear: these systems are designed to fulfill user requests at any cost, with little to no safeguard against fraud—even against threats most people now recognize instinctively. They rely only on the base-level protections of their underlying platforms (e.g., Google’s safe browsing flags), which are woefully insufficient.

Guardio’s researchers argue that developers must teach AI agents basic fraud-prevention rules to prevent such vulnerabilities. But they also warn of an inevitable arms race: fraudsters will use AI to refine and adapt their tactics as defenses evolve.

For financial institutions and e-commerce players—the perennial prime targets—the implications could be dramatic. With these new methods, the attack vector is no longer just individuals, each a potential isolated victim, but the AI agents themselves, which may soon serve millions of users. At that scale, the real casualty could be trust in financial and digital institutions.

You may like these articles: