Amazon Issues Cease‑And‑Desist to Perplexity Over Agentic Shopping Bot
Amazon’s Legal Threat to Perplexity
Amazon has formally demanded that Perplexity stop operating its AI-driven shopping assistant, Comet, within the Amazon online store. The e-commerce giant sent the cease-and-desist letter after multiple prior warnings that the bot was violating Amazon's terms of service by failing to identify itself as an agent acting on a user's behalf. Both companies confirmed the exchange publicly, and Amazon's statement emphasized that third-party applications that facilitate purchases should operate transparently and respect the decisions of service providers.
Perplexity’s Defense
Perplexity addressed the legal threat in a blog post titled "Bullying is not innovation." The startup argued that because Comet acts only under direct human instruction, it should inherit the same permissions as the human user it serves and therefore has no obligation to disclose its agentic status. In Perplexity's view, the bot's activity is equivalent to a human manually browsing and purchasing items, which makes a separate agent-identification requirement unnecessary.
Amazon’s Counter‑Argument
Amazon countered Perplexity's claim by pointing to a range of existing third-party services, such as food-delivery apps, ride-hailing platforms, and online travel agencies, that clearly identify themselves when interacting with merchants. The company asserted that the straightforward solution for Perplexity would be to label Comet as an agent, after which the bot could continue to operate on the platform. Amazon also warned that it retains the right to block any third-party agentic shopping tool, noting that it already operates its own shopping assistant, Rufus.
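In practice, identifying as an agent usually comes down to declaring the bot's identity in the request itself, most commonly through a descriptive User-Agent header, rather than presenting as an ordinary browser session. The sketch below illustrates the idea in Python; the agent name, version, and contact URL are hypothetical placeholders, not identifiers used by Comet or required by Amazon.

```python
# Minimal sketch: an agentic shopping bot identifying itself in an HTTP request.
# "ExampleShoppingAgent" and the contact URL are hypothetical placeholders,
# not Comet's actual user agent or anything Amazon has published.
import urllib.request

AGENT_UA = (
    "ExampleShoppingAgent/1.0 "
    "(+https://example.com/agent-info; acting on behalf of a user)"
)

def fetch(url: str) -> bytes:
    # Send the request with an explicit, descriptive User-Agent header
    # instead of impersonating a standard desktop browser.
    req = urllib.request.Request(url, headers={"User-Agent": AGENT_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    page = fetch("https://example.com/product/123")
    print(len(page), "bytes fetched")
```

The point of the sketch is only that the receiving platform can then recognize, rate-limit, or block the agent by name, which is the kind of transparency Amazon says it expects.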
Underlying Concerns About Platform Control
Perplexity suggested that Amazon's potential blocking of Comet may be motivated by a desire to protect its advertising and product-placement revenue, implying that a bot that follows a user's instructions rather than being swayed into spontaneous, higher-priced purchases could undermine Amazon's commercial interests. The dispute highlights a broader tension between platform owners seeking to control how external bots interact with their services and developers advocating for more open, agentic AI capabilities.
Historical Context: Cloudflare Scraping Allegations
The conflict revives earlier controversy surrounding Perplexity’s web‑scraping behavior. Cloudflare previously published research accusing Perplexity of accessing websites while deliberately ignoring explicit requests to block AI bots. Some observers defended Perplexity, arguing that the AI’s behavior mirrored that of a human user who requests information about a specific site. However, the research indicated that Perplexity employed techniques to conceal its identity when a site opted out of bot traffic, raising questions about its compliance with site policies.
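Sites typically publish such opt-outs in a robots.txt file that names the crawlers and AI bots they disallow, so a compliant bot checks those rules before fetching a page. A minimal sketch of that check in Python, assuming a hypothetical bot name rather than Perplexity's actual user agent, is shown below.

```python
# Minimal sketch: honoring a site's robots.txt before crawling.
# "ExampleAIBot" is a hypothetical crawler name, not Perplexity's real user agent.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

BOT_NAME = "ExampleAIBot"

def allowed_to_fetch(url: str) -> bool:
    parts = urlparse(url)
    rp = RobotFileParser()
    # robots.txt lives at the root of the host being crawled.
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt rules
    return rp.can_fetch(BOT_NAME, url)

if __name__ == "__main__":
    target = "https://example.com/some-article"
    print("fetch allowed:", allowed_to_fetch(target))
```

Cloudflare's complaint, in essence, was that this kind of check was being bypassed and that the crawler's identity was obscured when a site had said no.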
Industry Perspectives
TechCrunch reported on the Cloudflare incident, suggesting that it foreshadowed the challenges that could arise as agentic AI becomes more prevalent in consumer activities such as shopping, travel bookings, and restaurant reservations. The debate centers on whether websites should block bots outright or develop frameworks that allow bots to operate transparently while respecting the preferences of service providers.
Implications for the Future of Agentic AI
The Amazon‑Perplexity dispute serves as a microcosm of the evolving relationship between large platforms and emerging AI agents. As more developers create bots capable of performing tasks traditionally reserved for human users, platforms may need to establish clear policies regarding identification, consent, and permissible actions. The outcome of this particular conflict could set a precedent for how other e‑commerce sites and digital services handle third‑party AI agents moving forward.