Perplexity Launches “Computer” AI Agent Platform with Cloud‑Based, Curated Integrations

Overview

Perplexity announced a new AI offering called Computer, positioned as a platform that can delegate work to other AI agents. The service is built to operate primarily in the cloud, keeping the core processing away from the user’s local machine. By placing the system inside a curated “walled garden,” Perplexity seeks to provide a more controlled environment compared with unregulated AI agent tools.

Functionality and Permissions

Computer is designed to act on user‑provided context files such as USER.MD, MEMORY.MD, SOUL.MD, and HEARTBEAT.MD. With the appropriate permissions and selected plugins, the agent can create, modify, or delete the user's files, giving it far more reach than a typical language‑model chat. That reach lets it carry out extended knowledge‑work tasks autonomously, running for long stretches without constant user supervision.
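The permission model described above can be sketched as a simple gate in front of file operations. This is an illustrative assumption, not Perplexity's actual implementation: the permission name `modify_files` and the allow‑list check are hypothetical, while the context file names come from the article.

```python
from pathlib import Path

# Context files named in the article; writes outside this set are refused.
CONTEXT_FILES = {"USER.MD", "MEMORY.MD", "SOUL.MD", "HEARTBEAT.MD"}

def write_context_file(path: str, content: str, granted: set[str]) -> bool:
    """Write a context file only if the (hypothetical) 'modify_files'
    permission has been granted by the user."""
    if "modify_files" not in granted:
        return False  # permission not granted; refuse the operation
    if Path(path).name not in CONTEXT_FILES:
        return False  # not a known context file; refuse the operation
    Path(path).write_text(content)
    return True
```

The point of the sketch is that destructive operations are opt‑in: with an empty permission set, every write is refused, which mirrors the article's claim that the agent acts on files only when granted the ability to do so.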

Security Measures

To address concerns about the open‑ended power of similar tools, Perplexity outlines two main safeguards. First, the core processing occurs in the cloud rather than on the user’s device, reducing local exposure. Second, the platform limits integrations to a curated list of vetted plugins, contrasting with the “wild west” of unverified extensions seen elsewhere. Perplexity likens this approach to an app‑store model, where users gain functionality without trusting unknown packages that could access their system.

Comparisons and Market Position

Computer is positioned as a more restrained alternative to OpenClaw, which has been described as an open web of AI agent tools. The analogy drawn by Perplexity suggests that while OpenClaw resembles an unrestricted frontier, Computer resembles a controlled marketplace akin to a major tech company’s app ecosystem. The platform also aims to compete with offerings such as Claude Cowork by optimizing subtasks and selecting the most suitable models for each component of a larger workflow.
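The claim about selecting the most suitable model for each component of a workflow amounts to a routing step. A minimal sketch, in which the task categories and model names are illustrative assumptions rather than anything Perplexity has disclosed:

```python
# Hypothetical routing table: map a subtask category to the model best
# suited for it, falling back to a general-purpose default.
ROUTING_TABLE = {
    "summarize": "small-fast-model",
    "code": "code-specialized-model",
    "research": "large-reasoning-model",
}

def route_subtask(task_type: str) -> str:
    """Return the model assigned to a subtask category, or the default."""
    return ROUTING_TABLE.get(task_type, "general-purpose-model")
```

Under this kind of scheme, a large workflow is decomposed into subtasks, each dispatched to a cheaper or more capable model as appropriate, which is the optimization the article attributes to Computer's competition with offerings like Claude Cowork.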

Potential Risks

Despite the safeguards, Perplexity acknowledges that large‑language‑model errors can still have consequential effects, especially if the agent works with data that is not backed up elsewhere. Past incidents involving similar toolkits—such as a case where a user’s emails were deleted against her will—highlight the importance of verification and careful oversight. Perplexity notes that while Computer strives to “button up, refine, and contain” the power of agentic AI, the risk of mistakes and security vulnerabilities remains a consideration for users.

Source: Ars Technica