OpenAI Launches Codex‑Spark, a Fast, Lightweight Coding Assistant Powered by Cerebras Chip


OpenAI Introduces Codex‑Spark

OpenAI announced a new lightweight version of its Codex coding tool, branded GPT‑5.3‑Codex‑Spark. The company describes Spark as a "daily productivity driver" for rapid prototyping and real‑time collaboration, contrasting it with the original Codex model, which is geared toward longer, more complex tasks.

Hardware Partnership with Cerebras

The performance boost behind Spark comes from a dedicated chip supplied by Cerebras, a long‑standing hardware partner of OpenAI. Spark runs on Cerebras' Wafer Scale Engine 3 (WSE‑3), the third generation of the company's wafer‑scale chip, which packs four trillion transistors. OpenAI called the integration the "first milestone" of a multi‑year agreement between the two firms, a partnership previously announced as being worth over $10 billion.

Design Goals and Capabilities

OpenAI emphasizes that Spark is built for the lowest possible latency, targeting workflows that demand extremely fast response times. The model is intended to support two complementary modes: a real‑time collaboration mode for rapid iteration and a longer‑running mode for deeper reasoning and execution. According to the company, the new chip excels at tasks that require minimal delay, opening up new interaction patterns and use cases.

Availability and Preview

Codex‑Spark is currently available as a research preview to ChatGPT Pro users within the Codex app. The launch was hinted at in a tweet from OpenAI CEO Sam Altman, who mentioned a "special thing launching to Codex users on the Pro plan" and said it "sparks joy for me."

Industry Context

Cerebras has been a notable player in AI hardware for over a decade, and its recent activities include raising $1 billion in fresh capital at a valuation of $23 billion, as well as discussions about an initial public offering. The collaboration with OpenAI positions Cerebras as a key supplier for next‑generation AI inference workloads.

Outlook

OpenAI and Cerebras both view Spark as the beginning of a broader exploration of fast‑inference capabilities. Cerebras CTO and co‑founder Sean Lie remarked that partnering with OpenAI and the developer community will reveal what fast inference can enable, from new interaction patterns to fundamentally different model experiences. The preview phase suggests that additional features and broader availability may follow as the partnership evolves.

Source: TechCrunch