Moonshot AI Launches Kimi K2.5 Multimodal Model and Open-Source Coding Tool Kimi Code
New Multimodal Model Kimi K2.5
Moonshot AI unveiled Kimi K2.5, a model that processes text, images, and videos. The company says the model was trained on 15 trillion mixed visual and text tokens, giving it native multimodal capabilities. In benchmark tests, Kimi K2.5 performed on par with leading proprietary models and surpassed them on certain tasks, including coding and video-understanding benchmarks.
Open‑Source Coding Tool Kimi Code
To make the model’s coding strengths accessible, Moonshot released Kimi Code, an open-source coding assistant. Developers can invoke Kimi Code from the terminal or integrate it with editors such as VS Code, Cursor, and Zed. The tool accepts images and videos as input, letting users request code that replicates a visual interface or functionality shown in a media file.
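The announcement does not spell out Kimi Code's command syntax, but the kind of multimodal request it relies on can be illustrated with a short sketch. The example below assumes Kimi K2.5 is reachable through Moonshot's OpenAI-compatible API; the model identifier kimi-k2.5, the endpoint URL, and the file name mockup.png are illustrative assumptions to verify against Moonshot's platform documentation, not confirmed details from the release.

```python
# Minimal sketch: asking a Kimi multimodal model to reproduce a UI
# screenshot as code via Moonshot's OpenAI-compatible API.
# ASSUMPTIONS: the model name "kimi-k2.5" and the base_url below are
# illustrative; confirm both in Moonshot's platform docs.
import base64
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_MOONSHOT_API_KEY",        # key issued on Moonshot's platform
    base_url="https://api.moonshot.ai/v1",  # assumed OpenAI-compatible endpoint
)

# Encode the screenshot inline as a data URL so the request is
# self-contained and needs no separate file-upload step.
with open("mockup.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="kimi-k2.5",  # hypothetical identifier for the new model
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text",
             "text": "Write HTML and CSS that reproduce this interface."},
        ],
    }],
)
print(response.choices[0].message.content)
```

Passing the image as an inline data URL mirrors how image-to-code prompts typically work against OpenAI-style chat endpoints; a video input would follow the same content-part pattern if the deployment supports it.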
Competitive Landscape
The announcement places Moonshot alongside other AI labs offering specialized coding assistants, such as Anthropic’s Claude Code and Google’s Gemini CLI. Industry reports note that coding tools have become significant revenue drivers for AI companies, with rivals reporting substantial growth in annualized recurring revenue.
Funding and Growth
Moonshot AI was founded by Yang Zhilin, a researcher who previously worked at Google and Meta AI. The company has secured sizable funding rounds, including a recent Series B raise that valued it at several billion dollars, and is reportedly preparing another financing round at an even higher valuation.
Future Outlook
Moonshot’s release of Kimi K2.5 and Kimi Code signals a strategic focus on multimodal AI and developer‑centric tools. The company aims to leverage its multimodal model’s capabilities to differentiate its coding assistant in a crowded market, while continuing to attract investment to support further research and product development.