
Ollama Adds Apple MLX Support, Boosts Mac Model Performance

Ollama Expands Local Model Capabilities

Ollama, a runtime for running large language models locally, has introduced two major enhancements in its latest preview release (Ollama 0.19). First, the platform now supports MLX, Apple’s open‑source machine‑learning framework tailored to Apple Silicon chips (M1 and later). Second, Ollama has added support for Nvidia’s NVFP4 compression format, which reduces the memory footprint of supported models.

Together, these upgrades are positioned to deliver noticeably faster performance on Macs equipped with Apple Silicon. The company says the combination of MLX support and NVFP4 compression promises “significantly improved performance” for users who meet the hardware requirements: an Apple Silicon Mac with at least 32 GB of RAM is needed to run the supported model.

At launch, the preview supports a single model: the 35‑billion‑parameter variant of Alibaba’s Qwen 3.5. While the hardware demands are high by typical consumer standards, the targeted audience includes developers, researchers, and hobbyists who are experimenting with local AI models.
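For developers and hobbyists in that audience, interaction typically happens through Ollama's local REST API (served by default on port 11434). Below is a minimal sketch of querying a locally running instance; the model tag `qwen3.5:35b` is an assumption for illustration, since the article does not give the preview's exact tag.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

# ASSUMPTION: the exact tag for the Qwen 3.5 35B preview model may differ.
MODEL_TAG = "qwen3.5:35b"


def build_request(prompt: str, model: str = MODEL_TAG) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def generate(prompt: str) -> str:
    """Send the prompt to a locally running Ollama server; return the reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server with the model pulled):
#   generate("Summarize MLX in one sentence.")
```

With `"stream": False`, the server returns a single JSON object rather than a stream of chunks, which keeps the client code short.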

The timing of these enhancements coincides with a surge in interest in running large language models locally. The open‑source project OpenClaw, for example, quickly accumulated over 300,000 stars on GitHub and generated widespread attention, especially in China. Users are increasingly seeking alternatives to cloud‑based services that impose rate limits or require costly subscriptions, such as Claude Code or ChatGPT Codex. By enabling more efficient local execution, Ollama aims to address these pain points.

In addition to the MLX integration, Ollama recently expanded its Visual Studio Code integration, further streamlining the workflow for developers who wish to incorporate local AI models into their coding environment.

Overall, Ollama’s latest preview release positions the platform as a more viable option for users who want high‑performance AI capabilities without relying on external cloud services. The focus on Apple Silicon, combined with memory‑saving compression techniques, reflects a broader industry trend toward on‑device AI processing.


Source: Ars Technica
