What's new on Article Factory and the latest from the generative AI world

Ollama Adds Apple MLX Support, Boosts Mac Model Performance

Ollama, a runtime for running large language models locally, has announced preview support for Apple's open‑source MLX framework, along with Nvidia's NVFP4 compression format, Ars Technica reports. The update targets Apple Silicon Macs with at least 32 GB of RAM and currently supports Alibaba's 35‑billion‑parameter Qwen 3.5 model. The changes aim to improve caching, memory efficiency, and overall speed, reflecting growing interest in running AI models on personal machines amid frustration with cloud‑based rate limits and subscription costs. Read more →
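
For readers who want to try a local model once it lands, here is a minimal sketch using Ollama's official Python client (`pip install ollama`). The model tag `qwen3.5` is an assumption; the announcement does not say what tag Ollama will publish the model under.

```python
# Minimal sketch: chatting with a locally served model through Ollama's
# official Python client (https://github.com/ollama/ollama-python).
# Assumes the Ollama daemon is running and the model has been pulled,
# e.g. `ollama pull qwen3.5` -- the "qwen3.5" tag is a guess, not confirmed.
import ollama

response = ollama.chat(
    model="qwen3.5",  # hypothetical tag for Alibaba's Qwen 3.5 model
    messages=[
        {"role": "user", "content": "Summarize what Apple MLX is in one sentence."},
    ],
)

# The reply text is under message.content in the response.
print(response["message"]["content"])
```

Everything runs on the local machine, so there are no per‑request rate limits or API fees, which is exactly the appeal the announcement leans on.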