What's new at Article Factory and the latest in the world of generative AI

Google Appoints Amin Vahdat as Chief Technologist for AI Infrastructure
Google has elevated longtime AI infrastructure architect Amin Vahdat to the newly created role of chief technologist for AI infrastructure, reporting directly to CEO Sundar Pichai. The move underscores the strategic weight of AI compute as Alphabet plans to spend up to $93 billion on capital expenditures through 2025. Vahdat, a former professor with a PhD from UC Berkeley, has driven key projects such as the seventh‑generation TPU "Ironwood," the high‑speed Jupiter network, the Borg cluster manager, and the Axion Arm‑based CPUs. His promotion signals Google's commitment to maintaining a competitive edge in the fast‑evolving AI hardware landscape. Read more →

Google Explores Satellite Data Centers for AI with Project Suncatcher
Google is researching the concept of placing AI hardware in low‑Earth orbit through a project called Suncatcher. The plan envisions solar‑powered satellites carrying Tensor Processing Units (TPUs) to run machine‑learning models on continuous, clean energy. While the idea promises higher power efficiency and reduced carbon emissions, Google acknowledges significant technical hurdles such as radiation exposure, high‑speed inter‑satellite data links, and precise formation flying. Google's economic analysis suggests power efficiency comparable to Earth‑based data centers by the mid‑2030s, and the company aims to launch prototype satellites by 2027 to test the concept. Read more →

Google Unveils Project Suncatcher to Deploy AI Chips on Low‑Earth‑Orbit Satellites
Google announced Project Suncatcher, a moonshot initiative exploring the placement of its Tensor Processing Units (TPUs) on solar‑powered satellite constellations in low‑Earth orbit. The goal is to scale machine‑learning compute in space by creating swarms of satellites equipped with AI accelerators for tasks such as training, content generation, synthetic speech, vision, and predictive modeling. Google's senior director Travis Beals cited growing AI demand as a driver, while CEO Sundar Pichai noted that early tests show TPUs can survive intense radiation, though thermal management and on‑orbit reliability remain challenges. Read more →

Google's 'Moonshot' Project Suncatcher Aims to Build Space‑Based AI Data Centers
Google has unveiled Project Suncatcher, a research effort to place AI‑focused Tensor Processing Units on solar‑powered satellites, creating data centers in orbit. The company argues that space could offer near‑continuous solar energy, potentially making compute more sustainable. Key hurdles include ultra‑high‑speed inter‑satellite links, tight formation flying, radiation tolerance, and cost competitiveness. Google plans a joint launch with Planet to test prototype hardware by 2027, hoping the approach could become cost‑competitive with Earth‑based energy by the mid‑2030s. Read more →

Google's Project Suncatcher Aims to Deploy AI Data Centers in Space
Google is developing Project Suncatcher, a plan to place AI‑focused data centers on a free‑flying satellite constellation. The design calls for tightly spaced satellites, within a kilometer or even several hundred meters of one another, to maintain high‑bandwidth links: a formation tighter than any existing constellation, but one Google's models deem feasible. To keep costs down, Google intends to reuse Earth‑based hardware, testing its durability by exposing its latest Cloud TPU to intense radiation. Prototype satellites could launch by early 2027, with broader deployment targeted for the mid‑2030s, when launch costs may fall dramatically, offering a potential answer to the environmental and community challenges of terrestrial data centers. Read more →

Google Cloud Courts Next‑Generation AI Startups with Open Stack and Credits
Google Cloud is courting early‑stage AI companies with $350,000 in cloud credits, technical assistance, and go‑to‑market support. The firm promotes an open AI stack spanning custom TPUs, foundation models, and applications, aiming to win future unicorns before they grow large. Partnerships include TPU deployments with Fluidstack and collaborations with startups such as Lovable and Windsurf, while Google also hosts Anthropic's Claude and provides TPUs to OpenAI. The strategy reflects Google's broader commitment to open‑source tools and comes amid regulatory scrutiny of its search dominance. Read more →

Google Unveils Ironwood TPU with Record 1.77PB Shared Memory
Google introduced its seventh‑generation Tensor Processing Unit, dubbed Ironwood, at a recent Hot Chips event. The dual‑die chip delivers 4,614 TFLOPs of FP8 performance and pairs each die with eight stacks of HBM3e, providing 192 GB of memory per chip. Scaled to a 9,216‑chip pod, the system reaches 1.77 PB of directly addressable memory, the largest shared‑memory configuration ever recorded for a supercomputer. The architecture includes advanced reliability features, liquid‑cooling infrastructure, and AI‑assisted design optimizations, and is already being deployed in Google Cloud data centers for large‑scale inference workloads. Read more →
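The headline 1.77 PB figure follows directly from the per‑chip memory quoted above; a quick back‑of‑the‑envelope check, using only the chip count and HBM capacity from the summary:

```python
# Sanity check of Ironwood's pod-level shared-memory figure.
# Inputs (9,216 chips per pod, 192 GB HBM3e per chip) come from the summary above.
chips_per_pod = 9216
hbm_per_chip_gb = 192  # eight HBM3e stacks across the dual-die chip

total_gb = chips_per_pod * hbm_per_chip_gb
total_pb = total_gb / 1_000_000  # decimal petabytes: 1 PB = 10^6 GB

print(f"{total_gb:,} GB ≈ {total_pb:.2f} PB")  # 1,769,472 GB ≈ 1.77 PB
```

Note this uses decimal (SI) petabytes; in binary units (PiB) the total would come out slightly lower.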
