What's new on Article Factory and the latest from the generative AI world

Google Appoints Amin Vahdat as Chief Technologist for AI Infrastructure

Google has elevated longtime AI infrastructure architect Amin Vahdat to the newly created role of chief technologist for AI infrastructure, reporting directly to CEO Sundar Pichai. The move underscores the strategic importance of AI compute as Alphabet plans to spend up to $93 billion on capital expenditures in 2025. Vahdat, a former professor who holds a PhD from UC Berkeley, has driven key projects such as the seventh‑generation TPU "Ironwood," the high‑speed Jupiter network, the Borg cluster manager, and the Axion Arm‑based CPUs. His promotion signals Google's commitment to maintaining a competitive edge in the fast‑evolving AI hardware landscape.

Google Unveils Ironwood TPU with Record 1.77PB Shared Memory

Google introduced its seventh‑generation Tensor Processing Unit, dubbed Ironwood, at a recent Hot Chips event. The dual‑die chip delivers 4,614 TFLOPS of FP8 performance and pairs each die with eight stacks of HBM3e, for 192 GB of memory per chip. When scaled to a 9,216‑chip pod, the system reaches 1.77 PB of directly addressable memory, the largest shared‑memory configuration ever recorded for a supercomputer. The architecture includes advanced reliability features, liquid‑cooling infrastructure, and AI‑assisted design optimizations, and is already being deployed in Google Cloud data centers for large‑scale inference workloads.
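The pod-level memory figure follows directly from the per-chip numbers above. A quick back-of-the-envelope check (assuming decimal units, i.e. 1 PB = 10^6 GB; the article does not state which convention Google uses):

```python
# Sanity check: 9,216 chips × 192 GB of HBM3e per chip ≈ 1.77 PB per pod.
CHIPS_PER_POD = 9_216
HBM_PER_CHIP_GB = 192  # eight HBM3e stacks per chip

pod_memory_gb = CHIPS_PER_POD * HBM_PER_CHIP_GB
pod_memory_pb = pod_memory_gb / 1_000_000  # decimal prefixes assumed

print(f"{pod_memory_gb:,} GB ≈ {pod_memory_pb:.2f} PB")
# → 1,769,472 GB ≈ 1.77 PB
```

The numbers line up: 9,216 × 192 GB is 1,769,472 GB, which rounds to the 1.77 PB headline figure.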