What's new on Article Factory and the latest in the generative AI world

AMD Unveils MI355X DLC Rack Featuring 128 GPUs and 2.6 Exaflops FP4 Performance

AMD used the Hot Chips event to detail its Instinct MI350 family and the flagship MI355X DLC rack. The direct-liquid-cooled rack houses 128 GPUs and 36 TB of HBM3e memory, delivering up to 2.6 exaflops of FP4 performance. Flexible node designs support both air and liquid cooling, with an 8-GPU configuration reaching 73.8 petaflops at FP8. AMD also outlined its roadmap, noting the MI400 slated for 2026 with HBM4 and higher throughput, and drew brief comparisons with Nvidia's upcoming Vera Rubin systems. Read more →
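The rack-level figures imply some per-GPU numbers worth sanity-checking. The sketch below derives them from the totals quoted above; the per-GPU values and the binary-TB reading of the memory figure are inferences, not official AMD specifications.

```python
# Back-of-envelope check of the quoted MI355X DLC rack figures.
# All per-GPU values below are derived from the rack/node totals
# in the blurb; they are assumptions, not official per-GPU specs.
gpus_per_rack = 128
rack_fp4_exaflops = 2.6
rack_hbm3e_tb = 36
node_gpus = 8
node_fp8_pflops = 73.8

fp4_per_gpu_pflops = rack_fp4_exaflops * 1000 / gpus_per_rack  # ~20.3 PFLOPS
hbm_per_gpu_gb = rack_hbm3e_tb * 1024 / gpus_per_rack          # 288 GB, if TB is binary
fp8_per_gpu_pflops = node_fp8_pflops / node_gpus               # ~9.2 PFLOPS

print(f"FP4 per GPU:   ~{fp4_per_gpu_pflops:.1f} PFLOPS")
print(f"HBM3e per GPU: ~{hbm_per_gpu_gb:.0f} GB")
print(f"FP8 per GPU:   ~{fp8_per_gpu_pflops:.1f} PFLOPS")
```

The numbers are internally consistent: FP4 throughput per GPU comes out at roughly twice the FP8 figure, as expected for halved-precision math.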

Google Unveils Ironwood TPU with Record 1.77PB Shared Memory

Google introduced its seventh-generation Tensor Processing Unit, dubbed Ironwood, at the recent Hot Chips event. The dual-die chip delivers 4,614 TFLOPS of FP8 performance and pairs each die with eight stacks of HBM3e, providing 192 GB of memory per chip. When scaled to a 9,216-chip pod, the system reaches 1.77 PB of directly addressable memory, the largest shared-memory configuration ever recorded for a supercomputer. The architecture includes advanced reliability features, liquid-cooling infrastructure, and AI-assisted design optimizations, and is already deployed in Google Cloud data centers for large-scale inference workloads. Read more →
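The pod-scale memory figure follows directly from the per-chip numbers. The sketch below checks that arithmetic; the 24 GB per-stack capacity and the pod-wide FP8 total are derived values, not figures quoted in the announcement.

```python
# Sanity-check of the Ironwood pod-scale figures quoted above.
# Per-stack capacity and pod-wide FP8 throughput are derived here;
# only the per-chip and pod-size numbers come from the blurb.
chips_per_pod = 9216
hbm_per_chip_gb = 192
stacks_per_chip = 8          # assumption: 8 HBM3e stacks per chip
chip_fp8_tflops = 4614

gb_per_stack = hbm_per_chip_gb / stacks_per_chip       # 24 GB per stack
pod_memory_gb = chips_per_pod * hbm_per_chip_gb        # 1,769,472 GB
pod_memory_pb = pod_memory_gb / 1e6                    # ~1.77 PB (decimal)
pod_fp8_exaflops = chips_per_pod * chip_fp8_tflops / 1e6

print(f"{gb_per_stack:.0f} GB per HBM3e stack")
print(f"{pod_memory_pb:.2f} PB of pod-wide memory")
print(f"~{pod_fp8_exaflops:.1f} EF of pod-wide FP8 (derived)")
```

The 9,216 × 192 GB product lands at 1.769 PB, matching the headline 1.77 PB when rounded.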
