What's new on Article Factory and the latest from the generative AI world

Chinese Photonic AI Chips Claim Massive Speed Gains Over Nvidia GPUs

Researchers in China have unveiled photonic AI chips that reportedly outperform conventional Nvidia GPUs by up to 100 times on narrowly defined generative tasks. The hybrid ACCEL system combines optical and analog electronic components, while the all‑optical LightGen chip contains more than two million photonic neurons. Both platforms claim dramatic gains in speed and energy efficiency for image‑related workloads, though they target specialized applications rather than general‑purpose computing. Read more →

AMD Unveils MI355X DLC Rack Featuring 128 GPUs and 2.6 Exaflops FP4 Performance

AMD used the Hot Chips event to detail its Instinct MI350 family and the flagship MI355X DLC rack. Built from 2U nodes, the rack houses 128 GPUs and 36 TB of HBM3e memory and delivers up to 2.6 exaflops of FP4 performance. Flexible node designs support both air and liquid cooling, with an 8‑GPU configuration reaching 73.8 petaflops at FP8. AMD also touched on its roadmap, noting that the MI400, slated for 2026, will bring HBM4 and higher throughput, and briefly compared the lineup against Nvidia's upcoming Vera Rubin systems. Read more →
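
As a rough sanity check on those figures (a back‑of‑envelope sketch using only the numbers quoted above, not official AMD spec sheets), the rack‑level FP4 rate and the node‑level FP8 rate imply per‑GPU throughputs that differ by roughly the 2× factor normally expected when halving precision:

```python
# Back-of-envelope check of the figures quoted above (article numbers, not official specs).
rack_fp4_exaflops = 2.6        # rack-level FP4 throughput claimed for 128 GPUs
rack_gpus = 128
node_fp8_petaflops = 73.8      # 8-GPU node FP8 throughput claimed
node_gpus = 8

per_gpu_fp4 = rack_fp4_exaflops * 1000 / rack_gpus   # ~20.3 PFLOPS FP4 per GPU
per_gpu_fp8 = node_fp8_petaflops / node_gpus         # ~9.2 PFLOPS FP8 per GPU

# Halving precision (FP8 -> FP4) typically doubles peak throughput, so the two
# quoted figures should differ by roughly 2x; here the implied ratio is about 2.2.
print(f"FP4 per GPU: {per_gpu_fp4:.1f} PFLOPS")
print(f"FP8 per GPU: {per_gpu_fp8:.1f} PFLOPS")
print(f"FP4/FP8 ratio: {per_gpu_fp4 / per_gpu_fp8:.2f}")
```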

Nvidia AI Accelerator Sales Projected Near $400 Billion by 2028 Amid Expanding Hyperscaler Spending

Morningstar Equity Research forecasts that Nvidia's AI‑related sales could approach $400 billion by 2028, driven largely by its accelerator products. The analysis notes that hyperscaler cloud providers are expected to push annual capital expenditures beyond $450 billion by 2027, fueling demand for AI hardware. While growth remains robust, the pace is projected to decelerate after 2024, raising questions about sustainability. Competitive pressures from Broadcom, AMD and other semiconductor firms, together with rising energy needs, regulatory scrutiny and geopolitical factors, could temper Nvidia's dominance in the long run. Read more →

SanDisk and SK Hynix Partner to Introduce High Bandwidth Flash for AI Workloads

SanDisk and SK Hynix have announced a collaboration to develop High Bandwidth Flash (HBF), a NAND‑based memory stack designed to replace part of the traditional high‑bandwidth memory (HBM) in AI accelerators. The technology promises 8‑16 times the storage capacity of DRAM‑based HBM at comparable cost, along with energy savings from its non‑volatility. A prototype was shown at the Flash Memory Summit 2025, with sample modules slated for the second half of 2026 and the first AI hardware using HBF expected in early 2027. The partnership could help data‑center operators manage thermal and budget pressures as AI models grow larger. Read more →