What's new on Article Factory and the latest from the generative AI world

Amazon in Talks to Invest $50B in OpenAI

Amazon is reportedly negotiating a major investment of at least $50 billion in OpenAI, which is seeking $100 billion in new funding that could lift its valuation to $830 billion. Amazon CEO Andy Jassy is leading talks with OpenAI CEO Sam Altman, while OpenAI also explores capital from sovereign wealth funds and tech giants. The deal is expected to close by the end of the first quarter, highlighting Amazon’s deep ties to the AI sector through its AWS partnership with Anthropic and a new $11 billion data‑center campus in Indiana. Read more →

Nvidia and Deutsche Telekom Launch €1B AI Data Center in Munich

Nvidia has signed a €1 billion partnership with Deutsche Telekom to create an AI factory in Munich. The project, called the Industrial AI Cloud, will deploy over 1,000 Nvidia DGX B200 systems and RTX Pro servers equipped with up to 10,000 Blackwell GPUs to deliver AI inference and other services to German companies while respecting data‑sovereignty rules. Early partners include Agile Robots, which will install server racks, and Perplexity, which will use the facility for in‑country AI inference. SAP will supply its Business Technology Platform, and the center is slated to begin operations in early 2026. Read more →
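The GPU figures above can be cross-checked with a quick back-of-the-envelope calculation. The 8-GPU configuration of a DGX B200 system is Nvidia's published spec; the RTX Pro server share is an inference from the quoted 10,000-GPU ceiling, not a number from the announcement:

```python
# Hedged sanity check of the Munich "Industrial AI Cloud" GPU figures.
# gpus_per_dgx = 8 is Nvidia's published DGX B200 configuration;
# the RTX Pro remainder is inferred, not stated in the announcement.
dgx_systems = 1_000
gpus_per_dgx = 8
total_gpus = 10_000          # "up to 10,000 Blackwell GPUs" per the article

dgx_gpus = dgx_systems * gpus_per_dgx
rtx_pro_gpus = total_gpus - dgx_gpus

print(f"DGX B200 GPUs: {dgx_gpus}")        # 8000
print(f"Implied RTX Pro GPUs: {rtx_pro_gpus}")  # 2000
```

So the 1,000+ DGX systems alone would account for roughly 8,000 of the Blackwell GPUs, with the RTX Pro servers plausibly supplying the balance.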

Microsoft signs $9.7B five‑year AI cloud capacity deal with Australia's IREN

Microsoft has entered a five‑year, $9.7 billion agreement with Australian data‑center operator IREN to secure additional AI cloud capacity. The partnership will tap IREN’s Nvidia GB300 GPU infrastructure, slated for deployment at a Texas facility capable of delivering 750 megawatts of power. IREN is also investing in Dell‑supplied equipment worth about $5.8 billion. The deal positions Microsoft to meet rising demand for AI services on Azure and reflects a broader shift of compute assets from cryptocurrency mining to artificial‑intelligence workloads. Read more →

Huawei Ascend 950, Nvidia H200, and AMD Instinct MI300: Head‑to‑Head AI Chip Comparison

A side‑by‑side look at three leading AI accelerators—Huawei's Ascend 950 series, Nvidia's H200 (GH100 Hopper), and AMD's Instinct MI300 (Aqua Vanjaram). The comparison covers architecture, process technology, transistor counts, die size, memory type and capacity, bandwidth, compute performance across FP8, FP16, FP32, and FP64, and target scenarios such as large‑scale LLM training, inference, and high‑performance computing. Availability timelines differ, with each vendor positioning its chip for data‑center and HPC workloads. Read more →

Nvidia’s Data‑Center Sales Lean Heavily on Three Unnamed Customers

Nvidia’s latest earnings reveal that more than half of its data‑center revenue comes from just three undisclosed clients. The company reported roughly 53% of data‑center sales tied to these customers, amounting to billions of dollars. Industry observers speculate the trio could include Elon Musk’s xAI, an OpenAI‑Oracle partnership, and Meta, though Nvidia has not confirmed any identities. Analysts warn that such concentration creates a structural vulnerability: a shift by any of the three could leave a sizable gap in Nvidia’s financials. Geopolitical factors and recent chip restrictions add further uncertainty to the outlook. Read more →

Google Unveils Ironwood TPU with Record 1.77 PB Shared Memory

Google introduced its seventh‑generation Tensor Processing Unit, dubbed Ironwood, at a recent Hot Chips event. The dual‑die chip delivers 4,614 TFLOPS of FP8 performance and pairs each die with eight stacks of HBM3e, providing 192 GB of memory per chip. When scaled to a 9,216‑chip pod, the system reaches 1.77 PB of directly addressable memory—the largest shared‑memory configuration ever recorded for a supercomputer. The architecture includes advanced reliability features, liquid‑cooling infrastructure, and AI‑assisted design optimizations, and is already being deployed in Google Cloud data centers for large‑scale inference workloads. Read more →
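The headline 1.77 PB figure follows directly from the per-chip numbers quoted above, as a quick back-of-the-envelope check shows (all inputs are from the article; the aggregate pod compute line is simple extrapolation, not a Google-published figure):

```python
# Hedged sanity check of the Ironwood pod figures reported above.
# chips_per_pod and hbm_per_chip_gb come from the article; decimal
# units are assumed (1 PB = 1e6 GB, 1 EFLOPS = 1e6 TFLOPS).
chips_per_pod = 9_216
hbm_per_chip_gb = 192            # eight HBM3e stacks per dual-die chip
fp8_tflops_per_chip = 4_614

pod_memory_pb = chips_per_pod * hbm_per_chip_gb / 1e6
pod_fp8_eflops = chips_per_pod * fp8_tflops_per_chip / 1e6

print(f"Pod memory: {pod_memory_pb:.2f} PB")       # 1.77 PB, matching the record figure
print(f"Pod FP8 compute: {pod_fp8_eflops:.1f} EFLOPS")  # ~42.5 EFLOPS (extrapolated)
```

The memory total lines up with the reported record; the aggregate FP8 number is a naive chips-times-peak product and ignores any scaling overhead.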

NVIDIA Invests $5 Billion in Intel to Create Custom PC and Data‑Center CPUs

NVIDIA announced a $5 billion investment in Intel as part of a new collaboration to jointly develop custom x86 CPUs for PCs and data‑center servers. The partnership will blend NVIDIA’s leading GPU and AI chips with Intel’s processor expertise, manufacturing, and advanced packaging. Intel will build “NVIDIA‑custom x86 CPUs” that integrate NVIDIA’s RTX GPU chiplets, while both CEOs highlighted the strategic value of combining their technologies. The deal marks a rare alliance between the two rivals and could reshape the landscape for high‑performance computing. Read more →

Nvidia and Intel Unveil Strategic Partnership to Embed RTX GPU Chiplets in Intel Processors

Nvidia and Intel announced a major partnership that will integrate Nvidia's RTX GPU chiplets into Intel system‑on‑chips for data‑center and client CPUs. The deal, described as a multi‑billion‑dollar collaboration, positions Nvidia as a key customer of Intel and aims to boost Intel's integrated graphics capabilities while giving Nvidia a foothold in the x86 market. Executives from both companies highlighted the focus on data‑center and client CPU integration, and industry observers noted the potential impact on competitors such as AMD. The announcement also referenced recent U.S. government involvement in Intel's ownership structure. Read more →

Nvidia invests $5 billion in Intel to co‑develop custom PC and data‑center chips

Nvidia is putting $5 billion into Intel common stock as part of a new partnership that will see the two companies jointly create multiple generations of custom system‑on‑chip products for both PCs and data‑center workloads. Intel will build x86 SoCs that integrate Nvidia RTX GPU chiplets, while the firms will also combine their interconnect technologies such as Nvidia’s NVLink. The collaboration is intended to strengthen both companies against rivals like AMD and to broaden Nvidia’s AI and accelerated‑computing reach. The announcement coincided with a U.S. government stake in Intel and a SoftBank investment, sending Intel’s shares sharply higher. Read more →

Nvidia Invests $5B in Intel, Launches Joint AI Chip Collaboration

Nvidia announced a $5 billion equity investment in Intel, pairing the cash injection with a product collaboration that links Nvidia's AI and accelerated‑computing technology to Intel's leading CPUs via NVLink. The partnership aims to create integrated systems‑on‑chip for data‑center and personal‑device markets, including new laptop designs that fuse GPUs and CPUs. The deal arrives as the U.S. government has taken a roughly 10 percent stake in Intel under the CHIPS Act, and it underscores a broader push to strengthen American semiconductor capabilities. Read more →
