When Micron announced plans to build two new fabs in the U.S. in 2022, the company said only vaguely that both would come online by the end of the decade. Then, in 2023, it began to optimize its spending, which pushed back the start of production at these fabrication facilities. This week, the company outlined more precise timeframes for when its fabs in Idaho and New York will start operations: between calendar 2026 and calendar 2029.
"These fab construction investments are necessary to support supply growth for the latter half of this decade," a statement by Micron in its Q3 FY2024 financial results report reads. "This Idaho fab will not contribute to meaningful bit supply until fiscal 2027 and the New York construction capex is not expected to contribute to bit supply growth until fiscal 2028 or later. The timing of future [wafer fab equipment] spend in these fabs will be managed to align supply growth with expected demand growth."
Micron's fiscal year 2027 starts in September 2026, so the new fab near Boise, Idaho, is set to start operations between September 2026 and September 2027. The company's fiscal 2028 starts in September 2027, so the New York fab will likely begin operations in calendar 2028 or later, depending on demand for DRAM in the coming years. All told, Micron's U.S. memory fabs will begin operations between late 2026 and 2029, which aligns with the company's original plans.
Construction of the fab in Idaho is well underway. In contrast, construction of the New York facility has yet to begin as the company is working on regulatory and permitting processes in the state.
Micron's capital expenditure (CapEx) plan for FY2024 is approximately $8.0 billion, with year-over-year spending on wafer fabrication equipment (WFE) declining. In Q4 FY2024, the company will spend around $3 billion on fab construction, new wafer fab tools, and various expansions and upgrades.
Looking ahead to FY2025, the company plans a substantial increase in CapEx, targeting a mid-30s percentage of revenue to support various technological and facility advancements. In particular, it expects its quarterly CapEx to average above the roughly $3 billion level seen in the fourth quarter of FY2024, which means it plans to spend more than $12 billion in its fiscal 2025, which begins in late August.
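As a quick sanity check on those figures, here is a minimal back-of-the-envelope sketch. The quarterly run rate and the revenue share are the targets quoted above; the derived totals are rough illustrations, not Micron guidance.

```python
# Back-of-the-envelope check of the FY2025 CapEx figures discussed above.
# Assumptions for illustration only (not Micron disclosures): quarterly CapEx
# averages just above the ~$3B seen in Q4 FY2024, and full-year CapEx lands
# around the middle of the "mid-30s" percent-of-revenue target.

quarterly_capex_b = 3.0                    # $ billions, Q4 FY2024 run rate (floor)
fy2025_capex_b = 4 * quarterly_capex_b     # four quarters at or above that level
print(f"FY2025 CapEx floor: ~${fy2025_capex_b:.0f}B")          # ~$12B

capex_share_of_revenue = 0.35              # "mid-30s percentage of revenue"
implied_revenue_b = fy2025_capex_b / capex_share_of_revenue
print(f"Implied FY2025 revenue: ~${implied_revenue_b:.0f}B")   # ~$34B
```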
Half or more of the total CapEx increase in FY2025 (i.e., over $2 billion) will be allocated to the construction of the new fabs in Idaho and New York. The rest of the increase will fund high-bandwidth memory (HBM) assembly and test capacity, the buildout of fab and back-end facilities, and technology transitions to meet growing demand.
"Fab construction in Idaho is underway, and we are working diligently to complete the regulatory and permitting processes in New York," said Sanjay Mehrotra, chief executive officer of Micron, at the company's conference call with investors and financial analysts (via SeekingAlpha). "This additional leading-edge greenfield capacity, along with continued technology transition investments in our Asia facilities, is required to meet long-term demand in the second half of this decade and beyond. These investments support our objective to maintain our current bit share over time and to grow our memory bit supply in line with long-term industry bit demand."
Later this year, Intel is set to introduce its Xeon 6-branded processors, codenamed Granite Rapids (6x00P) and Sierra Forest (6x00E), and with them will come a slew of new server motherboards and pre-built server platforms. Notably, this will be the first generation for which Intel won't be offering any pre-builts of its own, having sold that business off to MiTAC last year.
To that end, MiTAC and its subsidiary Tyan were at this year's event to demonstrate what they've been up to since acquiring Intel's server business unit, as well as to show off the server platforms they're developing for the Xeon 6 family. Altogether, the companies had two server platforms on display – a compact 2S system, and a larger 2S system with significant expansion capabilities – as well as a pair of single-socket designs from Tyan.
The most basic platform that MiTAC had to show is their TX86-E7148 (Katmai Pass), a half-width 1U system that's the successor to Intel's D50DNP platform. Katmai Pass has two CPU sockets, supports up to 2 TB of DDR5-6400 RDIMMs over 16 slots (8 per CPU), and has two low-profile PCIe 5.0 x16 slots. Like its predecessor, this platform is aimed at mainstream servers that do not need a lot of storage or room to house bulky add-in cards like AI accelerators.
The company's other platform is the TX77A-E7142 (Deer Creek Pass), a considerably more serious offering that replaces Intel's M50FCP platform. This board can house up to 4 TB of DDR5-6400 RDIMMs across 32 slots (16 per CPU with 2DPC), and offers four PCIe 5.0 x16 slots, one PCIe 5.0 x8 slot, two OCP 3.0 slots, and 24 hot-swap U.2 bays. Deer Creek Pass can be used for general-purpose workloads, high-performance storage, and workloads that require GPUs or other special-purpose accelerators.
Meanwhile Tyan had the single-socket Thunder CX GC73A-B5660 on display. That system supports up to 2 TB of DDR5-6400 memory over 16 RDIMMs and offers two PCIe 5.0 x16 slots, one PCIe 4.0 x4 M.2 slot, two OCP 3.0 slots, and 12 hot-swappable U.2 drive bays.
Finally, Tyan's Thunder HX S5662 is an HPC server board specifically designed to house multiple AI accelerators and other large PCIe cards. This board supports one Xeon 6 6700-series processor, up to 1 TB of memory over eight DDR5-6400 RDIMMs, and has five traditional PCIe 5.0 x16 slots as well as two PCIe 5.0 x2 M.2 slots for storage.
MiTAC is expected to start shipments of these new Xeon 6 motherboards in the coming months, as Intel rolls out its next-generation datacenter CPUs. Pricing of these platforms is unknown for now, but expect it to be comparable to...
The CXL consortium has had a regular presence at FMS (which rechristened itself from the 'Flash Memory Summit' to the 'Future of Memory and Storage' this year). Back at FMS 2022, the consortium announced v3.0 of the CXL specifications, followed by CXL 3.1's introduction at Supercomputing 2023. Having started off as a host-to-device interconnect standard, CXL has slowly subsumed competing standards such as OpenCAPI and Gen-Z. As a result, the specifications have come to encompass a wide variety of use cases by building a protocol on top of the ubiquitous PCIe expansion bus. The consortium comprises heavyweights such as AMD and Intel, as well as a large number of startups attempting to play in different segments on the device side. At FMS 2024, CXL had a prime position in the booth demos of many vendors.
The migration of server platforms from DDR4 to DDR5, along with the rise of workloads demanding large RAM capacity (but not particularly sensitive to memory bandwidth or latency), has made memory expansion modules one of the first classes of widely available CXL devices. Over the last couple of years, we have had product announcements from Samsung and Micron in this area.
At FMS 2024, SK hynix was showing off their DDR5-based CMM-DDR5 CXL memory module with a 128 GB capacity. The company was also detailing their associated Heterogeneous Memory Software Development Kit (HMSDK) - a set of libraries and tools at both the kernel and user levels aimed at increasing the ease of use of CXL memory. This is achieved in part by considering the memory pyramid / hierarchy and relocating the data between the server's main memory (DRAM) and the CXL device based on usage frequency.
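Conceptually, this kind of tiering boils down to tracking how often pages are touched, keeping hot pages in DRAM and demoting cold ones to the CXL expander. The sketch below is a hypothetical illustration of that policy; the node IDs, threshold, and function names are assumptions made for clarity and do not reflect HMSDK's actual interfaces.

```python
# Illustrative sketch of usage-frequency-based placement between local DRAM
# and a CXL memory expander exposed as a separate (CPU-less) NUMA node.
# Node IDs and the hotness threshold are assumptions, not HMSDK specifics.

DRAM_NODE = 0        # server's local DDR5
CXL_NODE = 1         # CMM-DDR5 expander, one NUMA hop away
HOT_THRESHOLD = 64   # accesses per sampling window

def choose_node(access_count: int) -> int:
    """Keep frequently touched pages in DRAM, demote cold ones to CXL."""
    return DRAM_NODE if access_count >= HOT_THRESHOLD else CXL_NODE

# Example: page access counts gathered over one sampling window.
access_counts = {0x1000: 512, 0x2000: 3, 0x3000: 97, 0x4000: 0}
placement = {page: choose_node(count) for page, count in access_counts.items()}
print(placement)   # hot pages -> node 0 (DRAM), cold pages -> node 1 (CXL)
```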
The CMM-DDR5 CXL memory module comes in the SDFF form factor (E3.S 2T) with a PCIe 5.0 x8 host interface. The internal memory is based on 1α technology DRAM, and the device promises DDR5-class bandwidth and latency within a single NUMA hop. As these memory modules are meant for datacenters and enterprises, the firmware includes reliability, availability, and serviceability (RAS) features along with secure boot and other management capabilities.
SK hynix was also demonstrating Niagara 2.0, a hardware solution (currently based on FPGAs) that enables memory pooling and sharing: connecting multiple CXL memories so that different hosts (CPUs and GPUs) can optimally share their capacity. The previous version only allowed capacity sharing, but the latest version also enables data sharing. SK hynix had presented these solutions at CXL DevCon 2024 earlier this year, and some progress appears to have been made in finalizing the specifications of the CMM-DDR5 at FMS 2024.
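To make the pooling-versus-sharing distinction concrete, the sketch below models the bookkeeping a pool manager might do: exclusive capacity slices per host (what the previous Niagara version allowed) plus regions mapped by several hosts at once (what version 2.0 adds). All class names, fields, and sizes are illustrative assumptions, not SK hynix's design.

```python
# Simplified model of CXL memory pooling vs. sharing bookkeeping.
# All names and sizes are illustrative, not SK hynix's implementation.

class CXLPool:
    def __init__(self, total_gib: int):
        self.total_gib = total_gib
        self.allocations = {}          # host -> GiB exclusively owned
        self.shared_regions = {}       # region name -> set of hosts mapping it

    def allocate(self, host: str, gib: int) -> bool:
        """Give a host an exclusive slice of the pool (capacity pooling)."""
        used = sum(self.allocations.values())
        if used + gib > self.total_gib:
            return False
        self.allocations[host] = self.allocations.get(host, 0) + gib
        return True

    def share(self, region: str, hosts: list[str]) -> None:
        """Map one region into several hosts at once (data sharing)."""
        self.shared_regions.setdefault(region, set()).update(hosts)

pool = CXLPool(total_gib=1024)
pool.allocate("cpu-host-a", 256)
pool.allocate("gpu-host-b", 512)
pool.share("shared-buffer", ["cpu-host-a", "gpu-host-b"])
print(pool.allocations, pool.shared_regions)
```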
Micron unveiled the CZ120 CXL Memory Expansion Module last year, based on the Microchip SMC 2000 series CXL memory controller. At FMS 2024, Micron and Microchip demonstrated the module on a Granite Rapids server and provided additional insights into the SMC 2000 controller.
The CXL memory controller incorporates DRAM die failure handling, and Microchip provides diagnostics and debug tools to analyze failed modules. The controller also supports ECC, which forms part of the enterprise...