One of the core challenges that Rapidus will face when it kicks off volume production of chips on its 2nm-class process technology in 2027 is lining up customers. With Intel, Samsung, and TSMC all slated to offer their own 2nm-class nodes by that time, Rapidus will need a clear advantage to attract customers away from its more established rivals. To that end, the company believes it has found its edge: fully automated packaging that will allow for shorter chip lead times than manned packaging operations.
In an interview with Nikkei, Rapidus' president, Atsuyoshi Koike, outlined the company's vision to use advanced packaging as a competitive edge for the new fab. The Hokkaido facility, which is currently under construction and is expected to begin equipment installation this December, is already slated to both produce chips and offer advanced packaging services within the same facility, an industry first. But ultimately, Rapidus' biggest plan to differentiate itself is to automate the back-end fab processes (chip packaging) to provide significantly faster turnaround times.
Rapidus is targeting back-end production in particular because, unlike front-end (lithography) production, it still relies heavily on human labor. No other advanced packaging fab has fully automated the process thus far; manual handling provides a degree of flexibility but slows throughput. With automation in place to handle this aspect of chip production, Rapidus would be able to increase chip packaging efficiency and speed, which is crucial as chip assembly tasks become more complex. Rapidus is also collaborating with multiple Japanese suppliers to source materials for back-end production.
"In the past, Japanese chipmakers tried to keep their technology development exclusively in-house, which pushed up development costs and made them less competitive," Koike told Nikkei. "[Rapidus plans to] open up technology that should be standardized, bringing down costs, while handling important technology in-house."
Financially, Rapidus faces a significant challenge, needing a total of ¥5 trillion ($35 billion) by the time mass production starts in 2027. The company estimates that ¥2 trillion will be required by 2025 for prototype production. While the Japanese government has provided ¥920 billion in aid, Rapidus still needs to secure substantial funding from private investors.
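As a rough back-of-the-envelope illustration of the funding gap implied by those figures (using only the amounts reported above; the dollar conversion is the article's own):

```python
# Back-of-the-envelope funding math using only the figures reported above.
# Amounts are in trillions of yen.
total_needed   = 5.00   # required by the time mass production starts in 2027
needed_by_2025 = 2.00   # estimated requirement for prototype production alone
government_aid = 0.92   # aid provided by the Japanese government so far

print(f"Still to be raised beyond government aid: ~¥{total_needed - government_aid:.2f} trillion")
# -> Still to be raised beyond government aid: ~¥4.08 trillion
```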
Due to its lack of track record and experience in chip production, as well as limited visibility into its chances of success, Rapidus is finding it difficult to attract private financing. The company is in discussions with the government to make it easier to raise capital, including potential loan guarantees, and is hopeful that new legislation will assist in this effort.
Kioxia Demonstrates Optical Interface SSDs for Data Centers

A few years back, the Japanese government's New Energy and Industrial Technology Development Organization (NEDO) allocated funding for the development of green datacenter technologies. With the aim of achieving up to 40% savings in overall power consumption, several Japanese companies have been developing an optical interface for their enterprise SSDs. And at this year's FMS, Kioxia had their optical interface on display.

For this demonstration, Kioxia took its existing CM7 enterprise SSD and created an optical interface for it. A PCIe card with on-board optics developed by Kyocera is installed in the server slot. An optical interface allows data transfer over long distances (40 m in the demo, with Kioxia promising cable lengths of up to 100 m in the future). This allows the storage to be kept in a separate room with minimal cooling requirements compared to the rack housing the CPUs and GPUs. Disaggregation of different server components will become an option as very high throughput interfaces such as PCIe 7.0 (with 128 GT/s rates) become available.

The demonstration of the optical SSD showed a slight loss in IOPS performance, but a significant latency advantage over a shipping enterprise SSD behind a copper network link. Obviously, there are advantages in wiring requirements and signal integrity maintenance with optical links. As this is a proof-of-concept demonstration, an industry-standard approach will be required if the technology is to gain adoption among different datacenter vendors. The PCI-SIG optical workgroup will need to get its act together soon to create a standards-based approach to this problem.
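To put those cable lengths and link rates in perspective, here is a minimal, illustrative sketch in Python. It assumes typical single-mode fiber (refractive index around 1.47) and ignores protocol overheads beyond line encoding, so these are ballpark numbers rather than anything Kioxia or Kyocera has published:

```python
# Ballpark figures only: raw PCIe link rates and the extra propagation delay
# an optical cable run adds. Not vendor-measured numbers.

SPEED_OF_LIGHT_M_S = 3.0e8
FIBER_REFRACTIVE_INDEX = 1.47      # assumed typical single-mode fiber

def pcie_x4_bandwidth_gb_s(gt_per_s: float, encoding_efficiency: float) -> float:
    """Approximate usable bandwidth of a x4 link in GB/s."""
    return gt_per_s * encoding_efficiency / 8 * 4

# PCIe 5.0 uses 128b/130b encoding; PCIe 7.0 uses PAM4 + FLITs (overhead ignored here).
print(f"PCIe 5.0 x4: ~{pcie_x4_bandwidth_gb_s(32, 128/130):.1f} GB/s")
print(f"PCIe 7.0 x4: ~{pcie_x4_bandwidth_gb_s(128, 1.0):.1f} GB/s")

def fiber_delay_ns(length_m: float) -> float:
    """One-way propagation delay through the optical cable."""
    return length_m / (SPEED_OF_LIGHT_M_S / FIBER_REFRACTIVE_INDEX) * 1e9

for length in (40, 100):
    print(f"{length} m of fiber adds ~{fiber_delay_ns(length):.0f} ns each way")

# Even a 100 m run adds only ~0.5 us per direction, which is small next to
# NVMe read latencies measured in tens of microseconds -- one reason why
# disaggregating storage from compute becomes practical.
```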
End of the Road: An AnandTech Farewell

It is with great sadness that I find myself penning the hardest news post I’ve ever needed to write here at AnandTech. After over 27 years of covering the wide – and wild – world of computing hardware, today is AnandTech’s final day of publication.

For better or worse, we’ve reached the end of a long journey – one that started with a review of an AMD processor, and has ended with the review of an AMD processor. It’s fittingly poetic, but it is also a testament to the fact that we’ve spent the last 27 years doing what we love, covering the chips that are the lifeblood of the computing industry.

A lot of things have changed in the last quarter-century – in 1997 NVIDIA had yet to even coin the term “GPU” – and we’ve been fortunate to watch the world of hardware continue to evolve over the time period. We’ve gone from boxy desktop computers and laptops that today we’d charitably classify as portable desktops, to pocket computers where even the cheapest budget device puts the fastest PC of 1997 to shame.

The years have also brought some monumental changes to the world of publishing. AnandTech was hardly the first hardware enthusiast website, nor will we be the last. But we were fortunate to thrive in the past couple of decades, when so many of our peers did not, thanks to a combination of hard work, strategic investments in people and products, even more hard work, and the support of our many friends, colleagues, and readers.

Still, few things last forever, and the market for written tech journalism is not what it once was – nor will it ever be again. So, the time has come for AnandTech to wrap up its work, and let the next generation of tech journalists take their place within the zeitgeist.

It has been my immense privilege to write for AnandTech for the past 19 years – and to manage it as its editor-in-chief for the past decade. And while I carry more than a bit of remorse in being AnandTech’s final boss, I can at least take pride in everything we’ve accomplished over the years, whether it’s lauding some legendary products, writing technology primers that still remain relevant today, or watching new stars rise in expected places. There is still more that I had wanted AnandTech to do, but after 21,500 articles, this was a good start.

And while the AnandTech staff is riding off into the sunset, I am happy to report that the site itself won’t be going anywhere for a while. Our publisher, Future PLC, will be keeping the AnandTech website and its many articles live indefinitely. So that all of the content we’ve created over the years remains accessible and citable. Even without new articles to add to the collection, I expect that many of the things we’ve written over the past couple of decades will remain relevant for years to come – and remain accessible just as long.

The AnandTech Forums will also continue to be operated by Future’s community team and our dedicated troop of moderators. With forum threads going back to 1999 (and some active members just as long), the forums have a history almost as long and as storied as AnandTech itself (wounded monitor children, anyone?). So even when AnandTech is no longer publishing articles, we’ll still have a place for everyone to talk about the latest in technology – and have those discussions last longer than 48 hours.
Finally, for everyone who still needs their technical writing fix, our formidable opposition of the last 27 years and fellow Future brand, Tom’s Hardware, is continuing to cover the world of technology. There are a couple of familiar AnandTech faces already over there providing their accumulated expertise, and the site will continue doing its best to provide a written take on technology news.
Later this year Intel is set to introduce its Xeon 6-branded processors, codenamed Granite Rapids (6x00P) and Sierra Forest (6x00E), and with them will come a new slew of server motherboards and pre-built server platforms. On the latter note, this will be the first generation where Intel won't be offering any pre-builts of its own, having sold that business off to MiTAC last year.
To that end, MiTAC and its subsidiary Tyan were at this year's event to demonstrate what they've been up to since acquiring Intel's server business unit, as well as to show off the server platforms they're developing for the Xeon 6 family. Altogether, the companies had two server platforms on display – a compact 2S system, and a larger 2S system with significant expansion capabilities – as well as a pair of single-socket designs from Tyan.
The most basic platform that MiTAC had to show is their TX86-E7148 (Katmai Pass), a half-width 1U system that's the successor to Intel's D50DNP platform. Katmai Pass has two CPU sockets, supports up to 2 TB of DDR5-6400 RDIMMs over 16 slots (8 per CPU), and has two low-profile PCIe 5.0 x16 slots. Like its predecessor, this platform is aimed at mainstream servers that do not need a lot of storage or room to house bulky add-in cards like AI accelerators.
The company's other platform is the TX77A-E7142 (Deer Creek Pass), a considerably more serious offering that replaces Intel's M50FCP platform. This board can house up to 4 TB of DDR5-6400 RDIMMs over 32 slots (16 per CPU with 2DPC), four PCIe 5.0 x16 slots, one PCIe 5.0 x8 slot, two OCP 3.0 slots, and 24 hot-swap U.2 bays. Deer Creek Pass can be used for general-purpose workloads, high-performance storage, and workloads that require GPUs or other special-purpose accelerators.
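As a quick sanity check on those memory maximums, the arithmetic below assumes 128 GB DDR5 RDIMMs, which is what the quoted totals imply rather than a figure MiTAC has confirmed:

```python
# Sanity-checking the quoted memory capacities, assuming 128 GB RDIMMs.
DIMM_CAPACITY_GB = 128   # assumption implied by the listed totals

platforms = {
    "Katmai Pass (TX86-E7148)":      {"sockets": 2, "dimms_per_cpu": 8},   # 1DPC across 8 channels
    "Deer Creek Pass (TX77A-E7142)": {"sockets": 2, "dimms_per_cpu": 16},  # 2DPC across 8 channels
}

for name, cfg in platforms.items():
    slots = cfg["sockets"] * cfg["dimms_per_cpu"]
    print(f"{name}: {slots} slots -> {slots * DIMM_CAPACITY_GB // 1024} TB")
# -> 16 slots -> 2 TB and 32 slots -> 4 TB, matching the listed specifications
```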
Meanwhile Tyan had the single-socket Thunder CX GC73A-B5660 on display. That system supports up to 2 TB of DDR5-6400 memory over 16 RDIMMs and offers two PCIe 5.0 x16 slots, one PCIe 4.0 x4 M.2 slot, two OCP 3.0 slots, and 12 hot-swappable U.2 drive bays.
Finally, Tyan's Thunder HX S5662 is an HPC server board specifically designed to house multiple AI accelerators and other large PCIe cards. This board supports one Xeon 6 6700 processor, up to 1 TB of memory over eight DDR5-6400 RDIMMs, and has five traditional PCIe 5.0 x16 slots as well as two PCIe 5.0 x2 M.2 slots for storage.
MiTAC is expected to start shipments of these new Xeon 6 motherboards in the coming months, as Intel rolls out its next-generation datacenter CPUs. Pricing of these platforms is unknown for now, but expect it to be comparable to...
The CXL consortium has had a regular presence at FMS (which rechristened itself from 'Flash Memory Summit' to the 'Future of Memory and Storage' this year). Back at FMS 2022, the consortium had announced v3.0 of the CXL specifications. This was followed by CXL 3.1's introduction at Supercomputing 2023. Having started off as a host-to-device interconnect standard, CXL slowly subsumed other competing standards such as OpenCAPI and Gen-Z. As a result, the specifications started to encompass a wide variety of use-cases by building a protocol on top of the ubiquitous PCIe expansion bus. The CXL consortium comprises heavyweights such as AMD and Intel, as well as a large number of startup companies attempting to play in different segments on the device side. At FMS 2024, CXL had a prime position in the booth demos of many vendors.
The migration of server platforms from DDR4 to DDR5, along with the rise of workloads demanding large RAM capacity (but not particularly sensitive to either memory bandwidth or latency), has made memory expansion modules one of the first classes of widely available CXL devices. Over the last couple of years, we have had product announcements from Samsung and Micron in this area.
At FMS 2024, SK hynix was showing off their DDR5-based CMM-DDR5 CXL memory module with a 128 GB capacity. The company was also detailing their associated Heterogeneous Memory Software Development Kit (HMSDK) - a set of libraries and tools at both the kernel and user levels aimed at increasing the ease of use of CXL memory. This is achieved in part by considering the memory pyramid / hierarchy and relocating the data between the server's main memory (DRAM) and the CXL device based on usage frequency.
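To illustrate what usage-frequency-based placement means in practice, here is a minimal conceptual sketch: hot pages are promoted to local DRAM and cold pages demoted to the CXL expander, which operating systems typically expose as a CPU-less NUMA node. The node IDs, threshold, and function names below are hypothetical; HMSDK's real machinery works at the kernel and library level, not in Python:

```python
# Conceptual sketch of frequency-based page tiering; not HMSDK's actual API.
from collections import Counter

DRAM_NODE, CXL_NODE = 0, 1        # hypothetical NUMA node IDs
HOT_THRESHOLD = 100               # accesses per sampling window (arbitrary)

access_counts: Counter = Counter()    # page_id -> accesses seen this window
placement: dict[int, int] = {}        # page_id -> node it currently lives on

def record_access(page_id: int) -> None:
    """Called (conceptually) whenever a page is touched."""
    access_counts[page_id] += 1

def rebalance() -> None:
    """At the end of each window, promote hot pages and demote cold ones."""
    for page_id, count in access_counts.items():
        target = DRAM_NODE if count >= HOT_THRESHOLD else CXL_NODE
        if placement.get(page_id) != target:
            placement[page_id] = target   # a real tool would migrate the page here
    access_counts.clear()
```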
The CMM-DDR5 CXL memory module comes in the SDFF form-factor (E3.S 2T) with a PCIe 5.0 x8 host interface. The internal memory is based on 1α technology DRAM, and the device promises DDR5-class bandwidth and latency within a single NUMA hop. As these memory modules are meant to be used in datacenters and enterprises, the firmware includes features for RAS (reliability, availability, and serviceability) along with secure boot and other management features.
SK hynix was also demonstrating Niagara 2.0 – a hardware solution (currently based on FPGAs) to enable memory pooling and sharing – i.e., connecting multiple CXL memories so that different hosts (CPUs and GPUs) can optimally share their capacity. The previous version only allowed capacity sharing, but the latest version also enables sharing of data. SK hynix had presented these solutions at CXL DevCon 2024 earlier this year, but some progress seems to have been made in finalizing the specifications of the CMM-DDR5 at FMS 2024.
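The distinction between capacity pooling and data sharing can be sketched conceptually as below; the class and names are purely illustrative and do not reflect Niagara's actual implementation:

```python
# Purely conceptual: pooling carves out exclusive capacity per host,
# while sharing makes the same region visible to several hosts at once.

class CxlPool:
    def __init__(self, total_gb: int):
        self.free_gb = total_gb
        self.private: dict[str, int] = {}       # host -> GB held exclusively (pooling)
        self.shared: dict[str, set[str]] = {}   # segment -> hosts with access (sharing)

    def allocate(self, host: str, gb: int) -> None:
        """Capacity pooling: hand an exclusive slice of the pool to one host."""
        if gb > self.free_gb:
            raise MemoryError("pool exhausted")
        self.free_gb -= gb
        self.private[host] = self.private.get(host, 0) + gb

    def share(self, segment: str, hosts: list[str]) -> None:
        """Data sharing: expose the same segment to several hosts, which is
        the capability the newer revision adds on top of plain pooling."""
        self.shared.setdefault(segment, set()).update(hosts)

pool = CxlPool(total_gb=1024)
pool.allocate("cpu-host-0", 256)   # exclusive slice for a CPU host
pool.allocate("gpu-host-1", 512)   # exclusive slice for a GPU host
pool.share("working-set", ["cpu-host-0", "gpu-host-1"])  # both see the same data
```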
Micron had unveiled the CZ120 CXL Memory Expansion Module last year based on the Microchip SMC 2000 series CXL memory controller. At FMS 2024, Micron and Microchip had a demonstration of the module on a Granite Rapids server.
Additional insights into the SMC 2000 controller were also provided. The CXL memory controller incorporates DRAM die failure handling, and Microchip provides diagnostics and debug tools to analyze failed modules. The controller also supports ECC, which forms part of the enterprise...