PCI-SIG Demonstrates PCIe 6.0 Interoperability at FMS 2024 <p align="center"><a href="https://www.anandtech.com/show/21531/pcisig-demonstrates-pcie-60-interoperability-at-fms-2024"><img src="https://images.anandtech.com/doci/21531/pci-sig-carousel_575px.jpg" alt="" /></a></p><p><p>As the deployment of PCIe 5.0 picks up steam in both datacenter and consumer markets, PCI-SIG is not sitting idle, and is already working on getting the ecosystem ready for the updates to the PCIe specifications. At FMS 2024, some vendors were even talking about PCIe 7.0 with its 128 GT/s capabilities despite PCIe 6.0 not even starting to ship yet. We caught up with PCI-SIG to get an update on its activities and discuss the current state of the PCIe ecosystem.</p>

<p align="center"><a href="https://www.anandtech.com/show/21531/pcisig-demonstrates-pcie-60-interoperability-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21531/pci-sig-roadmap_575px.jpg" /></a></p>

<p>PCI-SIG has already made the PCIe 7.0 specifications (v 0.5) available to its members, and expects full specifications to be officially released sometime in 2025. The goal is to deliver a 128 GT/s data rate with up to 512 GBps of bidirectional traffic using x16 links. Similar to PCIe 6.0, this specification will also utilize PAM4 signaling and maintain backwards compatibility. Power efficiency as well as silicon die area are also being kept in mind as part of the drafting process.</p>
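The headline figures are straightforward to sanity-check. As a rough sketch (ignoring FLIT framing and FEC overhead), the quoted GT/s figure from PCIe 6.0 onward is effectively the raw per-lane bit rate, so peak throughput follows directly:

```python
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int, bidirectional: bool = True) -> float:
    """Rough peak PCIe bandwidth in GB/s, ignoring FLIT/FEC framing overhead.

    From PCIe 6.0 onward the quoted GT/s figure is effectively the raw
    per-lane bit rate (PAM4 carries 2 bits per symbol), so bytes/s is
    simply bits/s divided by 8.
    """
    one_direction = gt_per_s * lanes / 8  # GB/s per direction
    return one_direction * 2 if bidirectional else one_direction

# PCIe 7.0 x16: 128 GT/s per lane
print(pcie_bandwidth_gbps(128, 16))         # 512.0 GB/s bidirectional
print(pcie_bandwidth_gbps(128, 16, False))  # 256.0 GB/s per direction
```

This reproduces the 512 GBps bidirectional figure for a x16 link, and the same arithmetic gives 256 GB/s for PCIe 6.0 x16.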

<p align="center"><a href="https://www.anandtech.com/show/21531/pcisig-demonstrates-pcie-60-interoperability-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21531/pcie-char_575px.jpg" /></a></p>

<p>The move to PAM4 signaling brings higher bit-error rates compared to the previous NRZ scheme. This made it necessary to adopt a different error correction scheme in PCIe 6.0 - instead of operating on variable length packets, PCIe 6.0's Flow Control Unit (FLIT) encoding operates on fixed size packets to aid in forward error correction. PCIe 7.0 retains these aspects.</p>

<p>The integrators list for the PCIe 6.0 compliance program is also expected to come out in 2025, though initial testing is already in progress. This was evident by the FMS 2024 demo involving Cadence's 3nm test chip for its PCIe 6.0 IP offering along with Teledyne Lecroy's PCIe 6.0 analyzer. These timelines track well with the specification completion dates and compliance program availability for previous PCIe generations.</p>

<p align="center"><a href="https://www.anandtech.com/show/21531/pcisig-demonstrates-pcie-60-interoperability-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21531/pcie-cadence_575px.jpg" /></a></p>

<p>We also received an update on the optical workgroup - while being optical-technology agnostic, the WG also intends to develop technology-specific form-factors including pluggable optical transceivers, on-board optics, co-packaged optics, and optical I/O. The logical and electrical layers of the PCIe 6.0 specifications are being enhanced to accommodate the new optical PCIe standardization and this process will also be done with PCIe 7.0 to coincide with that standard's release next year.</p>

<p align="center"><a href="https://www.anandtech.com/show/21531/pcisig-demonstrates-pcie-60-interoperability-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21531/pcie-cabling_575px.jpg" /></a></p>

<p>The PCI-SIG also has ongoing cabling initiatives. On the consumer side, we have seen significant traction for Thunderbolt and external GPU enclosures. However, even datacenters and enterprise systems are moving towards cabling solutions as it becomes evident that disaggregating components such as storage from the CPU and GPU is better for thermal design. Additionally, maintaining signal integrity over longer distances becomes difficult for on-board signal traces. Cabling internal to the computing systems can help here.</p>

<p>OCuLink emerged as a good candidate and was adopted fairly widely as an internal link in server systems. It has even made an appearance in mini-PCs from some Chinese manufacturers in its external avatar for the consumer market, albeit with limited traction. As speeds increase, a widely-adopted standard for external PCIe peripherals (or even connecting components within a system) will become imperative.</p>
</p> Storage
Microchip Demonstrates Flashtec 5016 Enterprise SSD Controller <p align="center"><a href="https://www.anandtech.com/show/21514/microship-demonstrates-flashtec-5016-enterprise-ssd-controller"><img src="https://images.anandtech.com/doci/21514/carousel_575px.jpg" alt="" /></a></p><p><p>Microchip recently announced the availability of their second PCIe Gen 5 enterprise SSD controller - the Flashtec 5016. Like the 4016, this is also a 16-channel controller, but there are some key updates:</p>

<ul>
 <li>PCIe 5.0 lane organization: Operation in x4 or dual independent x2 / x2 mode in the 5016, compared to the x8, or x4, or dual independent x4 / x2 mode in the 4016.</li>
 <li>DRAM support: Four ranks of DDR5-5200 in the 5016, compared to two ranks of DDR4-3200 in the 4016.</li>
 <li>Extended NAND support: 3200 MT/s NAND in the 5016, compared to 2400 MT/s NAND support in the 4016.</li>
 <li>Performance improvements: The 5016 is capable of delivering 3.5M+ random read IOPS compared to the 3M+ of the 4016.</li>
</ul>

<p>Microchip's enterprise SSD controllers provide a high level of flexibility to SSD vendors by providing them with significant horsepower and accelerators. The 5016 includes Cortex-A53 cores for SSD vendors to run custom applications relevant to SSD management. Compared to the Gen4 controllers, the CPU cluster gains two additional cores. The DRAM subsystem includes ECC support (both out-of-band and inline, as desired by the SSD vendor).</p>

<p align="center"><a href="https://www.anandtech.com/show/21514/microship-demonstrates-flashtec-5016-enterprise-ssd-controller"><img alt="" src="https://images.anandtech.com/doci/21514/flashtec-ml_575px.jpg" /></a></p>

<p>At FMS 2024, the company demonstrated an application of the neural network engines embedded in the Gen5 controllers. Controllers usually employ a 'read-retry' operation with altered read-out voltages for flash reads that do not complete successfully. Microchip implemented a machine learning approach to determine the read-out voltage based on the health history of the NAND block using the NN engines in the controller. This approach delivers tangible benefits for read latency and power consumption (thanks to a smaller number of errors on the first read).</p>
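Microchip has not published the details of its model, but the idea can be illustrated with a minimal sketch: predict the read-out voltage offset from the block's health history, and fall back to a conventional retry sweep only if the prediction misses. The linear coefficients and helper names below are made-up placeholders, not silicon-characterized values:

```python
# Illustrative sketch only: Microchip has not disclosed its model. This mimics
# the idea of predicting a read-retry voltage offset from a NAND block's health
# history instead of stepping through a fixed retry table on every failure.
def predict_read_offset_mv(pe_cycles: int, retention_hours: float) -> float:
    """Hypothetical linear model: charge loss grows with wear and retention time."""
    # Coefficients are illustrative placeholders.
    return -(0.002 * pe_cycles + 0.05 * retention_hours)

def read_with_prediction(read_page, offset_mv: float):
    """Try the predicted voltage first; fall back to a fixed retry table."""
    data, ok = read_page(offset_mv)
    if ok:
        return data, 1                   # succeeded on the first attempt
    for step in (-20, 20, -40, 40):      # conventional read-retry sweep
        data, ok = read_page(offset_mv + step)
        if ok:
            return data, 2
    raise IOError("uncorrectable page")
```

When the prediction lands close to the true threshold shift, the read succeeds on the first attempt, which is where the latency and power savings come from.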

<p>The 4016 and 5016 come with a single-chip root of trust implementation for hardware security. A secure boot process with dual-signature authentication ensures that the controller firmware is not maliciously altered in the field. The company also highlighted the advantages of their controller's implementation of SR-IOV, flexible data placement, and zoned namespaces along with their 'credit engine' scheme for multi-tenant cloud workloads. These aspects were also showcased in other demonstrations.</p>

<p>Microchip's press release included quotes from the usual NAND vendors - Solidigm, Kioxia, and Micron. On the customer front, Longsys has been using Flashtec controllers in their enterprise offerings along with YMTC NAND. It is likely that this collaboration will continue further using the new 5016 controller.</p>
</p> Storage
G.Skill Intros Low Latency DDR5 Memory Modules: CL30 at 6400 MT/s <p align="center"><a href="https://www.anandtech.com/show/21528/gskill-intros-low-latency-ddr5-modules-cl30-at-6400-mts"><img src="https://images.anandtech.com/doci/21528/gskill-low-latency-modules-678_575px.jpg" alt="" /></a></p><p><p>G.Skill on Tuesday introduced its ultra-low-latency DDR5-6400 memory modules that feature a CAS latency of 30 clocks, which appears to be the industry's most aggressive timings yet for DDR5-6400 sticks. The modules will be available for both AMD and Intel CPU-based systems.</p>

<p>With every new generation of DDR memory comes an increase in data transfer rates and an increase in relative latencies. While for the vast majority of applications, the increased bandwidth offsets the performance impact of higher timings, there are applications that favor low latencies. However, shrinking latencies is sometimes harder than increasing data transfer rates, which is why low-latency modules are rare.</p>

<p>Nonetheless, G.Skill has apparently managed to cherry-pick enough DDR5 memory chips and build appropriate printed circuit boards to produce DDR5-6400 modules with CL30 timings, which are substantially <a href="https://www.anandtech.com/show/16143/insights-into-ddr5-subtimings-and-latencies">lower than the CL46 timings recommended by JEDEC for this speed bin</a>. This means that while JEDEC-standard modules have an absolute latency of 14.375 ns, G.Skill's modules can boast a latency of just 9.375 ns – an approximately 35% decrease.</p>

<p>G.Skill's DDR5-6400 CL30 39-39-102 modules have a capacity of 16 GB and will be available in 32 GB dual-channel kits, though the company does not disclose voltages, which are likely considerably higher than those standardized by JEDEC.</p>

<p>The company plans to make its DDR5-6400 modules available both for AMD systems with EXPO profiles (Trident Z5 Neo RGB and Trident Z5 Royal Neo) and for Intel-powered PCs with XMP 3.0 profiles (Trident Z5 RGB and Trident Z5 Royal). Because AMD AM5 systems have a practical DDR5 limit of 6000 MT/s – 6400 MT/s (roughly as fast as AMD's Infinity Fabric can operate at a 1:1 ratio), the new modules will be particularly beneficial for AMD's Ryzen 7000 and Ryzen 9000-series processors.</p>

<p>G.Skill notes that since its modules are non-standard, they will not work with all systems but will operate on high-end motherboards with properly cooled CPUs.</p>

<p>The new ultra-low-latency memory kits will be available worldwide from G.Skill's partners starting in late August 2024. The company did not disclose the pricing of these modules, but since we are talking about premium products that boast unique specifications, they are likely to be priced accordingly.</p>
</p> Memory
Sabrent Rocket nano V2 External SSD Review: Phison U18 in a Solid Offering <p align="center"><a href="https://www.anandtech.com/show/21539/sabrent-rocket-nano-v2-external-ssd-review-phison-u18-in-a-solid-offering"><img src="https://images.anandtech.com/doci/21539/carousel_575px.jpg" alt="" /></a></p><p><p>Sabrent's lineup of internal and external SSDs is popular among enthusiasts. The primary reason is the company's tendency to be among the first to market with products based on the latest controllers, while also delivering an excellent value proposition. The company has a long-standing relationship with Phison and adopts its controllers for many of their products. The company's 2 GBps-class portable SSD - the Rocket nano V2 - is based on Phison's U18 native controller. Read on for a detailed look at the Rocket nano V2 External SSD, including an analysis of its performance consistency, power consumption, and thermal profile.</p>
</p> Storage
Rapidus Wants to Offer Fully Automated Packaging for 2nm Fab to Cut Chip Lead Times <p align="center"><a href="https://www.anandtech.com/show/21525/rapidus-2nm-fully-automated-chip-packaging-to-cut-lead-times"><img src="https://images.anandtech.com/doci/21525/intel-foundry-wafer-semiconductor-fab-ifs-678_575px.jpg" alt="" /></a></p><p><p>One of the core challenges that Rapidus will face when it kicks off volume production of chips on its 2nm-class process technology in 2027 is lining up customers. With Intel, Samsung, and TSMC all slated to offer their own 2nm-class nodes by that time, Rapidus will need some kind of advantage to attract customers away from its more established rivals. To that end, the company thinks they've found their edge: fully automated packaging that will allow for shorter chip lead times than manned packaging operations.</p>

<p>In an interview with <a href="https://asia.nikkei.com/Editor-s-Picks/Interview/Japan-s-Rapidus-to-fully-automate-2-nm-chip-fab-president-says">Nikkei</a>, Rapidus' president, Atsuyoshi Koike, outlined the company's vision to use advanced packaging as a competitive edge for the new fab. <a href="https://www.anandtech.com/show/21411/rapidus-adds-chip-packaging-services-to-plans-for-32b-2nm-fab">The Hokkaido facility</a>, which is currently under construction and is expecting to begin equipment installation this December, is already slated to both produce chips and offer advanced packaging services within the same facility, an industry first. But ultimately, Rapidus' biggest plan to differentiate itself is by automating the back-end fab processes (chip packaging) to provide significantly faster turnaround times.</p>

<p>Rapidus is targeting back-end production in particular as, compared to front-end (lithography) production, back-end production still heavily relies on human labor. No other advanced packaging fab has fully automated the process thus far; manual handling provides a degree of flexibility but slows throughput. With automation in place to handle this aspect of chip production, Rapidus would be able to increase chip packaging efficiency and speed, which is crucial as chip assembly tasks become more complex. Rapidus is also collaborating with multiple Japanese suppliers to source materials for back-end production.</p>

<p>"In the past, Japanese chipmakers tried to keep their technology development exclusively in-house, which pushed up development costs and made them less competitive," Koike told Nikkei. "[Rapidus plans to] open up technology that should be standardized, bringing down costs, while handling important technology in-house." </p>

<p>Financially, Rapidus faces a significant challenge, needing a total of ¥5 trillion ($35 billion) by the time mass production starts in 2027. The company estimates that ¥2 trillion will be required by 2025 for prototype production. While the Japanese government has provided ¥920 billion in aid, Rapidus still needs to secure substantial funding from private investors.</p>

<p>Due to its lack of track record and experience in chip production, as well as limited visibility for success, Rapidus is finding it difficult to attract private financing. The company is in discussions with the government to make it easier to raise capital, including potential loan guarantees, and is hopeful that new legislation will assist in this effort.</p>
</p> Semiconductors
Fadu's FC5161 SSD Controller Breaks Cover in Western Digital's PCIe Gen5 Enterprise Drives <p align="center"><a href="https://www.anandtech.com/show/21532/western-digital-uses-fadu-controller-for-pcie-gen5-enterprise-ssds"><img src="https://images.anandtech.com/doci/21532/wdc-sn861-fadu-678_575px.jpg" alt="" /></a></p><p><p>When Western Digital introduced its Ultrastar DC SN861 SSDs earlier this year, the company did not disclose which controller it used for these drives, which made many observers presume that WD was using an in-house controller. But a recent teardown of the drive shows that is not the case; instead, the company is using a controller from Fadu, a South Korean company founded in 2015 that specializes in enterprise-grade turnkey SSD solutions.</p>

<p>The <a href="https://www.westerndigital.com/products/internal-drives/data-center-drives/ultrastar-dc-sn861-ssd?sku=0TS2531">Western Digital Ultrastar DC SN861 SSD</a> is aimed at performance-hungry hyperscale datacenters and enterprise customers which are adopting PCIe Gen5 storage devices these days. And, as uncovered in photos from a <a href="http://www.storagereview.com/review/western-digital-sn861-gen5-ssd-versatile-solutions-for-modern-hyperscale-and-enterprise-needs">recent Storage Review article</a>, the drive is based on <a href="https://www.fadu.io/en/fc5161-gen5/">Fadu's FC5161 NVMe 2.0-compliant controller</a>. The FC5161 utilizes 16 NAND channels supporting an ONFi 5.0 2400 MT/s interface, and features a combination of enterprise-grade capabilities (OCP Cloud Spec 2.0, SR-IOV, up to 512 namespaces for ZNS support, flexible data placement, NVMe-MI 1.2, advanced security, telemetry, power loss protection) not available on other off-the-shelf controllers – or on any previous Western Digital controllers.</p>

<p>The Ultrastar DC SN861 SSD offers sequential read speeds up to 13.7 GB/s as well as sequential write speeds up to 7.5 GB/s. As for random performance, it boasts up to 3.3 million random 4K read IOPS and up to 0.8 million random 4K write IOPS. The drives are available in capacities between 1.6 TB and 7.68 TB with a rating of one or three drive writes per day (DWPD) over five years, as well as in U.2 and E1.S form-factors.</p>

<p>While the two form factors of the SN861 share a similar technical design, Western Digital has tailored each version for distinct workloads: the E1.S supports FDP and performance enhancements specifically for cloud environments. By contrast, the U.2 model is geared towards high-performance enterprise tasks and emerging applications like AI.</p>

<p>Without a doubt, Western Digital's Ultrastar DC SN861 is a feature-rich high-performance enterprise-grade SSD. It has another distinctive feature: a 5W idle power consumption, which is rather low by the standards of enterprise-grade drives (e.g., it is 1W lower compared to the SN840). While the difference with predecessors may be just 1W, hyperscalers deploy thousands of drives, and every watt counts toward their TCO.</p>
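To put the single-watt saving in perspective, here is a back-of-the-envelope sketch; the 10,000-drive fleet size and $0.10/kWh electricity price are illustrative assumptions, not figures from Western Digital:

```python
def annual_power_cost_usd(watts_saved_per_drive: float, drives: int,
                          usd_per_kwh: float = 0.10) -> float:
    """Yearly electricity savings; fleet size and $/kWh are assumed, not quoted."""
    kwh_per_year = watts_saved_per_drive * drives * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

# 1 W idle saving across a hypothetical 10,000-drive fleet at $0.10/kWh
print(round(annual_power_cost_usd(1, 10_000)))  # 8760 (USD per year, before cooling overhead)
```

Since cooling typically adds a further multiplier on top of raw power draw, the effective savings at datacenter scale are larger still.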

<p>Western Digital's Ultrastar DC SN861 SSDs are now available for purchase to select customers (such as Meta) and to interested parties. Prices are unknown, but they will depend on factors such as purchase volume.</p>

<p>Sources: <a href="https://www.fadu.io/en/fc5161-gen5/">Fadu</a>, <a href="https://www.storagereview.com/review/western-digital-sn861-gen5-ssd-versatile-solutions-for-modern-hyperscale-and-enterprise-needs">Storage Review</a></p>
</p> Storage
Kioxia Details BiCS 8 NAND at FMS 2024: 218 Layers With Superior Scaling <p align="center"><a href="https://www.anandtech.com/show/21519/kioxia-details-bics-8-at-fms-2024"><img src="https://images.anandtech.com/doci/21519/bics8-carousel_575px.jpg" alt="" /></a></p><p><p>Kioxia's booth at FMS 2024 was a busy one with multiple technology demonstrations keeping visitors occupied. A walk-through of the BiCS 8 manufacturing process was the first to grab my attention. Kioxia and Western Digital <a href="https://www.kioxia.com/en-jp/business/news/2023/20230330-1.html">announced</a> the sampling of BiCS 8 in March 2023. We had touched briefly upon its CMOS Bonded Array (CBA) scheme in our coverage of Kioxia's <a href="https://www.anandtech.com/show/21464">2Tb QLC NAND device</a> and <a href="https://www.anandtech.com/show/21505">coverage</a> of Western Digital's 128 TB QLC enterprise SSD proof-of-concept demonstration. At Kioxia's booth, we got more insights.</p>

<p align="center"><a href="https://www.anandtech.com/show/21519/kioxia-details-bics-8-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21519/bics8-nor-cua-cba_575px.jpg" /></a></p>

<p>Traditionally, fabrication of flash chips involved placement of the associated logic circuitry (CMOS process) around the periphery of the flash array. The process then moved on to putting the CMOS under the cell array, but the wafer development process was serialized, with the CMOS logic getting fabricated first followed by the cell array on top. However, this has some challenges: the cell array requires a high-temperature processing step to ensure higher reliability, which can be detrimental to the health of the CMOS logic. Thanks to recent advancements in wafer bonding techniques, the new CBA process allows the CMOS wafer and cell array wafer to be processed independently in parallel and then pieced together, as shown in the models above.</p>

<p align="center"><a href="https://www.anandtech.com/show/21519/kioxia-details-bics-8-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21519/bica8-cba-sem_575px.jpg" /></a></p>

<p>The BiCS 8 3D NAND incorporates 218 layers, compared to 112 layers in BiCS 5 and 162 layers in BiCS 6. The company decided to skip over BiCS 7 (or, rather, it was probably a short-lived generation meant as an internal test vehicle). The generation retains the four-plane charge trap structure of BiCS 6. In its TLC avatar, it is available as a 1 Tbit device. The QLC version is available in two capacities - 1 Tbit and 2 Tbit.</p>

<p>Kioxia also noted that while the number of layers (218) doesn't compare favorably with the latest layer counts from the competition, its lateral scaling / cell shrinkage has enabled it to be competitive in terms of bit density as well as operating speeds (3200 MT/s). For reference, the latest shipping NAND from Micron - the <a href="https://www.anandtech.com/show/21492">G9</a> - has 276 layers with a bit density in TLC mode of 21 Gbit/mm<sup>2</sup>, and operates at up to 3600 MT/s. However, its 232L NAND operates only up to 2400 MT/s and has a bit density of 14.6 Gbit/mm<sup>2</sup>.</p>

<p>It must be noted that the CBA hybrid bonding process has advantages over the current processes used by other vendors - including Micron's CMOS under array (CuA) and SK hynix's 4D PUC (periphery-under-chip) developed in the late 2010s. It is expected that other NAND vendors will also move eventually to some variant of the hybrid bonding scheme used by Kioxia.</p>
</p> Storage
Kioxia Demonstrates RAID Offload Scheme for NVMe Drives <p align="center"><a href="https://www.anandtech.com/show/21523/kioxia-demonstrates-raid-offload-scheme-for-nvme-drives"><img src="https://images.anandtech.com/doci/21523/raidoff-carousel_575px.jpg" alt="" /></a></p><p><p>At FMS 2024, Kioxia had a proof-of-concept demonstration of their proposed RAID offload methodology for enterprise SSDs. The impetus for this is quite clear: as SSDs get faster in each generation, RAID arrays have a major problem of maintaining (and scaling up) performance. Even in cases where the RAID operations are handled by a dedicated RAID card, a simple write request in, say, a RAID 5 array would involve two reads and two writes to different drives. In cases where there is no hardware acceleration, the data from the reads needs to travel all the way back to the CPU and main memory for further processing before the writes can be done.</p>

<p align="center"><a href="https://www.anandtech.com/show/21523/kioxia-demonstrates-raid-offload-scheme-for-nvme-drives"><img alt="" src="https://images.anandtech.com/doci/21523/raidoff-mid_575px.png" /></a></p>

<p>Kioxia has proposed the use of the PCIe direct memory access feature along with the SSD controller's controller memory buffer (CMB) to avoid the movement of data up to the CPU and back. The required parity computation is done by an accelerator block resident within the SSD controller.</p>

<p>In Kioxia's PoC implementation, the DMA engine can access the entire host address space (including the peer SSD's BAR-mapped CMB), allowing it to receive and transfer data as required from neighboring SSDs on the bus. Kioxia noted that their offload PoC saw close to 50% reduction in CPU utilization and upwards of 90% reduction in system DRAM utilization compared to software RAID done on the CPU. The proposed offload scheme can also handle scrubbing operations without taking up the host CPU cycles for the parity computation task.</p>
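The parity computation being offloaded is plain XOR. The following minimal sketch shows the RAID 5 read-modify-write update that generates the "two reads, two writes" traffic described above; in Kioxia's scheme this work and the data movement stay inside the SSD controllers rather than passing through the host CPU and DRAM:

```python
import functools
import os

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte-wise (RAID 5 parity)."""
    return bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Read-modify-write parity update: P' = P ^ D_old ^ D_new.

    The two reads (old data, old parity) and two writes (new data, new
    parity) are exactly the traffic a RAID offload keeps off the host.
    """
    return xor_blocks(old_parity, old_data, new_data)

# 3-data-drive stripe: parity covers d0..d2
d = [os.urandom(16) for _ in range(3)]
parity = xor_blocks(*d)

# Update one chunk via the small-write path, then verify against a full recompute
new_d1 = os.urandom(16)
parity = raid5_small_write(d[1], parity, new_d1)
d[1] = new_d1
assert parity == xor_blocks(*d)
```

The same XOR machinery covers the scrubbing case: re-reading a stripe and checking that the parity still XORs to zero, again without host CPU involvement.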

<p>Kioxia has already taken steps to contribute these features to the NVM Express working group. If accepted, the proposed offload scheme will be part of a standard that could become widely available across multiple SSD vendors.</p>
</p> Storage
The Noctua NH-D15 G2 LBC Cooler Review: Notoriously Big, Incredibly Good <p>When you buy a retail computer CPU, it usually comes with a standard cooler. However, most enthusiasts find that the stock cooler just does not cut it in terms of performance. So, they often end up getting a more advanced cooler that better suits their needs. Choosing the right cooler isn't a one-size-fits-all deal – it is a bit of a journey. You have to consider what you need, what you want, your budget, and how much space you have in your setup. All these factors come into play when picking out the perfect cooler.</p>

<p>When it comes to high-performance coolers, Noctua is a name that frequently comes up among enthusiasts. Known for their exceptional build quality and superb cooling performance, Noctua coolers have been a favorite in the PC building community for years. A typical Noctua cooler will be punctuated by incredibly quiet fans and top-notch cooling efficiency overall, which has made them ideal for overclockers and builders who want to keep their systems running cool and quiet.</p>

<p>In this review, we'll be taking a closer look at the NH-D15 G2 cooler, the successor to the legendary NH-D15. This cooler comes with a hefty price tag of $150 but promises to deliver the best performance that an air cooler can currently achieve. The NH-D15 G2 is available in three versions: one standard version as well as two specialized variants – LBC (Low Base Convexity) and HBC (High Base Convexity). These variants are designed to make better contact with specific CPUs; the LBC is recommended for AMD AM5 processors, while the HBC is tailored for Intel LGA1700 processors, mirroring the slightly different geometry of their respective heatspreaders. Conversely, the standard version is a "one size fits all" approach for users who prioritize long-term compatibility over squeezing out every ounce of potential the cooler has.</p>
 Cases/Cooling/PSUs
Western Digital Introduces 4 TB microSDUC, 8 TB SDUC, and 16 TB External SSDs <p align="center"><a href="https://www.anandtech.com/show/21521/western-digital-introduces-4-tb-microsduc-8-tb-sduc-and-16-tb-external-ssds"><img src="https://images.anandtech.com/doci/21521/wd-4-8-16-carousel_575px.jpg" alt="" /></a></p><p><p>Western Digital's BiCS8 218-layer 3D NAND is being put to good use in a wide range of client and enterprise platforms, including WD's upcoming <a href="https://www.anandtech.com/show/21508">Gen 5 client SSDs</a> and <a href="https://www.anandtech.com/show/21505">128 TB-class datacenter SSD</a>. On the external storage front, the company demonstrated four different products: for card-based media, 4 TB microSDUC and 8 TB SDUC cards with UHS-I speeds, and on the portable SSD front we had two 16 TB drives. One will be a SanDisk Desk Drive with external power, and the other in the SanDisk Extreme Pro housing with a lanyard opening in the case.</p>

<p align="center"><a href="https://www.anandtech.com/show/21521/western-digital-introduces-4-tb-microsduc-8-tb-sduc-and-16-tb-external-ssds"><img alt="" src="https://images.anandtech.com/doci/21521/4tb-uduc_575px.jpg" /></a></p>

<p>All of these are using BiCS8 QLC NAND, though I did hear booth talk (as I was taking leave) that they were not supposed to divulge the use of QLC in these products. The 4 TB microSDUC and 8 TB SDUC cards are rated for UHS-I speeds. They are being marketed under the SanDisk Ultra branding.</p>

<p align="center"><a href="https://www.anandtech.com/show/21521/western-digital-introduces-4-tb-microsduc-8-tb-sduc-and-16-tb-external-ssds"><img alt="" src="https://images.anandtech.com/doci/21521/8tb-sduc_575px.jpg" /></a></p>

<p>The SanDisk Desk Drive is an external SSD with an 18W power adapter, and it has been on the market for a few months now. Initially launched in capacities up to 8 TB, Western Digital had promised a 16 TB version before the end of the year. It appears that the product is coming to retail quite soon. One aspect to note is that this drive has used TLC for the SKUs currently on the market, so it appears unlikely that the 16 TB version would be QLC. The units (at least up to the 8 TB capacity point) come with two SN850XE drives. Given the recent <a href="https://www.anandtech.com/show/21472/">introduction of the 8 TB SN850X</a>, an 'E' version with tweaked firmware is likely to be present in the 16 TB Desk Drive.</p>

<p align="center"><a href="https://www.anandtech.com/show/21521/western-digital-introduces-4-tb-microsduc-8-tb-sduc-and-16-tb-external-ssds"><img alt="" src="https://images.anandtech.com/doci/21521/16t-externals_575px.jpg" /></a></p>

<p>The 16 TB portable SSD in the SanDisk Extreme housing was a technology demonstration. It is definitely the highest capacity bus-powered portable SSD demonstrated by any vendor at any trade show thus far. Given the 16 TB Desk Drive's imminent market introduction, it is just a matter of time before the technology demonstration of the bus-powered version becomes a retail reality.</p>
</p>
CXL Gathers Momentum at FMS 2024 <p align="center"><a href="https://www.anandtech.com/show/21533/cxl-gathers-momentum-at-fms-2024"><img src="https://images.anandtech.com/doci/21533/cxl-car-2_575px.jpg" alt="" /></a></p><p><p>The CXL consortium has had a regular presence at FMS (which rechristened itself from 'Flash Memory Summit' to the 'Future of Memory and Storage' this year). Back at FMS 2022, the consortium had <a href="https://www.anandtech.com/show/17520/compute-express-link-cxl-30-announced-doubled-speeds-and-flexible-fabrics">announced</a> v3.0 of the CXL specifications. This was followed by CXL 3.1's <a href="https://www.businesswire.com/news/home/20231114332690/en/CXL-Consortium-Announces-Compute-Express-Link-3.1-Specification-Release">introduction</a> at Supercomputing 2023. Having started off as a host-to-device interconnect standard, it has slowly <a href="https://www.anandtech.com/show/17519/">subsumed other competing standards</a> such as OpenCAPI and Gen-Z. As a result, the specifications have come to encompass a wide variety of use-cases by building a protocol on top of the ubiquitous PCIe expansion bus. The CXL consortium comprises heavyweights such as AMD and Intel, as well as a large number of startup companies attempting to play in different segments on the device side. At FMS 2024, CXL had a prime position in the booth demos of many vendors.</p>

<p align="center"><a href="https://www.anandtech.com/show/21533/cxl-gathers-momentum-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21533/cxl-mem-hier_575px.jpg" /></a></p>

<p>The migration of server platforms from DDR4 to DDR5, along with the rise of workloads demanding large RAM capacity (but not particularly sensitive to either memory bandwidth or latency), has opened up memory expansion modules as one of the first set of widely available CXL devices. Over the last couple of years, we have had product announcements from <a href="https://www.anandtech.com/show/21333">Samsung</a> and <a href="https://www.anandtech.com/show/20003">Micron</a> in this area.</p>

<h3>SK hynix CMM-DDR5 CXL Memory Module and HMSDK</h3>

<p>At FMS 2024, SK hynix was showing off their DDR5-based CMM-DDR5 CXL memory module with a 128 GB capacity. The company was also detailing their associated Heterogeneous Memory Software Development Kit (HMSDK) - a set of libraries and tools at both the kernel and user levels aimed at increasing the ease of use of CXL memory. This is achieved in part by considering the memory pyramid / hierarchy and relocating the data between the server's main memory (DRAM) and the CXL device based on usage frequency.</p>
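<p>As a rough illustration of the usage-frequency idea, tiering can be sketched in a few lines of Python. This is a hypothetical model, not the actual HMSDK API: the class name, the page-granular bookkeeping, and the promotion policy are all illustrative assumptions.</p>

```python
# Hypothetical sketch of frequency-based page tiering between fast
# local DRAM and a slower CXL memory-expansion tier. Names and policy
# are illustrative only, not SK hynix's actual HMSDK interface.

class TieredMemory:
    def __init__(self, dram_capacity_pages: int):
        self.dram_capacity = dram_capacity_pages
        self.access_counts: dict[int, int] = {}  # page id -> hit count
        self.dram_pages: set[int] = set()        # pages resident in DRAM

    def touch(self, page: int) -> None:
        """Record an access to a page, then rebalance the tiers."""
        self.access_counts[page] = self.access_counts.get(page, 0) + 1
        self._rebalance()

    def _rebalance(self) -> None:
        # Keep the most frequently accessed pages in DRAM; everything
        # else stays in (or is demoted to) the CXL memory tier.
        hottest = sorted(self.access_counts,
                         key=self.access_counts.get,
                         reverse=True)[: self.dram_capacity]
        self.dram_pages = set(hottest)

    def tier_of(self, page: int) -> str:
        return "DRAM" if page in self.dram_pages else "CXL"
```

<p>A real kernel-level implementation would sample access frequency (e.g. via page-fault hints or hardware counters) and migrate pages asynchronously rather than rebalancing on every access, but the hot-up/cold-down principle is the same.</p>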

<p align="center"><a href="https://www.anandtech.com/show/21533/cxl-gathers-momentum-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21533/skh-cmm-ddr5_575px.jpg" /></a></p>

<p>The CMM-DDR5 CXL memory module comes in the SDFF form-factor (E3.S 2T) with a PCIe 5.0 x8 host interface. The internal memory is based on 1α technology DRAM, and the device promises DDR5-class bandwidth and latency within a single NUMA hop. As these memory modules are meant to be used in datacenters and enterprises, the firmware includes features for RAS (reliability, availability, and serviceability) along with secure boot and other management features.</p>

<p>SK hynix was also demonstrating Niagara 2.0 - a hardware solution (currently based on FPGAs) to enable memory pooling and sharing - i.e., connecting multiple CXL memories so that different hosts (CPUs and GPUs) can optimally share their capacity. The previous version only allowed capacity sharing, but the latest version also enables data sharing. SK hynix had <a href="https://news.skhynix.com/sk-hynix-presents-ai-memory-solutions-at-cxl-devcon-2024/">presented</a> these solutions at CXL DevCon 2024 earlier this year, but some progress seems to have been made in finalizing the specifications of the CMM-DDR5 at FMS 2024.</p>

<h3>Microchip and Micron Demonstrate CZ120 CXL Memory Expansion Module</h3>

<p>Micron had <a href="https://www.anandtech.com/show/20003/">unveiled</a> the CZ120 CXL Memory Expansion Module last year based on the Microchip SMC 2000 series CXL memory controller. At FMS 2024, Micron and Microchip had a demonstration of the module on a Granite Rapids server.</p>

<p align="center"><a href="https://www.anandtech.com/show/21533/cxl-gathers-momentum-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21533/mchip-micron_575px.jpg" /></a></p>

<p>Additional insights into the SMC 2000 controller were also provided.</p>

<p align="center"><a href="https://www.anandtech.com/show/21533/cxl-gathers-momentum-at-fms-2024"><img alt="" src="https://images.anandtech.com/doci/21533/mchip-sm2000_575px.png" /></a></p>

<p>The CXL memory controller also incorporates DRAM die failure handling, and Microchip provides diagnostics and debug tools to analyze failed modules. The controller additionally supports ECC, which forms part of the enterprise...</p>
G.Skill Intros Low Latency DDR5 Memory Modules: CL30 at 6400 MT/s <p align="center"><a href="https://www.anandtech.com/show/21528/gskill-intros-low-latency-ddr5-modules-cl30-at-6400-mts"><img src="https://images.anandtech.com/doci/21528/gskill-low-latency-modules-678_575px.jpg" alt="" /></a></p><p><p>G.Skill on Tuesday introduced its ultra-low-latency DDR5-6400 memory modules that feature a CAS latency of 30 clocks, which appears to be the industry's most aggressive timings yet for DDR5-6400 sticks. The modules will be available for both AMD and Intel CPU-based systems.</p>

<p>With every new generation of DDR memory comes an increase in data transfer rates and an increase in relative latencies. While for the vast majority of applications the increased bandwidth offsets the performance impact of higher timings, there are applications that favor low latencies. However, shrinking latencies is harder than increasing data transfer rates, which is why low-latency modules are rare.</p>

<p>Nonetheless, G.Skill has apparently managed to cherry-pick enough DDR5 memory chips and build appropriate printed circuit boards to produce DDR5-6400 modules with CL30 timings, which are substantially <a href="https://www.anandtech.com/show/16143/insights-into-ddr5-subtimings-and-latencies">lower than the CL46 timings recommended by JEDEC for this speed bin</a>. This means that while JEDEC-standard modules have an absolute latency of 14.375 ns, G.Skill's modules can boast a latency of just 9.375 ns – an approximately 35% decrease.</p>
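<p>The absolute latency figures follow directly from the CAS latency and the transfer rate; a quick sketch in Python (the function name is ours, the formula is the standard one):</p>

```python
def cas_latency_ns(cl: int, transfer_rate_mts: int) -> float:
    """Absolute CAS latency in nanoseconds.

    DDR transfers data twice per clock, so the memory clock in MHz is
    half the MT/s rate, and one CL cycle lasts 1000 / clock_mhz ns.
    """
    clock_mhz = transfer_rate_mts / 2
    return cl * 1000 / clock_mhz

jedec = cas_latency_ns(46, 6400)    # 14.375 ns (JEDEC CL46)
gskill = cas_latency_ns(30, 6400)   # 9.375 ns (G.Skill CL30)
reduction = 1 - gskill / jedec      # ~0.348, i.e. roughly a 35% decrease
```

<p>The same function shows why latency barely improves across speed bins: a higher transfer rate shortens each clock cycle, but the CL count grows nearly in proportion.</p>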

<p>G.Skill's DDR5-6400 CL30 39-39-102 modules have a capacity of 16 GB and will be available in 32 GB dual-channel kits, though the company does not disclose voltages, which are likely considerably higher than those standardized by JEDEC.</p>

<p>The company plans to make its DDR5-6400 modules available both for AMD systems with EXPO profiles (Trident Z5 Neo RGB and Trident Z5 Royal Neo) and for Intel-powered PCs with XMP 3.0 profiles (Trident Z5 RGB and Trident Z5 Royal). Since AMD AM5 systems have a practical DDR5 limit of 6000 MT/s – 6400 MT/s (roughly as fast as AMD's Infinity Fabric can operate with a 1:1 ratio), the new modules will be particularly beneficial for AMD's Ryzen 7000 and Ryzen 9000-series processors.</p>

<p>G.Skill notes that since its modules are non-standard, they will not work with all systems but will operate on high-end motherboards with properly cooled CPUs.</p>

<p>The new ultra-low-latency memory kits will be available worldwide from G.Skill's partners starting in late August 2024. The company did not disclose the pricing of these modules, but since we are talking about premium products that boast unique specifications, they are likely to be priced accordingly.</p>
</p>