What started last year as a handful of reports about instability with Intel's Raptor Lake desktop chips has, over the last several months, grown into a much larger saga. Facing their biggest client chip instability issue in decades, Intel has been under increasing pressure to figure out the root cause of the problem and fix it, as claims of damaged chips have stacked up and rumors have swirled amidst the silence from Intel. But, at long last, it looks like Intel's latest saga is about to reach its end, as today the company has announced that they've found the cause of the issue, and will be rolling out a microcode fix next month to resolve it.
Officially, Intel has been working to identify the cause of desktop Raptor Lake’s instability issues since at least February of this year, if not sooner. In the interim they have discovered a couple of correlating factors – telling motherboard vendors to stop using ridiculous power settings for their out-of-the-box configurations, and finding a voltage-related bug in Enhanced Thermal Velocity Boost (eTVB) – but neither factor was the smoking gun that set all of this into motion. All of which has left Intel to continue searching for the root cause in private, with plenty of awkward silence filling the gaps in public.
But it looks like Intel’s search has finally come to an end – even if Intel isn’t putting the smoking gun on public display quite yet. According to a fresh update posted to the company’s community website, Intel has determined the root cause at last, and has a fix in the works.
Per the company’s announcement, Intel has tracked down the cause of the instability issue to “elevated operating voltages,” which, at its heart, stems from a flawed algorithm in Intel’s microcode that requested the wrong voltage. Consequently, Intel will be able to resolve the issue through a new microcode update, which, pending validation, is expected to be released in the middle of August.
And while there’s nothing good for Intel about Raptor Lake’s instability issues or the need to fix them, that the problem can be ascribed to (or at least fixed by) microcode is about the best possible outcome the company could hope for. Across the full spectrum of potential causes, microcode is the easiest to fix at scale – microcode updates are already distributed through OS updates, and all chips of a given stepping (millions in all) run the same microcode. Even a motherboard BIOS-related issue would be much harder to fix given the vast number of different boards out there, never mind a true hardware flaw that would require Intel to replace even more chips than they already have.
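For readers who want to confirm whether a given system has actually picked up a newer microcode revision once the fix ships (whether delivered via a BIOS update or through the OS), the currently loaded revision is exposed by the operating system. Below is a minimal sketch for Linux that simply reads the per-core "microcode" field from /proc/cpuinfo; it is a convenience check of our own, not an Intel tool, and the revision number that will correspond to the August fix has not yet been published.

```python
# Minimal sketch: report the CPU microcode revision(s) a Linux system is
# currently running, as exposed by the kernel in /proc/cpuinfo. Useful for
# confirming that a BIOS or OS update has actually delivered a newer
# microcode build. (The revision corresponding to Intel's upcoming fix is
# not yet public, so this only tells you what you are running today.)

def microcode_revisions(path: str = "/proc/cpuinfo") -> set[str]:
    revisions = set()
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("microcode"):
                # lines look like: "microcode\t: 0x129"
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

if __name__ == "__main__":
    print("Installed microcode revision(s):", ", ".join(sorted(microcode_revisions())))
```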
Still, we’d also be remiss if we didn’t note that microcode is regularly used to paper over issues further down in the processor, as we’ve most famously seen with the Meltdown/Spectre fixes several years ago. So while Intel is publicly attributing the issue to microcode bugs, there are several more layers to the onion that is modern CPUs that could be playing a part. In that respect, a microcode fix grants the least amoun...
JEDEC Plans LPDDR6-Based CAMM, DDR5 MRDIMM Specifications

Following a relative lull in the desktop memory industry in the previous decade, the past few years have seen a flurry of new memory standards and form factors enter development. Joining the traditional DIMM/SO-DIMM form factors, we've seen the introduction of space-efficient DDR5 CAMM2s, their LPDDR5-based counterpart the LPCAMM2, and the high-clockspeed optimized CUDIMM. But JEDEC, the industry organization behind these efforts, is not done there. In a press release sent out at the start of the week, the group announced that it is working on standards for DDR5 Multiplexed Rank DIMMs (MRDIMM) for servers, as well as an updated LPCAMM standard to go with next-generation LPDDR6 memory.

Just last week Micron introduced the industry's first DDR5 MRDIMMs, which are timed to launch alongside Intel's Xeon 6 server platforms. But while Intel and its partners are moving full steam ahead on MRDIMMs, the MRDIMM specification has not been fully ratified by JEDEC itself. All told, it's not unusual to see Intel pushing the envelope here on new memory technologies (the company is big enough to bootstrap its own ecosystem). But as MRDIMMs are ultimately meant to be more than just a tool for Intel, a proper industry standard is still needed – even if that takes a bit longer.

Under the hood, MRDIMMs continue to use DDR5 components, form-factor, pinout, SPD, power management ICs (PMICs), and thermal sensors. The major change with the technology is the introduction of multiplexing, which combines multiple data signals over a single channel. The MRDIMM standard also adds RCD/DB logic in a bid to boost performance, increase the capacity of memory modules to up to 256 GB (for now), shrink latencies, and reduce the power consumption of high-end memory subsystems. And, perhaps key to MRDIMM adoption, the standard is being implemented as a backwards-compatible extension to traditional DDR5 RDIMMs, meaning that MRDIMM-capable servers can use either RDIMMs or MRDIMMs, depending on how the operator opts to configure the system.

The MRDIMM standard aims to double the peak bandwidth to 12.8 Gbps, increasing pin speed and supporting more than two ranks. Additionally, a "Tall MRDIMM" form factor is in the works (and pictured above), which is designed to allow for higher-capacity DIMMs by providing more area for laying down memory chips. Currently, ultra-high-capacity DIMMs require using expensive, multi-layer DRAM packages that use through-silicon vias (3DS packaging) to attach the individual DRAM dies; a Tall MRDIMM, on the other hand, can just use a larger number of commodity DRAM chips. Overall, the Tall MRDIMM form factor enables twice the number of DRAM single-die packages on the DIMM.

Meanwhile, this week's announcement from JEDEC offers the first significant insight into what to expect from LPDDR6 CAMMs. And despite LPDDR5 CAMMs having barely made it out the door, some significant shifts with LPDDR6 itself mean that JEDEC will need to make some major changes to the CAMM standard to accommodate the newer memory type.

JEDEC Presentation: The CAMM2 Journey and Future Potential

Besides the higher memory clockspeeds allowed by LPDDR6 – JEDEC is targeting data transfer rates of 14.4 GT/s and higher – the new memory form-factor will also incorporate an altogether new connector array. This is to accommodate LPDDR6's wider memory bus, which sees the channel width of an individual memory chip grow from 16-bits wide to 24-bits wide.
As a result, the current LPCAMM design, which is intended to match the PC standard of a cumulative 128-bit (16x8) design, needs to be reconfigured to match LPDDR6's alterations. Ultimately, JEDEC is targeting a 24-bit subchannel/48-bit channel design, which will result in a 192-bit wide LPCAMM, while the LPCAMM connector itself is set to grow from 14 rows of pins to possibly as many as 20. New memory technologies typically require new DIMMs to begin with, so it's important to clarify that this is not unexpected, but at th...
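Putting the width and data-rate figures above in context, here is a quick back-of-the-envelope check. It uses only the numbers quoted in the announcement, plus a DDR5-6400 baseline for the MRDIMM doubling that is our own inference rather than a figure JEDEC states.

```python
# Back-of-the-envelope check of the bus-width and data-rate figures above.
# These are simple sums and ratios, not official JEDEC calculations.

# Today's LPDDR5-based LPCAMM2: eight 16-bit subchannels
lpcamm2_width = 16 * 8                 # = 128 bits

# LPDDR6 moves to 24-bit subchannels (48-bit channels); JEDEC's target is a
# 192-bit module, which works out to:
lpddr6_subchannels = 192 // 24         # = 8 subchannels
lpddr6_channels = 192 // 48            # = 4 channels

# MRDIMM: multiplexing ranks is meant to double the per-pin data rate to
# 12.8 Gbps, which implies a DDR5-6400 baseline (our inference, not a
# number stated in the announcement).
mrdimm_rate_gbps = 6.4 * 2             # = 12.8 Gbps per pin

print(lpcamm2_width, lpddr6_subchannels, lpddr6_channels, mrdimm_rate_gbps)
```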
AMD Delays Ryzen 9000 Launch 1 to 2 Weeks Due to Chip Quality Issues

AMD sends word this afternoon that the company is delaying the launch of their Ryzen 9000 series desktop processors. The first Zen 5 architecture-based desktop chips were slated to launch next week, on July 31st. But citing quality issues that are significant enough that AMD is even pulling back stock already sent to distributors, AMD is delaying the launch by one to two weeks. The Ryzen 9000 launch will now be a staggered launch, with the Ryzen 5 9600X and Ryzen 7 9700X launching on August 8th, while the Ryzen 9 9900X and flagship Ryzen 9 9950X will launch a week after that, on August 15th.

The exceptional announcement, officially coming from AMD’s SVP and GM of Computing and Graphics, Jack Huynh, is short and to the point. Ahead of the launch, AMD found that “the initial production units that were shipped to our channel partners did not meet our full quality expectations.” And, as a result, the company has needed to delay the launch in order to rectify the issue.

Meanwhile, because AMD had already distributed chips to their channel partners – distributors who then filter down to retailers and system builders – this is technically a recall as well, as AMD needs to pull back the first batch of chips and replace them with known good units. That AMD has to essentially take a do-over on initial chip distribution is ultimately what’s driving this delay; it takes the better part of a month to properly seed retailers for a desktop CPU launch with even modest chip volumes, so AMD has to push the launch out to give their supply chain time to catch up.

For the moment, there are no further details on what the quality issue with the first batch of chips is, how many are affected, or what any kind of fix may entail. Whatever the issue is, AMD is simply taking back all stock and replacing it with what they’re calling “fresh units.”

AMD Ryzen 9000 Series Processors – Zen 5 Microarchitecture (Granite Ridge)
Ryzen 9 9950X: 16C/32T, 4.3 GHz base, 5.7 GHz turbo, 16 MB L2, 64 MB L3, DDR5-5600, 170 W TDP, launching 08/15
Ryzen 9 9900X: 12C/24T, 4.4 GHz base, 5.6 GHz turbo, 12 MB L2, 64 MB L3, DDR5-5600, 120 W TDP, launching 08/15
Ryzen 7 9700X: 8C/16T, 3.8 GHz base, 5.5 GHz turbo, 8 MB L2, 32 MB L3, DDR5-5600, 65 W TDP, launching 08/08
Ryzen 5 9600X: 6C/12T, 3.9 GHz base, 5.4 GHz turbo, 6 MB L2, 32 MB L3, DDR5-5600, 65 W TDP, launching 08/08

Importantly, however, this announcement is only for the Ryzen 9000 desktop processors, and not the Ryzen AI 300 mobile processors (Strix Point), which are still slated to launch next week. A mobile chip recall would be a much bigger issue (they’re in finished devices that would need significant labor to rework), but also, both the new desktop and mobile Ryzen processors are being made on the same TSMC N4 process node, and have significant overlap due to their shared use of the Zen 5 architecture. To be sure, mobile and desktop are very different dies, but it does strongly imply that whatever the issue is, it’s not a design flaw or a fabrication flaw in the silicon itself. That AMD is able to re-stage the launch of the desktop Ryzen 9000 chips so quickly – on the order of a few weeks – further points to an issue much farther down the line.

If indeed the issue isn’t at the silicon level, then that leaves packaging and testing as the next most likely culprit. Whether that means AMD’s packaging partners had some kind of issue assembling the multi-die chips, or if AMD found some other i...
As the deployment of PCIe 5.0 picks up steam in both datacenter and consumer markets, PCI-SIG is not sitting idle, and is already working on getting the ecosystem ready for the next updates to the PCIe specifications. At FMS 2024, some vendors were even talking about PCIe 7.0 with its 128 GT/s capabilities despite PCIe 6.0 not even starting to ship yet. We caught up with PCI-SIG to get some updates on its activities and have a discussion on the current state of the PCIe ecosystem.
PCI-SIG has already made the PCIe 7.0 specifications (v 0.5) available to its members, and expects full specifications to be officially released sometime in 2025. The goal is to deliver a 128 GT/s data rate with up to 512 GB/s of bidirectional traffic using x16 links. Similar to PCIe 6.0, this specification will also utilize PAM4 signaling and maintain backwards compatibility. Power efficiency as well as silicon die area are also being kept in mind as part of the drafting process.
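As a sanity check on those headline numbers, the raw link math works out as follows. This counts signaling rate only and ignores flit, FEC, and protocol overhead, so delivered throughput will be somewhat lower.

```python
# Raw-link sanity check of the PCIe 7.0 targets quoted above:
# 128 GT/s per lane across an x16 link, counting both directions.

data_rate_gt_per_s = 128      # PCIe 7.0 target per lane
lanes = 16                    # x16 link
bits_per_transfer = 1         # one bit per transfer per lane (PAM4 carries
                              # 2 bits per symbol at half the symbol rate)

per_direction_GB_s = data_rate_gt_per_s * lanes * bits_per_transfer / 8
bidirectional_GB_s = per_direction_GB_s * 2

print(per_direction_GB_s)     # 256.0 GB/s each way
print(bidirectional_GB_s)     # 512.0 GB/s aggregate, matching PCI-SIG's figure
```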
The move to PAM4 signaling brings higher bit-error rates compared to the previous NRZ scheme. This made it necessary to adopt a different error correction scheme in PCIe 6.0 - instead of operating on variable length packets, PCIe 6.0's Flow Control Unit (FLIT) encoding operates on fixed size packets to aid in forward error correction. PCIe 7.0 retains these aspects.
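To illustrate why fixed-size units make forward error correction easier to apply, here is a deliberately toy sketch: it frames a byte stream into fixed-size "flits" and protects each one with a simple 3x repetition code. The flit size and the repetition code are purely illustrative assumptions and bear no resemblance to the actual PCIe 6.0 flit layout or FEC scheme; the point is only that FEC blocks can line up one-to-one with fixed-size flits instead of chasing variable-length packet boundaries.

```python
# Toy illustration of flit-based framing: a byte stream is cut into
# fixed-size flits, and each flit is protected by a trivial 3x repetition
# code (bitwise majority vote), a very simple form of forward error
# correction. Sizes and the code are illustrative assumptions only -- this
# is NOT the real PCIe 6.0 flit layout or FEC scheme.

FLIT_PAYLOAD = 8  # bytes of payload per flit (illustrative)

def encode_flits(data: bytes) -> list[bytes]:
    """Split data into fixed-size flits; transmit each payload three times."""
    flits = []
    for i in range(0, len(data), FLIT_PAYLOAD):
        chunk = data[i:i + FLIT_PAYLOAD].ljust(FLIT_PAYLOAD, b"\x00")  # pad the tail
        flits.append(chunk * 3)
    return flits

def decode_flit(flit: bytes) -> bytes:
    """Recover the payload via a bitwise majority vote across the three copies."""
    a = flit[:FLIT_PAYLOAD]
    b = flit[FLIT_PAYLOAD:2 * FLIT_PAYLOAD]
    c = flit[2 * FLIT_PAYLOAD:]
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

if __name__ == "__main__":
    flits = encode_flits(b"fixed-size flits")
    damaged = bytearray(flits[0])
    damaged[3] ^= 0xFF                       # corrupt one copy of one byte
    assert decode_flit(bytes(damaged)) == flits[0][:FLIT_PAYLOAD]  # corrected
```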
The integrators list for the PCIe 6.0 compliance program is also expected to come out in 2025, though initial testing is already in progress. This was evident from the FMS 2024 demo involving Cadence's 3nm test chip for its PCIe 6.0 IP offering along with Teledyne LeCroy's PCIe 6.0 analyzer. These timelines track well with the specification completion dates and compliance program availability for previous PCIe generations.
We also received an update on the optical workgroup - while being optical-technology agnostic, the WG also intends to develop technology-specific form-factors including pluggable optical transceivers, on-board optics, co-packaged optics, and optical I/O. The logical and electrical layers of the PCIe 6.0 specifications are being enhanced to accommodate the new optical PCIe standardization and this process will also be done with PCIe 7.0 to coincide with that standard's release next year.
The PCI-SIG also has ongoing cabling initiatives. On the consumer side, we have seen significant traction for Thunderbolt and external GPU enclosures. However, even datacenters and enterprise systems are moving towards cabling solutions, as it becomes evident that disaggregation of components such as storage from the CPU and GPU is better for thermal design. Additionally, maintaining signal integrity over longer distances becomes difficult for on-board signal traces. Cabling internal to the computing systems can help here.
OCuLink emerged as a good candidate and was adopted fairly widely as an internal link in server systems. It has even made an appearance in mini-PCs from some Chinese manufacturers in its external avatar for the consumer market, albeit with limited traction. As speeds increase, a widely-adopted standard for external PCIe peripherals (or even connecting components within a system) will become imperative.
Western Digital's BiCS8 218-layer 3D NAND is being put to good use in a wide range of client and enterprise platforms, including WD's upcoming Gen 5 client SSDs and 128 TB-class datacenter SSD. On the external storage front, the company demonstrated four different products: for card-based media, 4 TB microSDUC and 8 TB SDUC cards with UHS-I speeds, and on the portable SSD front we had two 16 TB drives. One will be a SanDisk Desk Drive with external power, and the other in the SanDisk Extreme Pro housing with a lanyard opening in the case.
All of these are using BiCS8 QLC NAND, though I did hear booth talk (as I was taking leave) that they were not supposed to divulge the use of QLC in these products. The 4 TB microSDUC and 8 TB SDUC cards are rated for UHS-I speeds. They are being marketed under the SanDisk Ultra branding.
The SanDisk Desk Drive is an external SSD with an 18W power adapter, and it has been on the market for a few months now. Initially launched in capacities up to 8 TB, Western Digital had promised a 16 TB version before the end of the year. It appears that the product is coming to retail quite soon. One aspect to note is that this drive has been using TLC for the SKUs that are currently in the market, so it appears unlikely that the 16 TB version would be QLC. The units (at least up to the 8 TB capacity point) come with two SN850XE drives. Given the recent introduction of the 8 TB SN850X, an 'E' version with tweaked firmware is likely to be present in the 16 TB Desk Drive.
The 16 TB portable SSD in the SanDisk Extreme housing was a technology demonstration. It is definitely the highest capacity bus-powered portable SSD demonstrated by any vendor at any trade show thus far. Given the 16 TB Desk Drive's imminent market introduction, it is just a matter of time before the technology demonstration of the bus-powered version becomes a retail reality.
Intel has divested its entire stake in Arm Holdings during the second quarter, raising approximately $147 million. Alongside this, Intel sold its stake in cybersecurity firm ZeroFox and reduced its holdings in Astera Labs, all as part of a broader effort to manage costs and recover cash amid significant financial challenges.
The sale of Intel's 1.18 million shares in Arm Holdings, as reported in a recent SEC filing, comes at a time when the company is struggling with substantial financial losses. Despite the $147 million generated from the sale, Intel reported a $120 million net loss on its equity investments for the quarter, which is a part of a larger $1.6 billion loss that Intel faced during this period.
In addition to selling its stake in Arm, Intel also exited its investment in ZeroFox and reduced its involvement with Astera Labs, a company known for developing connectivity platforms for enterprise hardware. These moves are in line with Intel's strategy to reduce costs and stabilize its financial position as it faces ongoing market challenges.
Despite the divestment, Intel's past investment in Arm was likely driven by strategic considerations. Arm Holdings is a significant force in the semiconductor industry, with its designs powering most mobile devices, and, for obvious reasons, Intel would like to address those markets as well. Intel and Arm are also collaborating on datacenter platforms tailored for Intel's 18A process technology. Additionally, Arm might view Intel as a potential licensee for its technologies and a valuable partner for other companies that license Arm's designs.
Intel's investment in Astera Labs was also a strategic one, as the company likely wanted to secure a steady supply of smart retimers, smart cable modules, and CXL memory controllers, which are used in volume in datacenters – and Intel is certainly interested in selling as many datacenter CPUs as possible.
Intel's financial struggles were highlighted earlier this month when the company released a disappointing earnings report, which led to a 33% drop in its stock value, erasing billions of dollars of market capitalization. To counter these difficulties, Intel announced plans to cut 15,000 jobs and implement other expense reductions. The company has also suspended its dividend, signaling the depth of its efforts to conserve cash and focus on recovery. When it comes to the divestment of its Arm stock, the need for immediate financial stabilization has presumably taken precedence over longer-term strategic considerations.
It is with great sadness that I find myself penning the hardest news post I’ve ever needed to write here at AnandTech. After over 27 years of covering the wide – and wild – world of computing hardware, today is AnandTech’s final day of publication.
For better or worse, we’ve reached the end of a long journey – one that started with a review of an AMD processor, and has ended with the review of an AMD processor. It’s fittingly poetic, but it is also a testament to the fact that we’ve spent the last 27 years doing what we love, covering the chips that are the lifeblood of the computing industry.
A lot of things have changed in the last quarter-century – in 1997 NVIDIA had yet to even coin the term “GPU” – and we’ve been fortunate to watch the world of hardware continue to evolve over the time period. We’ve gone from boxy desktop computers and laptops that today we’d charitably classify as portable desktops, to pocket computers where even the cheapest budget device puts the fastest PC of 1997 to shame.
The years have also brought some monumental changes to the world of publishing. AnandTech was hardly the first hardware enthusiast website, nor will we be the last. But we were fortunate to thrive in the past couple of decades, when so many of our peers did not, thanks to a combination of hard work, strategic investments in people and products, even more hard work, and the support of our many friends, colleagues, and readers.
Still, few things last forever, and the market for written tech journalism is not what it once was – nor will it ever be again. So, the time has come for AnandTech to wrap up its work, and let the next generation of tech journalists take their place within the zeitgeist.
It has been my immense privilege to write for AnandTech for the past 19 years – and to manage it as its editor-in-chief for the past decade. And while I carry more than a bit of remorse in being AnandTech’s final boss, I can at least take pride in everything we’ve accomplished over the years, whether it’s lauding some legendary products, writing technology primers that still remain relevant today, or watching new stars rise in unexpected places. There is still more that I had wanted AnandTech to do, but after 21,500 articles, this was a good start.
And while the AnandTech staff is riding off into the sunset, I am happy to report that the site itself won’t be going anywhere for a while. Our publisher, Future PLC, will be keeping the AnandTech website and its many articles live indefinitely, so that all of the content we’ve created over the years remains accessible and citable. Even without new articles to add to the collection, I expect that many of the things we’ve written over the past couple of decades will remain relevant for years to come – and remain accessible just as long.
The AnandTech Forums will also continue to be operated by Future’s community team and our dedicated troop of moderators. With forum threads going back to 1999 (and some active members just as long), the forums have a history almost as long and as storied as AnandTech itself (wounded monitor children, anyone?). So even when AnandTech is no longer publishing articles, we’ll still have a place for everyone to talk about the latest in technology – and have those discussions last longer than 48 hours.
Finally, for everyone who still needs their technical writing fix, our formidable opposition of the last 27 years and fellow Future brand, Tom’s Hardware, is continuing to cover the world of technology. There are a couple of familiar AnandTech faces already over there providing their accumulated expertise, and the site will continue doing its best to provide a written take on technology news.
As I look back on everything AnandTech has accomplished over the past 27 years, there are more than a few people, groups, and companies that I would like to thank on behalf of both myself and AnandTech as a whole.
First and foremost, I cannot thank enough all the editors who have worked for AnandTech over the years. T...
Microchip recently announced the availability of their second PCIe Gen 5 enterprise SSD controller - the Flashtec 5016. Like the 4016, this is also a 16-channel controller, but there are some key updates.
Microchip's enterprise SSD controllers provide a high level of flexibility to SSD vendors by providing them with significant horsepower and accelerators. The 5016 includes Cortex-A53 cores for SSD vendors to run custom applications relevant to SSD management. However, compared to the Gen4 controllers, there are two additional cores in the CPU cluster. The DRAM subsystem includes ECC support (both out-of-band and inline, as desired by the SSD vendor).
At FMS 2024, the company demonstrated an application of the neural network engines embedded in the Gen5 controllers. Controllers usually employ a 'read-retry' operation with altered read-out voltages for flash reads that do not complete successfully. Microchip implemented a machine learning approach to determine the read-out voltage based on the health history of the NAND block using the NN engines in the controller. This approach delivers tangible benefits for read latency and power consumption (thanks to a smaller number of errors on the first read).
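Conceptually, the approach replaces the first step of a blind read-retry walk with a model-predicted starting voltage. The sketch below is a loose illustration of that idea only: the feature names, weights, offset range, and retry loop are invented for the example and are not Microchip's implementation, which runs on the controller's embedded NN engines rather than host-side Python.

```python
# Loose illustration of model-assisted read-retry: predict a starting
# read-out voltage offset from a block's health history, and only fall back
# to conventional voltage stepping if the first read still fails ECC.
# Features, weights, and the offset range are invented for this sketch.

from dataclasses import dataclass

@dataclass
class BlockHealth:
    pe_cycles: int          # program/erase cycles the block has seen
    retention_hours: float  # time since the data was programmed
    read_disturbs: int      # reads issued since the last program

def predict_vref_offset(h: BlockHealth) -> int:
    # Stand-in "model": a hand-tuned linear combination of health features.
    # A real controller would use weights learned from NAND characterization
    # data and evaluate them on its embedded NN engine.
    score = 0.004 * h.pe_cycles + 0.01 * h.retention_hours + 0.0005 * h.read_disturbs
    return min(int(score), 31)  # clamp to an assumed 5-bit offset range

def read_page(read_fn, health: BlockHealth, max_retries: int = 8) -> bytes:
    """Try the predicted offset first, then step conventionally on failure."""
    offset = predict_vref_offset(health)
    for _ in range(max_retries):
        data, ecc_ok = read_fn(vref_offset=offset)
        if ecc_ok:
            return data
        offset += 1             # classic read-retry stepping as a fallback
    raise IOError("uncorrectable read after retries")
```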
The 4016 and 5016 come with a single-chip root of trust implementation for hardware security. A secure boot process with dual-signature authentication ensures that the controller firmware is not maliciously altered in the field. The company also brought out the advantages of their controller's implementation of SR-IOV, flexible data placement, and zoned namespaces along with their 'credit engine' scheme for multi-tenant cloud workloads. These aspects were also brought out in other demonstrations.
Microchip's press release included quotes from the usual NAND vendors - Solidigm, Kioxia, and Micron. On the customer front, Longsys has been using Flashtec controllers in their enterprise offerings along with YMTC NAND. It is likely that this collaboration will continue further using the new 5016 controller.
When Western Digital introduced its Ultrastar DC SN861 SSDs earlier this year, the company did not disclose which controller it used for these drives, which made many observers presume that WD was using an in-house controller. But a recent teardown of the drive shows that is not the case; instead, the company is using a controller from Fadu, a South Korean company founded in 2015 that specializes in enterprise-grade turnkey SSD solutions.
The Western Digital Ultrastar DC SN861 SSD is aimed at performance-hungry hyperscale datacenters and enterprise customers that are adopting PCIe Gen5 storage devices these days. And, as uncovered in photos from a recent Storage Review article, the drive is based on Fadu's FC5161 NVMe 2.0-compliant controller. The FC5161 utilizes 16 NAND channels supporting an ONFi 5.0 2400 MT/s interface, and features a combination of enterprise-grade capabilities (OCP Cloud Spec 2.0, SR-IOV, up to 512 namespaces for ZNS support, flexible data placement, NVMe-MI 1.2, advanced security, telemetry, power loss protection) not available on other off-the-shelf controllers – or on any previous Western Digital controllers.
The Ultrastar DC SN861 SSD offers sequential read speeds of up to 13.7 GB/s as well as sequential write speeds of up to 7.5 GB/s. As for random performance, it boasts up to 3.3 million random 4K read IOPS and up to 0.8 million random 4K write IOPS. The drives are available in capacities between 1.6 TB and 7.68 TB, with endurance ratings of one or three drive writes per day (DWPD) over five years, and in U.2 and E1.S form factors.
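For reference, those endurance ratings translate into total write volumes roughly as follows, assuming the conventional DWPD definition of one full drive capacity written per day over the warranty period.

```python
# Converting the SN861's DWPD ratings into approximate total bytes written,
# using the usual definition: full drive capacity written once per day for
# the length of the warranty period.

capacity_tb = 7.68        # top capacity point
warranty_years = 5
days = warranty_years * 365

for dwpd in (1, 3):
    petabytes_written = capacity_tb * dwpd * days / 1000
    print(f"{dwpd} DWPD on {capacity_tb} TB ≈ {petabytes_written:.1f} PB over {warranty_years} years")
# 1 DWPD -> ~14.0 PB written; 3 DWPD -> ~42.0 PB written
```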
While the two form factors of the SN861 share a similar technical design, Western Digital has tailored each version for distinct workloads: the E1.S supports FDP and performance enhancements specifically for cloud environments. By contrast, the U.2 model is geared towards high-performance enterprise tasks and emerging applications like AI.
Without a doubt, Western Digital's Ultrastar DC SN861 is a feature-rich, high-performance enterprise-grade SSD. It has another distinctive feature: a 5W idle power consumption, which is rather low by the standards of enterprise-grade drives (e.g., it is 1W lower than the SN840's). While the difference from its predecessor may be just 1W, hyperscalers deploy thousands of drives, and for their TCO every watt counts.
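To make the "every watt counts" point concrete, here is a rough illustration. The fleet size and electricity price below are assumptions chosen purely for scale; they are not figures from Western Digital or any hyperscaler.

```python
# Rough scale of a 1 W idle-power saving across a large drive fleet.
# Fleet size and electricity price are assumptions for illustration only.

drives = 100_000              # assumed fleet size
delta_watts = 1.0             # idle-power saving per drive vs. the SN840
hours_per_year = 24 * 365
usd_per_kwh = 0.10            # assumed electricity price

kwh_saved_per_year = drives * delta_watts * hours_per_year / 1000
print(f"{kwh_saved_per_year:,.0f} kWh/year ≈ ${kwh_saved_per_year * usd_per_kwh:,.0f}/year")
# 876,000 kWh/year ≈ $87,600/year, before any cooling overhead
```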
Western Digital's Ultrastar DC SN861 SSDs are now available for purchase to select customers (such as Meta) and to interested parties. Prices are unknown, but they will depend on factors such as purchase volumes.
Sources: Fadu, Storage Review
Kioxia's booth at FMS 2024 was a busy one, with multiple technology demonstrations keeping visitors occupied. A walk-through of the BiCS 8 manufacturing process was the first to grab my attention. Kioxia and Western Digital announced the sampling of BiCS 8 in March 2023. We had touched briefly upon its CMOS Bonded Array (CBA) scheme in our coverage of Kioxia's 2Tb QLC NAND device and coverage of Western Digital's 128 TB QLC enterprise SSD proof-of-concept demonstration. At Kioxia's booth, we got more insights.
Traditionally, fabrication of flash chips involved placement of the associated logic circuitry (CMOS process) around the periphery of the flash array. The process then moved on to putting the CMOS under the cell array, but the wafer development process was serialized, with the CMOS logic getting fabricated first, followed by the cell array on top. However, this has some challenges, because the cell array requires a high-temperature processing step to ensure higher reliability, and that step can be detrimental to the health of the CMOS logic. Thanks to recent advancements in wafer bonding techniques, the new CBA process allows the CMOS wafer and cell array wafer to be processed independently in parallel and then pieced together, as shown in the models above.
The BiCS 8 3D NAND incorporates 218 layers, compared to 112 layers in BiCS 5 and 162 layers in BiCS 6. The company decided to skip over BiCS 7 (or, rather, it was probably a short-lived generation meant as an internal test vehicle). The generation retains the four-plane charge trap structure of BiCS 6. In its TLC avatar, it is available as a 1 Tbit device. The QLC version is available in two capacities - 1 Tbit and 2 Tbit.
Kioxia also noted that while the number of layers (218) doesn't compare favorably with the latest layer counts from the competition, its lateral scaling / cell shrinkage has enabled it to be competitive in terms of bit density as well as operating speeds (3200 MT/s). For reference, the latest shipping NAND from Micron - the G9 - has 276 layers with a bit density in TLC mode of 21 Gbit/mm2, and operates at up to 3600 MT/s. However, Micron's previous-generation 232L NAND operates only at up to 2400 MT/s and has a bit density of 14.6 Gbit/mm2.
It must be noted that the CBA hybrid bonding process has advantages over the current processes used by other vendors - including Micron's CMOS under array (CuA) and SK hynix's 4D PUC (periphery-under-chip) developed in the late 2010s. It is expected that other NAND vendors will also move eventually to some variant of the hybrid bonding scheme used by Kioxia.