Samsung has successfully validated its new LPDDR5X-10700 memory with MediaTek's upcoming Dimensity platform. At present, 10.7 GT/s is the highest performing speed grade of LPDDR5X DRAM slated to be released this year, so the upcoming Dimensity 9400 system-on-chip will get the highest memory bandwidth available for a mobile application processor.
The verification process involved Samsung's 16 GB LPDDR5X package and MediaTek's soon-to-be-announced Dimensity 9400 SoC for high-end 5G smartphones. Usage of LPDDR5X-10700 provides a memory bandwidth of 85.6 GB/second over a 64-bit interface, which will be available for bandwidth-hungry applications like graphics and generative AI.
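The quoted bandwidth figure follows directly from the transfer rate and the interface width. A quick back-of-the-envelope check (a sketch of the arithmetic, not vendor code):

```python
# Peak theoretical bandwidth of a 64-bit LPDDR5X-10700 interface.
transfer_rate = 10.7e9       # transfers per second (10.7 GT/s)
bus_width_bits = 64          # width of the memory interface
bytes_per_transfer = bus_width_bits / 8

bandwidth_gbs = transfer_rate * bytes_per_transfer / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # 85.6 GB/s
```

This is the peak theoretical figure; sustained bandwidth in real workloads will be lower due to refresh, command overhead, and access patterns.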
"Working together with Samsung Electronics has made it possible for MediaTek's next-generation Dimensity chipset to become the world's first to be validated at LPDDR5X operating speeds up to 10.7Gbps, enabling upcoming devices to deliver AI functionality and mobile performance at a level we have never seen before," said JC Hsu, Corporate Senior Vice President at MediaTek. "This updated architecture will make it easier for developers and users to leverage more AI capabilities and take advantage of more features with less impact on battery life."
Samsung's LPDDR5X 10.7 GT/s memory is made on the company's 12nm-class DRAM process technology and is said to provide a more than 25% improvement in power efficiency over previous-generation LPDDR5X, in addition to the extra performance. This should translate into an improved user experience, including enhanced on-device AI capabilities such as faster voice-to-text conversion and better-quality graphics.
Overall, the two companies completed this process in just three months. However, it remains to be seen when smartphones based on the Dimensity 9400 application processor and LPDDR5X memory will be available on the market, as MediaTek has not yet even formally announced the SoC itself.
"Through our strategic cooperation with MediaTek, Samsung has verified the industry's fastest LPDDR5X DRAM that is poised to lead the AI smartphone market," said YongCheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "Samsung will continue to innovate through active collaboration with customers and provide optimum solutions for the on-device AI era."
SK hynix to Enter 60 TB SSD Club Next Quarter

SK hynix this week reported its financial results for the second quarter, as well as offering a glimpse at its plans for the coming quarters. Notable among the company's plans for the year is the release of an SK hynix-branded 60 TB SSD, which will mark the firm's entry into the ultra-premium enterprise SSD league. "SK hynix plans to expand sales of high-capacity eSSD and lead the market in the second half with 60TB products, expecting eSSD sales to be more than quadrupled compared to last year," a statement by SK hynix reads.

Currently there are only two standard form-factor 61.44 TB SSDs on the market: the Solidigm D5-P5336 (U.2/15mm and E1.L) and the Samsung BM1743 (U.2/15mm and E3.S). Both are built around proprietary controllers (Solidigm's controller still carries an Intel logotype) with a PCIe 4.0 x4 interface, and both use QLC NAND for storage.

SK hynix's brief mention of the drive means that there aren't any formal specifications or capabilities to discuss just yet. But it is reasonable to assume that the company will use its own QLC memory for its ultra-high-capacity drives. What's more intriguing is which controller the company plans to use and how it is going to position its 60 TB-class SSD. Internally, SK hynix has access to two controller teams, both of which have the expertise to develop an enterprise-grade controller suitable for a 60 TB drive. SK hynix owns Solidigm, the former Intel SSD and NAND unit, giving it the option of using Solidigm's controller, or even reselling a rebadged D5-P5336 outright. Alternatively, SK hynix has its own internal SSD team, which is responsible for its well-received Aries SSD controller, among other works.

Ultra-high-capacity SSDs for performance-demanding, read-intensive storage applications, such as AI inference on the edge or content delivery networks, represent a promising premium market. So SK hynix finds itself highly incentivized to enter it with a compelling offering.
Intel Addresses Desktop Raptor Lake Instability Issues: Faults Excessive Voltage from Microcode, Fix Coming in August

What started last year as a handful of reports about instability with Intel's Raptor Lake desktop chips has, over the last several months, grown into a much larger saga. Facing their biggest client chip instability issue in decades, Intel has been under increasing pressure to figure out the root cause of the problem and fix it, as claims of damaged chips have stacked up and rumors have swirled amidst the silence from Intel. But, at long last, it looks like this saga is about to reach its end, as today the company announced that it has found the cause of the issue and will be rolling out a microcode fix next month to resolve it.

Officially, Intel has been working to identify the cause of desktop Raptor Lake’s instability issues since at least February of this year, if not sooner. In the interim the company discovered a couple of correlating factors – telling motherboard vendors to stop using excessive power settings in their out-of-the-box configurations, and finding a voltage-related bug in Enhanced Thermal Velocity Boost (eTVB) – but neither was the smoking gun that set all of this into motion. All of which had left Intel to continue searching for the root cause in private, with lots of awkward silence to fill the gaps in public.

But it looks like Intel’s search has finally come to an end – even if Intel isn’t putting the smoking gun on public display quite yet. According to a fresh update posted to the company’s community website, Intel has determined the root cause at last and has a fix in the works. Per the company’s announcement, Intel has tracked the instability down to “elevated operating voltages” that, at their heart, stem from a flawed algorithm in Intel’s microcode that requested the wrong voltage.
Consequently, Intel will be able to resolve the issue through a new microcode update, which, pending validation, is expected to be released in mid-August.

“Based on extensive analysis of Intel Core 13th/14th Gen desktop processors returned to us due to instability issues, we have determined that elevated operating voltage is causing instability issues in some 13th/14th Gen desktop processors. Our analysis of returned processors confirms that the elevated operating voltage is stemming from a microcode algorithm resulting in incorrect voltage requests to the processor.

Intel is delivering a microcode patch which addresses the root cause of exposure to elevated voltages. We are continuing validation to ensure that scenarios of instability reported to Intel regarding its Core 13th/14th Gen desktop processors are addressed. Intel is currently targeting mid-August for patch release to partners following full validation.

Intel is committed to making this right with our customers, and we continue asking any customers currently experiencing instability issues on their Intel Core 13th/14th Gen desktop processors to reach out to Intel Customer Support for further assistance.” – Intel Community Post

And while there’s nothing good for Intel about Raptor Lake’s instability issues or the need to fix them, the fact that the problem can be ascribed to (or at least fixed by) microcode is about the best possible outcome the company could hope for. Across the full spectrum of potential causes, microcode is the easiest to fix at scale: microcode updates are already distributed through OS updates, and all chips of a given stepping (millions in all) run the same microcode. Even a motherboard BIOS-related issue would be much harder to fix given the vast number of different boards out there, never mind a true hardware flaw that would require Intel to replace even more chips than they already have.
Still, we’d also be remiss if we didn’t note that microcode is regularly used to paper over issues further down in the processor, as we’ve most famously seen with the Meltdown/Spectre fixes several years ago. So while Intel is publicly attributing the issue to microcode bugs, there are several more layers to the onion that is modern CPUs that could be playing a part. In that respect, a microcode fix grants the least amoun...
G.Skill on Tuesday introduced its ultra-low-latency DDR5-6400 memory modules featuring a CAS latency of 30 clocks, which appears to be the industry's most aggressive timing yet for DDR5-6400 sticks. The modules will be available for both AMD and Intel CPU-based systems.
With every new generation of DDR memory comes an increase in data transfer rates and an increase in relative latencies. While for the vast majority of applications the added bandwidth offsets the performance impact of higher timings, some applications favor low latencies. However, shrinking latencies is often harder than increasing data transfer rates, which is why low-latency modules are rare.
Nonetheless, G.Skill has apparently managed to cherry-pick enough DDR5 memory chips and build appropriate printed circuit boards to produce DDR5-6400 modules with CL30 timings, which are substantially lower than the CL46 timings recommended by JEDEC for this speed bin. This means that while JEDEC-standard modules have an absolute latency of 14.375 ns, G.Skill's modules can boast a latency of just 9.375 ns – an approximately 35% decrease.
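The absolute-latency figures quoted above can be reproduced from the CAS latency and the data rate: DDR memory transfers data twice per clock, so the memory clock runs at half the MT/s figure. A quick sketch of the arithmetic:

```python
def cas_latency_ns(cl: int, data_rate_mts: float) -> float:
    """Absolute CAS latency in nanoseconds.

    The memory clock is half the DDR data rate, so one clock period
    is 2000 / data_rate nanoseconds when the rate is given in MT/s.
    """
    return cl * 2000 / data_rate_mts

jedec = cas_latency_ns(46, 6400)   # 14.375 ns (JEDEC CL46)
gskill = cas_latency_ns(30, 6400)  # 9.375 ns  (G.Skill CL30)
print(f"{1 - gskill / jedec:.1%} lower")  # 34.8% lower, i.e. ~35%
```

The same formula explains why, say, DDR5-6000 CL30 (10 ns) and DDR4-3200 CL16 (10 ns) land at identical absolute latencies despite very different clock counts.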
G.Skill's DDR5-6400 modules with CL30-39-39-102 timings have a capacity of 16 GB and will be available in 32 GB dual-channel kits, though the company does not disclose operating voltages, which are likely considerably higher than those standardized by JEDEC.
The company plans to make its DDR5-6400 modules available both for AMD systems with EXPO profiles (Trident Z5 Neo RGB and Trident Z5 Royal Neo) and for Intel-powered PCs with XMP 3.0 profiles (Trident Z5 RGB and Trident Z5 Royal). The new modules will be particularly beneficial for AMD's Ryzen 7000 and Ryzen 9000-series processors, as AM5 systems have a practical DDR5 limit of 6000 MT/s – 6400 MT/s (roughly as fast as AMD's Infinity Fabric can operate while maintaining a 1:1 ratio).
G.Skill notes that since its modules are non-standard, they will not work with all systems but will operate on high-end motherboards with properly cooled CPUs.
The new ultra-low-latency memory kits will be available worldwide from G.Skill's partners starting in late August 2024. The company did not disclose the pricing of these modules, but since we are talking about premium products that boast unique specifications, they are likely to be priced accordingly.
Kioxia's booth at FMS 2024 was a busy one, with multiple technology demonstrations keeping visitors occupied. A walk-through of the BiCS 8 manufacturing process was the first to grab my attention. Kioxia and Western Digital announced the sampling of BiCS 8 in March 2023. We had touched briefly upon its CMOS Bonded Array (CBA) scheme in our coverage of Kioxia's 2 Tb QLC NAND device and of Western Digital's 128 TB QLC enterprise SSD proof-of-concept demonstration. At Kioxia's booth, we got more insights.
Traditionally, fabrication of flash chips involved placing the associated logic circuitry (built on a CMOS process) around the periphery of the flash array. The industry then moved on to putting the CMOS under the cell array, but the wafer development process was serialized, with the CMOS logic fabricated first and the cell array built on top. This approach has its challenges, as the cell array requires a high-temperature processing step to ensure higher reliability, which can be detrimental to the health of the CMOS logic. Thanks to recent advancements in wafer bonding techniques, the new CBA process allows the CMOS wafer and the cell-array wafer to be processed independently in parallel and then bonded together, as shown in the models above.
The BiCS 8 3D NAND incorporates 218 layers, compared to 112 layers in BiCS 5 and 162 layers in BiCS 6. The company decided to skip over BiCS 7 (or, rather, it was probably a short-lived generation meant as an internal test vehicle). The generation retains the four-plane charge trap structure of BiCS 6. In its TLC avatar, it is available as a 1 Tbit device. The QLC version is available in two capacities - 1 Tbit and 2 Tbit.
Kioxia also noted that while the number of layers (218) doesn't compare favorably with the latest layer counts from the competition, its lateral scaling / cell shrinkage has enabled it to be competitive in terms of bit density as well as operating speeds (3200 MT/s). For reference, the latest shipping NAND from Micron - the G9 - has 276 layers with a bit density in TLC mode of 21 Gbit/mm2, and operates at up to 3600 MT/s. However, its 232L NAND operates only up to 2400 MT/s and has a bit density of 14.6 Gbit/mm2.
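One rough way to see the lateral-scaling point is to normalize bit density by layer count. The sketch below uses only the Micron figures quoted above (Kioxia does not publish a directly comparable density number here), and the per-layer metric is an illustrative simplification that ignores array efficiency and other die-level factors:

```python
# Bit density per layer (Gbit/mm^2 per layer) as a crude lateral-scaling metric.
# Figures are Micron's, as quoted above, both for TLC mode.
g9_per_layer = 21.0 / 276      # Micron G9: 276 layers, 21 Gbit/mm^2
l232_per_layer = 14.6 / 232    # Micron 232L: 14.6 Gbit/mm^2

print(f"G9:   {g9_per_layer:.4f} Gbit/mm^2 per layer")
print(f"232L: {l232_per_layer:.4f} Gbit/mm^2 per layer")
# G9 packs roughly 21% more bits per unit area per layer, showing that
# density gains come from lateral shrinks as well as from stacking more layers.
```

This is the same lever Kioxia says it is pulling with BiCS 8: competitive bit density at 218 layers through cell shrinkage rather than layer count alone.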
It must be noted that the CBA hybrid bonding process has advantages over the approaches currently used by other vendors, including Micron's CMOS under Array (CuA) and SK hynix's 4D PUC (periphery-under-cell) developed in the late 2010s. It is expected that other NAND vendors will also eventually move to some variant of the hybrid bonding scheme used by Kioxia.