Later this year Intel is set to introduce its Xeon 6-branded processors, codenamed Granite Rapids (6x00P) and Sierra Forest (6x00E). And with them will come a slew of new server motherboards and pre-built server platforms. On the latter note, this will be the first generation where Intel won't be offering any pre-built systems of its own, having sold that business off to MiTAC last year.
To that end, MiTAC and its subsidiary Tyan were at this year's event to demonstrate what they've been up to since acquiring Intel's server business unit, as well as to show off the server platforms they're developing for the Xeon 6 family. Altogether, the companies had two server platforms on display – a compact 2S system, and a larger 2S system with significant expansion capabilities – as well as a pair of single-socket designs from Tyan.
The most basic platform that MiTAC had to show is their TX86-E7148 (Katmai Pass), a half-width 1U system that's the successor to Intel's D50DNP platform. Katmai Pass has two CPU sockets, supports up to 2 TB of DDR5-6400 RDIMMs over 16 slots (8 per CPU), and has two low-profile PCIe 5.0 x16 slots. Like its predecessor, this platform is aimed at mainstream servers that do not need a lot of storage or room to house bulky add-in cards like AI accelerators.
The company's other platform is the TX77A-E7142 (Deer Creek Pass), a considerably more serious offering that replaces Intel's M50FCP platform. This board can house up to 4 TB of DDR5-6400 RDIMMs over 32 slots (16 per CPU with 2DPC), four PCIe 5.0 x16 slots, one PCIe 5.0 x8 slot, two OCP 3.0 slots, and 24 hot-swap U.2 bays. Deer Creek Pass can be used for general-purpose workloads, high-performance storage, as well as workloads that require GPUs or other special-purpose accelerators.
Meanwhile Tyan had the single-socket Thunder CX GC73A-B5660 on display. That system supports up to 2 TB of DDR5-6400 memory over 16 RDIMMs and offers two PCIe 5.0 x16 slots, one PCIe 4.0 x4 M.2 slot, two OCP 3.0 slots, and 12 hot-swappable U.2 drive bays.
Finally, Tyan's Thunder HX S5662 is an HPC server board specifically designed to house multiple AI accelerators and other large PCIe cards. This board supports one Xeon 6 6700 processor, up to 1 TB of memory over eight DDR5-6400 RDIMMs, and has five traditional PCIe 5.0 x16 slots as well as two PCIe 5.0 x2 M.2 slots for storage.
MiTAC is expected to start shipments of these new Xeon 6 motherboards in the coming months, as Intel rolls out its next-generation datacenter CPUs. Pricing of these platforms is unknown for now, but expect it to be comparable to...
Intel has divested its entire stake in Arm Holdings during the second quarter, raising approximately $147 million. Alongside this, Intel sold its stake in cybersecurity firm ZeroFox and reduced its holdings in Astera Labs, all as part of a broader effort to manage costs and recover cash amid significant financial challenges.
The sale of Intel's 1.18 million shares in Arm Holdings, as reported in a recent SEC filing, comes at a time when the company is struggling with substantial financial losses. Despite the $147 million generated from the sale, Intel reported a $120 million net loss on its equity investments for the quarter, which is a part of a larger $1.6 billion loss that Intel faced during this period.
In addition to selling its stake in Arm, Intel also exited its investment in ZeroFox and reduced its involvement with Astera Labs, a company known for developing connectivity platforms for enterprise hardware. These moves are in line with Intel's strategy to reduce costs and stabilize its financial position as it faces ongoing market challenges.
Despite the divestment, Intel's past investment in Arm was likely driven by strategic considerations. Arm Holdings is a significant force in the semiconductor industry, with its designs powering most mobile devices, a market that Intel would, for obvious reasons, like to address. Intel and Arm are also collaborating on datacenter platforms tailored for Intel's 18A process technology. Additionally, Arm might view Intel as a potential licensee for its technologies and a valuable partner for other companies that license Arm's designs.
Intel's investment in Astera Labs was also a strategic one, as the company likely wanted to secure a steady supply of smart retimers, smart cable modems, and CXL memory controllers, which are used in volume in datacenters – and Intel is certainly interested in selling as many datacenter CPUs as possible.
Intel's financial struggles were highlighted earlier this month when the company released a disappointing earnings report, which led to a 33% drop in its stock value, erasing billions of dollars in market capitalization. To counter these difficulties, Intel announced plans to cut 15,000 jobs and implement other expense reductions. The company has also suspended its dividend, signaling the depth of its efforts to conserve cash and focus on recovery. As for the Arm stake, the need for immediate financial stabilization has presumably taken precedence, leading to the decision to sell.
G.Skill on Tuesday introduced its ultra-low-latency DDR5-6400 memory modules, which feature a CAS latency of 30 clocks – seemingly the industry's most aggressive timings yet for DDR5-6400 sticks. The modules will be available for both AMD and Intel CPU-based systems.
With every new generation of DDR memory comes an increase in data transfer rates, as well as an increase in relative latencies. While for the vast majority of applications the increased bandwidth offsets the performance impact of higher timings, there are applications that favor low latencies. However, shrinking latencies is sometimes harder than increasing data transfer rates, which is why low-latency modules are rare.
Nonetheless, G.Skill has apparently managed to cherry-pick enough DDR5 memory chips and build appropriate printed circuit boards to produce DDR5-6400 modules with CL30 timings, which are substantially lower than the CL46 timings recommended by JEDEC for this speed bin. This means that while JEDEC-standard modules have an absolute latency of 14.375 ns, G.Skill's modules can boast a latency of just 9.375 ns – an approximately 35% decrease.
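For those who want to double-check the arithmetic, here is a quick back-of-the-envelope sketch of that latency math. It assumes nothing beyond the figures quoted above, and the helper function name is ours, purely for illustration.

```python
# Rough sketch of DDR absolute latency math (illustrative only, not vendor code).
# DDR5 transfers data twice per clock, so the memory clock is half the data rate.

def cas_latency_ns(cas_latency_clocks: int, data_rate_mtps: int) -> float:
    """Convert a CAS latency in clock cycles to absolute latency in nanoseconds."""
    memory_clock_mhz = data_rate_mtps / 2            # e.g. DDR5-6400 -> 3200 MHz
    return cas_latency_clocks / memory_clock_mhz * 1000

jedec = cas_latency_ns(46, 6400)    # JEDEC DDR5-6400 CL46 -> 14.375 ns
gskill = cas_latency_ns(30, 6400)   # G.Skill DDR5-6400 CL30 -> 9.375 ns
print(f"{jedec:.3f} ns vs {gskill:.3f} ns, a {1 - gskill / jedec:.0%} reduction")
```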
G.Skill's DDR5-6400 CL30 39-39-102 modules have a capacity of 16 GB and will be available in 32 GB dual-channel kits, though the company does not disclose voltages, which are likely considerably higher than those standardized by JEDEC.
The company plans to make its DDR5-6400 modules available both for AMD systems with EXPO profiles (Trident Z5 Neo RGB and Trident Z5 Royal Neo) and for Intel-powered PCs with XMP 3.0 profiles (Trident Z5 RGB and Trident Z5 Royal). Since AM5 platforms have a practical limit of 6000 MT/s – 6400 MT/s for DDR5 memory (roughly as fast as AMD's Infinity Fabric can operate with a 1:1 ratio), the new modules should be particularly beneficial for AMD's Ryzen 7000 and Ryzen 9000-series processors.
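To illustrate why roughly 6400 MT/s is treated as the practical ceiling on AM5, here is a small, hedged sketch of the clock relationships involved. It assumes the usual simplification that "1:1" refers to the memory controller clock (UCLK) matching the memory clock (MCLK); the helper function is purely illustrative.

```python
# Illustrative sketch of AM5 memory clock relationships (assumption: "1:1"
# means UCLK = MCLK, where MCLK is half the DDR5 data rate).

def am5_memory_clocks(data_rate_mtps: int) -> tuple[float, float]:
    """Return (MCLK, UCLK) in MHz for a 1:1 UCLK:MCLK configuration."""
    mclk = data_rate_mtps / 2        # DDR5-6400 -> 3200 MHz memory clock
    uclk = mclk                      # controller must keep pace for 1:1 operation
    return mclk, uclk

for rate in (6000, 6400):
    mclk, uclk = am5_memory_clocks(rate)
    print(f"DDR5-{rate}: MCLK {mclk:.0f} MHz, UCLK {uclk:.0f} MHz for 1:1")

# Above roughly 6400 MT/s, AM5 boards typically fall back to a 1:2 UCLK:MCLK
# ratio, which adds latency and erodes the benefit of the higher data rate.
```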
G.Skill notes that since its modules are non-standard, they will not work with all systems but will operate on high-end motherboards with properly cooled CPUs.
The new ultra-low-latency memory kits will be available worldwide from G.Skill's partners starting in late August 2024. The company did not disclose the pricing of these modules, but since we are talking about premium products that boast unique specifications, they are likely to be priced accordingly.
Thanks to the success of the burgeoning market for AI accelerators, NVIDIA has been on a tear this year. And the only place that’s even more apparent than in the company’s rapidly growing revenues is in its stock price and market capitalization. After breaking into the top 5 most valuable companies only earlier this year, NVIDIA has reached the apex of Wall Street, closing out today as the world’s most valuable company.
With a closing price of $135.58 on a day that saw NVIDIA’s stock pop up another 3.5%, NVIDIA has topped both Microsoft and Apple in valuation, reaching a market capitalization of $3.335 trillion. This follows a rapid rise in the company’s stock price, which has increased by 47% in the last month alone – particularly on the back of NVIDIA’s most recent estimates-beating earnings report – as well as a recent 10-for-1 stock split. And looking at the company’s performance over a longer time period, NVIDIA’s stock jumped a staggering 218% over the last year, or a mere 3,474% over the last 5 years.
NVIDIA’s ascension continues a trend over the last several years of tech companies all holding the top spots in the market capitalization rankings. Though this is the first time in quite a while that the traditional tech leaders of Apple and Microsoft have been pushed aside.
Market Capitalization Rankings

| Company | Market Cap | Stock Price |
|---------|------------|-------------|
| NVIDIA | $3.335T | $135.58 |
| Microsoft | $3.317T | $446.34 |
| Apple | $3.285T | $214.29 |
| Alphabet | $2.170T | $176.45 |
| Amazon | $1.902T | $182.81 |
Driving the rapid growth of NVIDIA and its market capitalization has been demand for AI accelerators from NVIDIA, particularly the company’s server-grade H100, H200, and GH200 accelerators for AI training. As the demand for these products has spiked, NVIDIA has been scaling up accordingly, repeatedly beating market expectations for how many of the accelerators they can ship – and what price they can charge. And despite all that growth, orders for NVIDIA’s high-end accelerators are still backlogged, underscoring how NVIDIA still isn’t meeting the full demands of hyperscalers and other enterprises.
Consequently, NVIDIA’s stock price and market capitalization have been on a tear on the basis of these future expectations. With a price-to-earnings (P/E) ratio of 76.7 – more than twice that of Microsoft or Apple – NVIDIA is priced more like a start-up than a 30-year-old tech company. But then it goes without saying that most 30-year-old tech companies aren’t tripling their revenue in a single year, placing NVIDIA in a rather unique situation at this time.
Like the stock market itself, market capitalizations are highly volatile. And historically speaking, it’s far from guaranteed that NVIDIA will be able to hold the top spot for long, never mind day-to-day fluctuations. NVIDIA, Apple, and Microsoft’s valuations are all within $50 billion (about 1.5%) of each other, so for the moment at least, it’s still a tight race between all three companies. But no matter what happens from here, NVIDIA gets the exceptionally rare claim of having been the most valuable company in the world at some point.
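For context, that gap can be worked out directly from the table above; the snippet below is just that arithmetic, using the closing figures (in trillions of dollars).

```python
# Spread between the top three market caps, per the closing figures above.
caps_trillions = {"NVIDIA": 3.335, "Microsoft": 3.317, "Apple": 3.285}

spread = max(caps_trillions.values()) - min(caps_trillions.values())
print(f"Spread: ${spread * 1000:.0f}B, "
      f"or {spread / max(caps_trillions.values()):.1%} of NVIDIA's valuation")
# -> Spread: $50B, or 1.5% of NVIDIA's valuation
```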
(Carousel image courtesy MSN Money)
UPDATE 5/17, 6 PM: Western Digital has confirmed that the new 2.5-inch 6 TB HDDs use 6 SMR platters
The vast majority of laptops nowadays use solid-state drives, which is why the development of new, higher-capacity 2.5-inch hard drives has all but come to a halt. Or rather, it almost has. It seems that the 2.5-inch form factor has a bit more life left in it after all, as today Western Digital has released a slate of new external storage products based on a new, high-capacity 6 TB 2.5-inch hard drive.
WD's new 6 TB spinner is being used to offer upgraded versions of the company's My Passport, Black P10, and G-DRIVE ArmorATD portable storage products. Notably, however, WD isn't selling the bare 2.5-inch drive on a standalone basis – at least not yet – so for the time being it's entirely reserved for use in external storage.
Consequently, WD isn't publishing much about the 6 TB hard drive itself. The maximum read speed for these products is listed at 130 MB/sec – the same as WD's existing externals – and write performance goes unmentioned.
Notably, all of these 6 TB devices are thicker than their existing 5 TB counterparts, which strongly suggests that WD has increased their storage capacity not by improving their areal density, but by adding another platter to their existing drive platform (which WD has since confirmed). This, in turn, would help to explain why these new drives are being used in external storage products, as WD's 5 TB 2.5-inch drives are already 15mm thick and use 5 platters. 15mm is the highest standard thickness for the 2.5-inch form factor, and is already incompatible with a decent number of portable devices. External drives, in turn, are the only place these even thicker 2.5-inch drives would fit.
WD's specifications also gloss over whether these drives are based on shingled magnetic recording (SMR) technology. The company was already using SMR for its 5 TB drives in order to hit the necessary storage density there, and WD has since confirmed that the new 6 TB drives use SMR as well. Which is likely why the company isn't publishing write performance specifications for the drives, as we've seen device-managed SMR drives bottom out as low as 10 MB/second in our testing when the drive needs to rewrite data.
Depending on the specific drive model, all of the external storage drives use either a USB-C connector, or the very quaint USB Micro-B 3.0 connector. Though regardless of the physical connector used, all of the drives feature a USB 3.2 Gen 1 (5Gbps) electrical interface, which is more than ample given the drives' physically-limited transfer speeds.
Wrapping things up, according to WD the new drives are available at retail immediately. The WD My Passport Ultra and WD My Passport Ultra for Mac with USB-C both retail for $199.99; the WD My Passport and WD My Passport for Mac are $179.99; the WD My Passport Works With USB-C is $184.99; the gaming-focused WD_Black P10 Game Drive sells for $184.99, and the SanDisk Professional G-Drive ArmorATD is $229.99. All of Western Digital's external storage drives are backed with a three-year limited warranty.
Customer demand for AI and HPC processors is driving a much greater use of advanced packaging technologies, particularly TSMC's chip-on-wafer-on-substrate (CoWoS) services. As things stand, TSMC is just barely meeting the current demand for this packaging method – never mind future demand – which is why last year the company announced plans to more than double CoWoS capacity by the end of 2024. But as it turns out, just doubling capacity once won't be enough, and the world's largest contract maker of chips is going to have to keep scaling up at a rapid pace.
At its European Technology Symposium last week, TSMC announced plans to expand CoWoS capacity at a compound annual growth rate (CAGR) of over 60% through at least 2026. As a result, TSMC's CoWoS capacity will more than quadruple from 2023 levels by the end of that period. And keeping in mind that TSMC is prepping additional versions of CoWoS (namely CoWoS-L) that will enable building system-in-packages (SiPs) of up to eight times the reticle size, increasing CoWoS capacity four-fold in three years may still not be enough. The good news is that the various third-party outsourced semiconductor assembly and test (OSAT) providers are also expanding their CoWoS-like capacity, so the demand for advanced packaging isn't a problem that TSMC is facing (or resolving) on its own.
And CoWoS isn't the only advanced packaging technology whose capacity TSMC is looking to rapidly expand. The company also has its system-on-integrated-chips (SoIC) 3D stacking technology, whose adoption is poised to grow in the coming years. To meet demand for its SoIC packaging methods, TSMC will expand SoIC capacity at a 100% compound annual growth rate through the end of 2026. As a result, SoIC capacity will grow eight-fold from 2023 levels by late 2026.
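As a quick sanity check on those multiples, the compounding works out as follows, treating 2023 as the baseline and compounding annually through the end of 2026 (the helper below is purely illustrative).

```python
# Back-of-the-envelope check of the capacity multiples implied by the CAGRs above.

def capacity_multiple(cagr: float, years: int) -> float:
    """How many times a baseline grows when compounding at `cagr` for `years`."""
    return (1 + cagr) ** years

print(f"CoWoS at 60% CAGR over 3 years: {capacity_multiple(0.60, 3):.1f}x")   # ~4.1x
print(f"SoIC at 100% CAGR over 3 years: {capacity_multiple(1.00, 3):.1f}x")   # 8.0x
```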
Overall, TSMC expects that leading-edge SiPs for demanding applications like AI and HPC will adopt both CoWoS and SoIC 3D stacking technologies in the coming years, which is why it needs to increase capacity for both methods in order to build these highly-complex processors.
The Ultra Ethernet Consortium (UEC) has announced this week that the next-generation interconnect group has grown to 55 members. And as the group works towards developing the initial version of its ultra-fast Ethernet standard, it has released some of the first technical details on the upcoming specification.
Formed in the summer of 2023, the UEC aims to develop a new interconnect standard for AI and HPC datacenter needs, serving as a de-facto (if not de-jure) alternative to InfiniBand, which is largely under the control of NVIDIA these days. The UEC began to accept new members back in November, and in just five months' time it has gained 45 new members, which highlights the massive interest in the new technology. The consortium now boasts 55 members and 715 industry experts, who are working across eight technical groups.
There is a lot of work at hand for the UEC. As the group has laid out in its latest development blog post, the consortium is working to build a unified Ethernet-based communication stack for high-performance networking supporting artificial intelligence and high-performance computing clusters. The consortium's technical objectives include developing specifications, APIs, and source code for Ultra Ethernet communications, updating existing protocols, and introducing new mechanisms for telemetry, signaling, security, and congestion management. In particular, Ultra Ethernet introduces the UEC Transport (UET) protocol for higher network utilization and lower tail latency to speed up RDMA (Remote Direct Memory Access) operation over Ethernet. Key features include multi-path packet spraying, flexible ordering, and advanced congestion control, ensuring efficient and reliable data transfer.
These enhancements are designed to address the needs of large AI and HPC clusters – with separate profiles for each type of deployment – though the work is being done in a surgical manner, enhancing the technology while reusing as much of the existing Ethernet stack as possible to maintain cost efficiency and interoperability.
The consortium's founding members include AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta, and Microsoft. After the Ultra Ethernet Consortium (UEC) began to accept new members in October 2023, numerous industry heavyweights have joined the group, including Baidu, Dell, Huawei, IBM, Nokia, Lenovo, Supermicro, and Tencent.
The consortium currently plans to release the initial 1.0 version of the UEC specification publicly sometime in the third quarter of 2024.
"There was always a recognition that UEC was meeting a need in the industry," said J Metz, Chair of the UEC Steering Committee. "There is a strong desire to have an open, accessible, Ethernet-based network specifically designed to accommodate AI and HPC workload requirements. This level of involvement is encouraging; it helps us achieve the goal of broad interoperability and stability."
While it is evident that the Ultra Ethernet Consortium is gaining support across the industry, it is still unclear where other industry behemoths like AWS and Google stand. While the hardware companies involved can design Ultra Ethernet support into their hardware and systems, the technology ultimately exists to serve large datacenter and HPC system operators. So it will be interesting to see what interest they take in (and how quickly they adopt) the nascent Ethernet backbone technology once hardware incorporating it is ready.