Western Digital this week is previewing the industry's first 4 TB SD card. The device is being showcased at the NAB trade show for broadcasters and content creators and will be released commercially in 2025.
Western Digital's SanDisk Extreme Pro SDUC 4 TB SD card complies with the Secure Digital Ultra Capacity (SDUC) standard, which enables capacities of up to 128 TB. The card uses the Ultra High Speed-I (UHS-I) interface and is rated for Speed Class 10, therefore supporting a minimum speed of 10 MB/s and a maximum data transfer rate of 104 MB/s when working in UHS104 (SDR104) mode (there is a catch about performance, but more on that later). WD's SD card is also rated to meet Video Speed Class V30, supporting a minimum sequential write speed of 30 MB/s, which is believed to be good enough for 8K video recording – above and beyond the 4K video market that Western Digital is primarily aiming the forthcoming card at.
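To put those figures in perspective at this capacity, here is a quick back-of-the-envelope calculation (an illustration only, assuming the card sustains exactly the quoted rates) of how long it would take to fill the full 4 TB at the 104 MB/s UHS-I SDR104 ceiling versus the 30 MB/s V30 minimum write speed:

```python
# Back-of-the-envelope: time to fill a 4 TB card at the rates quoted above.
# Illustration only; assumes the card sustains exactly these figures.

CAPACITY_TB = 4
BYTES_PER_TB = 10**12  # SD card capacities are marketed in decimal terabytes

rates_mb_per_s = {
    "UHS-I SDR104 ceiling": 104,
    "V30 minimum sustained write": 30,
}

capacity_mb = CAPACITY_TB * BYTES_PER_TB / 10**6
for label, rate in rates_mb_per_s.items():
    hours = capacity_mb / rate / 3600
    print(f"{label} ({rate} MB/s): ~{hours:.0f} hours to fill 4 TB")
```

Even at the interface's best case, that works out to roughly 11 hours of continuous writing – a useful reminder of just how much data a card this size holds relative to how quickly UHS-I can move it.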
For now, Western Digital is not disclosing what NAND is in the SanDisk Extreme Pro SDUC 4 TB SD card. Given the high capacity and relatively distant 2025 release date, WD may be targeting this as one of their first products to use their forthcoming BiCS 9 NAND.
And while not listed in WD's official press release, we would be surprised if the forthcoming card didn't also support the off-spec DDR200/DDR208 mode, which allows for higher transfer rates than the UHS-I standard normally allows via double data rate signaling. Western Digital's current-generation SanDisk Extreme Pro SDXC 1 TB SD card already supports that mode, allowing it to reach read speeds as high as 170 MB/s, so it would be surprising to see the company drop it from newer products. That said, the catch with DDR208 remains the same as always: it's a proprietary mode that requires a compatible host to make use of.
Western Digital has not disclosed how much its SanDisk Extreme Pro SDUC 4 TB SD card will cost. A 1 TB SanDisk Extreme Pro card costs $140, so one can make guesses about the price of a 4 TB SD card that uses cutting-edge NAND.
While desktop PC power supplies receive the bulk of enthusiasts’ attention for good reasons – not the least of which being the vast selection of options that comes from being a standardized part – power supplies as a whole aren’t just a concern for big PCs. With the majority of PC sales having flipped to notebook sales some years ago, notebook power supplies already lead the market in volume. And while most laptops don’t need the kind of massively powered PSUs that dominate the desktop space, even that is slowly changing as desktop replacement-class laptops increasingly grow in performance and power consumption.
Perhaps the poster-child for the high-performance, high-powered laptop is Eurocom, who has a long history of developing heavy-duty DTR laptops that are all but portable desktop systems. Following recent trends in GPU power consumption and in GPU popularity for both gaming and AI workloads, the company has been selling a range of “mobile supercomputers”, which are practically large, tailor-made laptops that rival even the best of desktop computers – and have the power requirements to match.
In order to power their own high-end laptops, Eurocom has developed a rather unusual 780W AC adapter. Intended for the most powerful of powerful laptops, their latest power adapter can supply over three-quarters of a kilowatt of DC power; a desktop-like wattage for a desktop-like device. It’s a laptop power adapter that’s unique in its scale, and with some room to scale up even further.
Eurocom 780W AC Adapter Power Specifications
| RAIL | +20V |
| MAX OUTPUT | 39A (780W) |
| DIMENSIONS | 325 x 110 x 40mm, 1.38kg |
| AC INPUT | 90 - 264 VAC, 50 - 60 Hz |
| MSRP | PSU: $299, Cable: $129+ |
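As a quick sanity check on the table above, the rated 780 W follows directly from the single +20 V rail and its 39 A current limit (simple arithmetic, shown only for illustration):

```python
# The rated output is just the rail voltage multiplied by its current limit.
volts, amps = 20, 39
watts = volts * amps
print(f"{volts} V x {amps} A = {watts} W ({watts / 1000:.2f} kW)")  # 780 W, ~0.78 kW
```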
Eurocom has offered this adapter for a few years now. But it’s taken on a new life as more laptops have been released that use such a large adapter (or two smaller 300W-ish power adapters). And, critically, Eurocom has made additional DC cable/connector sets for other manufacturers’ laptops, significantly increasing the number of systems it’s compatible with. Currently, Eurocom offers DC cables for the following laptops:
Externally, the Eurocom 780W AC adapter looks like a hybrid between a PC PSU and a laptop adapter. It is shaped like an oversized laptop AC adapter but has a metallic body and cooling vents/fans. Eurocom advertises that it weighs “only 1.34kg” – as much as an entire 14-inch laptop, but relatively light for a PSU with this kind of power output. The company logo is debossed across the top side of the adapter, with four round anti-slip pads found at its bottom. Care needs to be taken while moving it around, because the metallic edges and weight can spell trouble for wooden surfaces.
Eurocom installed a small information display on the AC adapter, which shows the current, voltage, and power output, as well as the temperature of the adapter. It is not perfectly accur...
Across the internet, from online forums such as Reddit to various other tech media outlets, there's a lot of furor around reports of Intel's top-end 14th and 13th Gen K series of processors running into stability issues. As Intel's flagship chips, these parts come aggressively clocked in order to maximize performance through various implementations of boost and turbo, leaving them running close to their limits out of the box. But with high-end motherboards further goosing these chips to wring even more performance out of them, it would seem that the Intel desktop ecosystem has finally reached a tipping point where all of these efforts to boost performance have pushed these flagship chips to unstable conditions. To that end, Intel has released new guidance to its consumer motherboard partners, strongly encouraging them to actually implement Intel's stock power settings, and to use those baseline settings as their out-of-the-box default.
While the underlying conditions are nothing new – we've published stories time and time again about motherboard features such as multi-core enhancement (MCE) and raised power consumption limits that seek to maximize how hard and how long systems are able to turbo boost – the issue has finally come to a head in the last couple of months thanks to accumulating reports of system instability with Intel's 13900K and 14900K processors. These instability problems are eventually solved by either tamping down on these motherboard performance-boosting features – bringing the chips back down to something closer to Intel's official operating parameters – or downclocking the chips entirely.
Intel first began publicly investigating the matter on the 27th of February, when Intel's Communications Manager, Thomas Hannaford, posted a thread on Intel's Community Product Support Forums titled "Regarding Reports of 13th/14th Gen Unlocked Desktop Users Experiencing Stability Issues". In this thread, Hannaford said, "Intel is aware of reports regarding Intel Core 13th and 14th Gen unlocked desktop processors experiencing issues with certain workloads. We're engaged with our partners and are conducting analysis of the reported issues. If you are experiencing these issues, please reach out to Intel Customer Support for further assistance in the interim."
Since that post went up, additional reports have been circulating about instability issues across various online forums and message boards. The underlying culprit has been theorized to be motherboards implementing an array of strategies to improve chip performance, including aggressive multi-core enhancement settings, "unlimited" PL2 turbo, and reduced load line calibration settings. At no point do any of these settings overclock a CPU and push it to a higher clockspeed than it's validated for, but these settings do everything possible to keep a chip at the highest clockspeed possible at all times – and in the process seem to have gone a step too far.
From "Why Intel Processors Draw More Power Than Expected: TDP and Turbo Explained"
We wrote a piece initially covering multi-core enhancement in 2012, detailing how motherboard manufacturers try to stay competitive with each other and leverage any headroom within the silicon to output the highest performance levels. And more recently, we've talked about how desktop systems with Intel chips are now regularly exceeding their rated TDPs – sometimes by extreme amounts – as motherboard vendors continue to push them to run as hard as possible for the best performance.
But things have changed since 2012. At the time, this wasn't so much of an issue, as there was genuine headroom left in the silicon and overclocking could meaningfully increase processor performance. But in 2024, with chips such as the Intel Core i9-14900K, we have CPUs shipping with a maximum turbo clock speed of 6.0 GHz and a peak power consumption of over 400 Watts, figures that were only a pipe dream a decade ago.
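To make the power-limit mechanics concrete, below is a deliberately simplified Python sketch of the budgeting scheme Intel's turbo rules describe: the chip may draw up to PL2 while an exponentially weighted moving average of package power stays under PL1, with Tau setting the averaging window. This is an illustration of the concept rather than Intel's exact algorithm; the stock values used (PL1 = 125 W, PL2 = 253 W, Tau = 56 s) are the commonly cited figures for the i9-14900K, and 4095 W stands in for the effectively "unlimited" limits many boards ship with.

```python
# Simplified model of Intel's PL1/PL2/Tau turbo power budget (illustration only,
# not Intel's exact algorithm): the package may draw up to PL2 while an
# exponentially weighted moving average (EWMA) of power remains below PL1.
# Boards that raise the limits to effectively unlimited values remove the cap.

def simulate_power(pl1_w, pl2_w, tau_s, demand_w, seconds, dt=1.0):
    """Return per-interval package power under a simplified EWMA power budget."""
    ewma = 0.0            # rolling average of package power
    alpha = dt / tau_s    # smoothing factor derived from the Tau window
    trace = []
    for _ in range(int(seconds / dt)):
        limit = pl2_w if ewma < pl1_w else pl1_w   # turbo allowed while under budget
        power = min(demand_w, limit)               # workload demand, capped by the limit
        ewma = (1 - alpha) * ewma + alpha * power
        trace.append(power)
    return trace

# Commonly cited i9-14900K stock limits vs. a board running the chip "unlimited",
# under a heavy all-core workload that would otherwise demand ~400 W.
stock = simulate_power(pl1_w=125, pl2_w=253, tau_s=56, demand_w=400, seconds=300)
unlimited = simulate_power(pl1_w=4095, pl2_w=4095, tau_s=56, demand_w=400, seconds=300)
print(f"average package power, stock limits:      {sum(stock) / len(stock):.0f} W")
print(f"average package power, board 'unlimited': {sum(unlimited) / len(unlimited):.0f} W")
```

In the sketch, the stock configuration bursts briefly before settling near PL1, while the "unlimited" configuration simply sustains whatever the workload demands; that gap in sustained package power is what Intel's new guidance asks motherboard vendors to close by defaulting to the baseline settings.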
Jumping to the present time, over the weekend Intel released a statement about the matter to its partners, outlining their investigation so far and their sug...
Corsair has introduced a family of registered memory modules with ECC that are designed for AMD's Ryzen Threadripper 7000 and Intel's Xeon W-2400/3400-series processors. The new Corsair WS DDR5 RDIMMs with AMD EXPO and Intel XMP 3.0 profiles will be available in kits of up to 256 GB capacity and at speeds of up to 6400 MT/s.
Corsair's family of WS DDR5 RDIMMs includes 16 GB modules operating at up to 6400 MT/s with CL32 latency as well as 32 GB modules functioning at 5600 MT/s with CL40 latency. At present, Corsair offers a quad-channel 64 GB kit (4×16GB, up to 6400 MT/s), a quad-channel 128GB kit (4×32GB, 5600 MT/s), an eight-channel 128 GB kit (8×16GB, 5600 MT/s), and an eight-channel 256 GB kit (8×32GB, 5600 MT/s) and it remains to be seen whether the company will expand the lineup.
Corsair's WS DDR5 RDIMMs are designed for AMD's TRX50 and WRX90 platforms as well as Intel's W790 platform, and are therefore compatible with AMD's Ryzen Threadripper 7000 and Ryzen Threadripper Pro 7000WX-series as well as Intel's Xeon W-2400/3400-series CPUs. The modules feature both AMD EXPO and Intel XMP 3.0 profiles to easily set their beyond-JEDEC-spec settings, and come with thin heat spreaders made of pyrolytic graphite sheet (PGS), whose thermal conductivity is higher than that of copper or aluminum of the same thickness. For now, Corsair does not disclose which RCDs and memory chips its registered memory modules use.
Unlike many of its rivals among leading DIMM manufacturers, Corsair did not introduce its enthusiast-grade RDIMMs when AMD and Intel released their Ryzen Threadripper and Xeon W-series platforms for extreme workstations last year. It is hard to tell what the reason for that is, but perhaps the company wanted to gain experience working with modules featuring registered clock drivers (RCDs) as well as AMD's and Intel's platforms for extreme workstations.
The result of the delay looks to be quite rewarding: unlike modules from its competitors that feature either AMD EXPO or Intel XMP 3.0 profiles, Corsair's WS DDR5 RDIMMs come with both. While this may not matter much in the DIY market, where buyers know exactly which platform they are shopping for, it is a great feature for system integrators, who can use Corsair WS DDR5 RDIMMs for both their AMD Ryzen Threadripper and Intel Xeon W-series builds, greatly simplifying inventory management.
Since Corsair's WS DDR5 RDIMMs are aimed at workstations and are tested to offer reliable performance beyond JEDEC specifications, they are quite expensive. The cheapest 64 GB DDR5-5600 CL40 kit costs $450, the fastest 64 GB DDR5-6400 CL32 kit is priced at $460, whereas the highest end 256 GB DDR5-5600 CL40 kit is priced at $1,290.
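For context on how those list prices scale with capacity, here is the per-gigabyte arithmetic on the kits quoted above (launch list prices, shown purely for illustration):

```python
# Price per gigabyte for the quoted Corsair WS DDR5 RDIMM kits (launch list prices).
kits = {
    "64 GB DDR5-5600 CL40": (450, 64),
    "64 GB DDR5-6400 CL32": (460, 64),
    "256 GB DDR5-5600 CL40": (1290, 256),
}
for name, (price_usd, capacity_gb) in kits.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f} per GB")
```

Interestingly, the largest kit works out cheapest per gigabyte, at around $5/GB versus roughly $7/GB for the 64 GB kits.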
AMD has recently expanded its Ryzen 8000 series by introducing the Ryzen 7 8700F and Ryzen 5 8400F processors. Initially launched in China, these chips have since been added to AMD's global website, signaling that they are available worldwide, apparently as of April 1st. Built from the same Zen 4-based Phoenix silicon as AMD's mobile chips and fabbed on TSMC's 4nm node, these new CPUs lack integrated graphics. However, the Ryzen 7 8700F does retain the integrated Ryzen AI NPU, adding on-chip AI acceleration at a time when the industry is pushing AI capabilities directly into the PC.
The company's decision to announce these chips in China aligns with its strategy to offer Ryzen solutions at every price point in the market. Although AMD didn't initially disclose the full specifications of these F-series models – and the company declined to discuss them when we reached out – its website listings have since been updated with a complete set of specifications and features, with everything but pricing listed.
AMD Ryzen 8000G vs. Ryzen 8000F Series (Desktop), Zen 4 (Phoenix)
| AnandTech | Cores/Threads | Base Freq (MHz) | Turbo Freq (MHz) | GPU | GPU Freq (MHz) | Ryzen AI (NPU) | L3 Cache (MB) | TDP | MSRP |
| Ryzen 7 8700G | 8/16 | 4200 | 5100 | R780M 12 CUs | 2900 | Y | 16 | 65W | $329 |
| Ryzen 7 8700F | 8/16 | 4100 | 5000 | - | - | Y | 16 | 65W | ? |
| Ryzen 5 8600G | 6/12 | 4300 | 5000 | R760M 8 CUs | 2800 | Y | 16 | 65W | $229 |
| Ryzen 5 8400F | 6/12 | 4200 | 4700 | - | - | N | 16 | 65W | ? |
The Ryzen 7 8700F features an 8C/16T design, with 16 MB of L3 cache and the same 65 W TDP as the Ryzen 7 8700G. Although the base clock speed is 4.1 GHz, it boosts to 5.0 GHz; this is 100 MHz less on both base and boost clocks than the 8700G. Meanwhile, the Ryzen 5 8400F is a slightly scaled-down version of the Ryzen 5 8600G APU, with a 6C/12T configuration, 16 MB of L3 cache, and again a 100 MHz reduction to base clocks compared to the 8600G. Unlike the Ryzen 5 8400F, the Ryzen 7 8700F keeps AMD's Ryzen AI NPU, adding additional capability for generative AI.
The Ryzen 5 8400F can boost up to 4.7 GHz, 300 MHz slower than the Ryzen 5 8600G. AMD also allows overclocking for these new F-series chips, which means users could potentially boost the performance of these processors to match their G-series equivalents.
Pricing details are still pending, but to remain competitive, AMD will likely need to price these CPUs below the 8700G and 8600G, as well as the Ryzen 7 7700 and Ryzen 5 7600. The latter two chips offer integrated graphics (albeit very limited), double the L3 cache capacity, and higher boost clocks than the 8000F series, so AMD's positioning will be worth watching once pricing becomes available.
As Seagate ramps up shipments of its new heat assisted magnetic recording (HAMR)-based Mozaic 3+ hard drive platform, the company is both in the enviable position of shipping the first major new hard drive technology in a decade, and the much less enviable position of proving the reliability of the first major new hard drive technology in a decade. Because HAMR briefly heats its platters during writes and relies on all-new read/write heads, it introduces multiple changes at once that have raised questions about how reliable the technology will be. Looking to address these matters (and further promote their HAMR drives), Seagate has published a fresh blog post outlining the company's R&D efforts, and why the company expects their HAMR drives to last several years – as long as or longer than current PMR hard drives.
According to the company, the reliability of Mozaic 3+ drives is on par with that of traditional drives relying on perpendicular magnetic recording (PMR). In fact, components of HAMR HDDs have demonstrated a 50% increase in reliability over the past two years. Seagate says that Mozaic 3+ drives boast impressive durability metrics: their read/write heads have demonstrated the capacity to handle over 3.2 petabytes of data transfer over 6,000 hours of operation, which exceeds the data transfers of typical nearline hard drives by 20 times. Accordingly, Seagate is rating these drives for a mean time between failures (MTBF) of 2.5 million hours, which is in line with PMR-based drives.
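Working backwards from those figures gives a sense of the stress involved (rough arithmetic for illustration, assuming the quoted numbers describe continuous operation):

```python
# Rough implications of Seagate's quoted durability figures (illustration only).

# 3.2 PB transferred over 6,000 hours of operation:
petabytes, hours = 3.2, 6_000
avg_mb_per_s = (petabytes * 10**15 / 10**6) / (hours * 3600)
print(f"Implied average transfer rate: ~{avg_mb_per_s:.0f} MB/s sustained")

# A 2.5-million-hour MTBF, expressed as an annualized failure rate for a large fleet:
mtbf_hours = 2.5e6
afr_percent = (365 * 24) / mtbf_hours * 100
print(f"Implied annualized failure rate: ~{afr_percent:.2f}%")
```

That works out to roughly 148 MB/s of sustained transfers for the head-wear figure, and an annualized failure rate on the order of 0.35%, both broadly in line with what is expected of nearline-class hard drives.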
Based on their field stress tests, involving over 500,000 Mozaic 3+ drives, Seagate says that the heads of Mozaic 3+ drives will last over seven years, surpassing the typical lifespan of current PMR-based drives. Generally, customers anticipate that modern PMR drives will last between four and five years with average usage, so these drives would exceed current expectations.
Altogether, Seagate is continuing to aim for a seamless transition from PMR to HAMR drives in customer systems. That means ensuring that these new drives can fit into existing data center infrastructures without requiring any changes to enterprise specifications, warranty conditions, or form factors.
Seagate Technology has reportedly notified its customers about its plans to raise prices on new hard drive orders and on demand that exceeds previously agreed volumes, echoing a similar move by Western Digital, which increased its prices earlier this month. These changes come in response to a surge in demand for high-capacity HDDs and constraints in supply due to decreased production capabilities of both Seagate and Western Digital, reports TrendForce.
According to industry insights reported by TechNews, the sector anticipates that the scarcity of high-capacity HDD products will persist throughout the current quarter and possibly extend over the entire year. It is forecasted that HDD prices will rise by 5% to 10% in Q2 2024 alone and could increase further as a result of the ongoing challenges faced by the storage industry.
The primary driver behind Seagate's decision is increased demand for high-capacity HDDs, which are used to train AI models. This demand spike, coupled with a reduction in production output from hard drive makers, has created a significant supply-demand imbalance. As a result, Seagate has decided to adjust its pricing strategy to manage the situation. Global inflationary pressures, which continue to push up costs across the board, further exacerbate the issue and also contributed to the company's decision to increase prices, Seagate said in a message to clients published by TrendForce.
Seagate emphasized that its reduced production capacity has been a major challenge, hindering the company's ability to fulfill customer demands fully and promptly.
"As a result, we will be implementing price increases effective immediately on new orders and for demand that is over and above previously committed volumes," the alleged memo from Seagate reads. "Supply constraints are expected to continue and as such we anticipate that prices will continue to increase in the coming quarters."
Earlier this month Western Digital also informed its customers about price hikes for its HDD and SSD products. This notification was based on similar issues — higher than anticipated demand across the whole product range and additional supply chain challenges affecting the electronics sector. Western Digital's announcement made it clear that these disruptions are likely to continue, prompting further price adjustments.
Sources: TrendForce, TrendForce, TechNews
Samsung Electronics has started mass production of its 9th generation of V-NAND memory. The first dies based on their latest NAND tech come in a 1 Tb capacity using a triple-level cell (TLC) architecture, with data transfer rates as high as 3.2 GT/s. The new 3D TLC NAND memory will initially be used to build high-capacity and high-performance SSDs, which will help to solidify Samsung's position in the storage market.
Diving right in, Samsung is conspicuously avoiding listing the number of layers in their latest generation NAND, which is the principal driving factor in increasing capacity generation-on-generation. The company's current 8th gen V-NAND is 236 layers – similar to its major competitors – and word on the street is that 9th gen V-NAND ups that to 290 layers, though this remains to be confirmed.
Regardless, Samsung says that its 9th generation V-NAND memory boasts an approximate 50% improvement in bit density over its 8th generation predecessor. Driving these gains, the company cites the miniaturization of the cell size, as well as the integration of enhanced memory cell technologies that reduce interference and extend the lifespan of the cells. With their latest NAND technology, Samsung has also been able to eliminate dummy channel holes, thus reducing the planar area of the memory cells.
Interestingly, today's announcement also marks the first time that Samsung has publicly confirmed their use of string stacking in their NAND, referring to it as their "double-stack structure." The company is widely believed to have been using string stacking in their 8th generation NAND as well; however, this was never confirmed by the company. Regardless, the use of string stacking is only going to increase from here, as vendors look to keep adding layers to their NAND dies, while manufacturing variability and channel hole tolerances make it difficult to produce more than 150-200 layers in a single stack.
Samsung TLC V-NAND Flash Memory
|  | 9th Gen V-NAND | 8th Gen V-NAND |
| Layers | 290? | 236 |
| Decks | 2 (x145) | 2 (x118) |
| Die Capacity | 1 Tbit | 1 Tbit |
| Die Size (mm2) | ? | ? |
| Density (Gbit/mm2) | ? | ? |
| I/O Speed | 3.2 GT/s (Toggle 5.1) | 2.4 GT/s (Toggle 5.0) |
| Planes | 6? | 4 |
| CuA / PuC | Yes | Yes |
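If the rumored 290-layer count is accurate, the layer increase alone does not explain the full density gain; a rough decomposition (purely illustrative, and hinging on the unconfirmed layer figure) suggests the lateral cell shrink and dummy-channel-hole elimination contribute roughly as much as the extra layers do:

```python
# Rough decomposition of the claimed ~50% bit-density improvement (illustration only;
# the 290-layer figure for 9th-gen V-NAND is rumored, not confirmed by Samsung).
density_gain = 1.50
layers_9th_gen, layers_8th_gen = 290, 236

vertical_gain = layers_9th_gen / layers_8th_gen       # contribution from added layers (~1.23x)
implied_lateral_gain = density_gain / vertical_gain   # remainder: cell shrink, fewer dummy holes
print(f"Gain from layer count:         ~{vertical_gain:.2f}x")
print(f"Implied lateral/areal scaling: ~{implied_lateral_gain:.2f}x")
```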
Speaking of channel holes, another key technological enhancement in the 9th gen V-NAND is Samsung's advanced 'channel hole etching' technology. This process improves manufacturing productivity by enabling the simultaneous creation of electron pathways within a double-stack structure. This method is crucial as it enables efficient drilling through more layers, which is increasingly important as cell layers are added.
The latest V-NAND also features the introduction of a faster NAND flash interface, Toggle DDR 5.1, which boosts peak data transfer rates by 33% to 3.2 GT/s, or almost 400 MB/sec for a single die. Additionally, 9th gen V-NAND's power consumption has been reduced by 10%, according to Samsung, though the company doesn't state under what conditions – presumably this is at iso-frequency rather than at maximum frequency.
Samsung's launch of 1Tb TLC V-NAND is set to be followed by the release of a quad-level cell (QLC) model later this year.
"We are excited to deliver the industry’s first 9th-gen V-NAND which will bring future... SSDs
As Qualcomm prepares for the mid-year launch of their forthcoming Snapdragon X SoCs for PCs, and the eagerly anticipated Oryon CPU cores within, the company is finally shoring up their official product plans, and releasing some additional technical details in the process. Thus far the company has been demonstrating their Snapdragon X Elite SoC in its highest-performing, fully-enabled configuration. But the retail Snapdragon X Elite will not be a single part; instead, Qualcomm is preparing a whole range of chip configurations for various price/performance tiers in the market. Altogether, there will be 3 Snapdragon X Elite SKUs that differ in CPU and GPU performance.
As well, the company is introducing a second Snapdragon X tier, Snapdragon X Plus, for those SKUs positioned below the Elite performance tier. As of today, this will be a single configuration. But if the Snapdragon X lineup is successful and demand warrants it, I would not be surprised to see Qualcomm expand it further – as they have certainly left themselves the room for it in their product stack. In the meantime, with Qualcomm’s expected launch competition now shipping (Intel Core Ultra Meteor Lake and AMD Ryzen Mobile 8040 Hawk Point), the company is also very confident that even these reduced performance Snapdragon X Plus chips will be able to beat Intel and AMD in multithreaded performance – never mind the top-tier Snapdragon X Elite chips.
Qualcomm will be launching this expanded four chip stack at once; so both Snapdragon X Elite and Snapdragon X Plus tier devices should be available at the same time. The company’s goal is still to have devices on the shelf “mid-year”, although the company isn’t providing any more precise guidance than that. With Qualcomm’s CEO, Cristiano Amon, set to deliver a Computex keynote in June, I expect we’ll get more specific details on timings then, along with the company and its partners using the event to announce and showcase some retail laptop designs. So this is very much looking like a summer launch at the moment.
In the meantime, Qualcomm is already showing off what their Snapdragon X Plus chips can do with a fresh set of live benchmarks, akin to their Snapdragon X Elite performance previews from October 2023. We’ll dive into those in a bit, but suffice it to say, Qualcomm knows the score, and they want to make sure the entire world knows when they’re winning.
With the arrival of spring comes showers, flowers, and in the technology industry, TSMC's annual technology symposium series. With customers spread all around the world, the Taiwanese pure play foundry has adopted an interesting strategy for updating its customers on its fab plans, holding a series of symposiums from Silicon Valley to Shanghai. Kicking off the series every year – and giving us our first real look at TSMC's updated foundry plans for the coming years – is the Santa Clara stop, where yesterday the company detailed several new technologies, ranging from more advanced lithography processes to massive, wafer-scale chip packaging options.
Today we're publishing several stories based on TSMC's different offerings, starting with TSMC's marquee announcement: their A16 process node. Meanwhile, for the rest of our symposium stories, please be sure to check out the related reading below, and check back for additional stories.
Headlining its Silicon Valley stop, TSMC announced its first 'angstrom-class' process technology: A16. Following a production schedule shift that has seen backside power delivery network technology (BSPDN) removed from TSMC's N2P node, the new 1.6nm-class production node will now be the first process to introduce BSPDN to TSMC's chipmaking repertoire. With the addition of backside power capabilities and other improvements, TSMC expects A16 to offer significantly improved performance and energy efficiency compared to TSMC's N2P fabrication process. It will be available to TSMC's clients starting H2 2026.
At a high level, TSMC's A16 process technology will rely on gate-all-around (GAAFET) nanosheet transistors and will feature a backside power rail, which will both improve power delivery and moderately increase transistor density. Compared to TSMC's N2P fabrication process, A16 is expected to offer a performance improvement of 8% to 10% at the same voltage and complexity, or a 15% to 20% reduction in power consumption at the same frequency and transistor count. TSMC is not listing detailed density parameters this far out, but the company says that chip density will increase by 1.07x to 1.10x – keeping in mind that transistor density heavily depends on the type and libraries of transistors used.
The key innovation of TSMC's A16 node is its Super Power Rail (SPR) backside power delivery network, a first for TSMC. The contract chipmaker claims that A16's SPR is specifically tailored for high-performance computing products that feature both complex signal routes and dense power circuitry.
As noted earlier, with this week's announcement, A16 has now become the launch vehicle for backside power delivery at TSMC. The company was initially slated to offer BSPDN technology with N2P in 2026, but for reasons that aren't entirely clear, the tech has been punted from N2P and moved to A16. TSMC's official timing for N2P in 2023 was always a bit loose, so it's hard to say if this represents much of a practical delay for BSPDN at TSMC. But at the same time, it's important to underscore that A16 isn't just N2P renamed, but rather it will be a di...
Samsung Electronics this week was awarded up to $6.4 billion from the U.S. government under the CHIPS and Science Act to build its new fab complex in Taylor, Texas. This is the third major award under the act in the last month, with all three leading-edge fab operators – Intel, TSMC, and now Samsung – receiving multi-billion dollar funding packages under the domestic chip production program. Overall, the final price tag on Samsung's new fab complex is expected to reach $40 billion by the time it's completed later this decade.
Samsung's CHIPS Act funding was announced during a celebratory event attended by U.S. Secretary of Commerce Gina Raimondo and Samsung Semiconductor chief executive Kye Hyun Kyung. During the event, Kyung outlined the strategic goals of the expansion, emphasizing that the additional funding will not only increase production capacity but also strengthen the entire local semiconductor ecosystem. Samsung plans to equip its fab near Taylor, Texas, with the latest wafer fab tools to produce advanced chips. The Financial Times reports that Samsung aims to produce semiconductors on its 2nm-class process technology starting 2026, though for now this is unofficial information.
"I am pleased to announce a preliminary agreement between Samsung and the Department of Commerce to bring Samsung's advanced semiconductor manufacturing and research and development to Texas," said Joe Biden, the U.S. president, in a statement. "This announcement will unleash over $40 billion in investment from Samsung, and cement central Texas's role as a state-of-the-art semiconductor ecosystem, creating at least 21,500 jobs and leveraging up to $40 million in CHIPS funding to train and develop the local workforce. These facilities will support the production of some of the most powerful chips in the world, which are essential to advanced technologies like artificial intelligence and will bolster U.S. national security."
Samsung has been a significant contributor to the Texas economy for decades, starting chip manufacturing in the U.S. in 1996. With previous investments totaling $18 billion in its Austin operations, Samsung's expansion into Taylor with an additional investment of at least $17 billion underscores its role as one of the largest foreign direct investors in U.S. history. The total expected investment in the new fab surpasses $40 billion, making it one of the largest for a greenfield project in the nation and transforming Taylor into a major hub for semiconductor manufacturing.
The CEO highlighted the substantial economic impact of Samsung's operations, noting that the company's regional economic output nearly doubled, from $13.6 billion to $26.8 billion, between 2022 and 2023. The ongoing expansion is projected to further stimulate economic growth, create thousands of jobs, and enhance the community's overall development.
“We are not just expanding production facilities; we’re strengthening the local semiconductor ecosystem and positioning the U.S. as a global semiconductor manufacturing destination.” said Kyung. “To meet the expected surge in demand from U.S. customers, for future products like AI chips, our fabs will be equipped for cutting-edge process technologies and help bring security to the U.S. semiconductor supply chain.”
Samsung is also committed to environmental sustainability and workforce development. The company plans to operate using 100% clean energy and incorporate advanced water management technologies. Additionally, it is investing in education and training programs to develop a new generation of semiconductor professionals. These initiatives include partnerships with educational institutions and programs tailored for military veterans.
In his remarks, Kyung expressed gratitude to President Biden, Secretary Raimondo, and other governmental and community supporters for their ongoing support. This collaborative effort between Samsung and various levels of government, as well as the local community, is pivotal in advancing America's...
Optical connectivity – and especially silicon photonics – is expected to become a crucial technology to enable connectivity for next-generation datacenters, particularly those designed for HPC applications. With ever-increasing bandwidth requirements needed to keep up with (and keep scaling out) system performance, copper signaling alone won't be enough to keep up. To that end, several companies are developing silicon photonics solutions, including fab providers like TSMC, who this week outlined its 3D Optical Engine roadmap as part of its 2024 North American Technology Symposium, laying out its plan to bring up to 12.8 Tbps optical connectivity to TSMC-fabbed processors.
TSMC's Compact Universal Photonic Engine (COUPE) stacks an electronics integrated circuit on photonic integrated circuit (EIC-on-PIC) using the company's SoIC-X packaging technology. The foundry says that usage of its SoIC-X enables the lowest impedance at the die-to-die interface and therefore the highest energy efficiency. The EIC itself is produced at a 65nm-class process technology.
TSMC's 1st Generation 3D Optical Engine (or COUPE) will be integrated into an OSFP pluggable device running at 1.6 Tbps. That's a transfer rate well ahead of current copper Ethernet standards – which top out at 800 Gbps – underscoring the immediate bandwidth advantage of optical interconnects for heavily-networked compute clusters, never mind the expected power savings.
Looking further ahead, the 2nd Generation of COUPE is designed to integrate into CoWoS packaging as co-packaged optics with a switch, allowing optical interconnections to be brought to the motherboard level. This version of COUPE will support data transfer rates of up to 6.40 Tbps with reduced latency compared to the first version.
TSMC's third iteration of COUPE – COUPE running on a CoWoS interposer – is projected to take things one step further, increasing transfer rates to 12.8 Tbps while bringing optical connectivity even closer to the processor itself. At present, COUPE-on-CoWoS is in the pathfinding stage of development, and TSMC does not have a target date set.
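For context, here is the same roadmap expressed in byte terms, a simple unit conversion of the quoted line rates that ignores encoding and protocol overhead:

```python
# Converting the quoted per-generation COUPE line rates into aggregate byte throughput
# (raw signaling rate; encoding and protocol overhead are ignored).
generations_tbps = {
    "Gen 1 (OSFP pluggable)":        1.6,
    "Gen 2 (co-packaged w/ switch)": 6.4,
    "Gen 3 (COUPE-on-CoWoS)":        12.8,
}
for name, tbps in generations_tbps.items():
    print(f"{name}: {tbps} Tbps = {tbps * 1000 / 8:.0f} GB/s aggregate")
# For comparison, an 800G Ethernet port moves at most 100 GB/s per direction.
```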
Ultimately, unlike many of its industry peers, TSMC has not participated in the silicon photonics market up until now, leaving this to players like GlobalFoundries. But with its 3D Optical Engine Strategy, the company will enter this important market as it looks to make up for lost time.
As Seagate ramps up shipments of its new heat assisted magnetic recording (HAMR)-based Mozaic 3+ hard drive platform, the company is both in the enviable position of shipping the first major new hard drive technology in a decade, and the much less enviable position of proving the reliability of the first major new hard drive technology in a decade. Due to HAMR's use of momentary, localized heating of its platters, as well as all-new read/write heads, the technology introduces multiple changes at once that have raised questions about how reliable it will be. Looking to address these matters (and further promote their HAMR drives), Seagate has published a fresh blog post outlining the company's R&D efforts, and why the company expects their HAMR drives to last several years – as long as or longer than current PMR hard drives.
According to the company, the reliability of Mozaic 3+ drives is on par with that of traditional drives relying on perpendicular magnetic recording (PMR). In fact, components of HAMR HDDs have demonstrated a 50% increase in reliability over the past two years. Seagate says that Mozaic 3+ drives boast impressive durability metrics: their read/write heads have demonstrated the capacity to handle over 3.2 petabytes of data transfer over 6,000 hours of operation, which exceeds the data transfers of typical nearline hard drives by 20 times. Accordingly, Seagate is rating these drives for a mean time between failure (MTBF) of 2.5 million hours, which is in line with PMR-based drives.
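Those endurance and MTBF figures are easier to parse with a bit of arithmetic; the conversions below are standard, and only the inputs are Seagate's:

```python
# Back-of-the-envelope check on the endurance and MTBF figures Seagate cites.
data_moved_pb = 3.2                                     # petabytes through the heads
hours = 6_000
avg_rate_mbps = data_moved_pb * 1e9 / (hours * 3600)    # PB -> MB, hours -> seconds
print(f"Average head throughput: ~{avg_rate_mbps:.0f} MB/s sustained over {hours:,} hours")

mtbf_hours = 2_500_000
annualized_failure_rate = 8_766 / mtbf_hours            # hours in an average year / MTBF
print(f"A 2.5M-hour MTBF is equivalent to an AFR of ~{annualized_failure_rate:.2%}")
```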
Based on their field stress tests, involving over 500,000 Mozaic 3+ drives, Seagate says that the heads of Mozaic 3+ drives will last over seven years, surpassing the typical lifespan of current PMR-based drives. Generally, customers anticipate that modern PMR drives will last between four and five years with average usage, so these drives would exceed current expectations.
Altogether, Seagate is continuing to aim for a seamless transition from PMR to HAMR drives in customer systems. That means ensuring that these new drives can fit into existing data center infrastructures without requiring any changes to enterprise specifications, warranty conditions, or form factors.
Samsung today has announced that they have developed an even faster generation of LPDDR5X memory that is set to top out at LPDDR5X-10700 speeds. The updated memory is slated to offer 25% better performance and 30% greater capacity compared to existing mobile DRAM devices from the company. The new chips also appear to be tangibly faster than Micron's LPDDR5X memory and SK hynix's LPDDR5T chips.
Samsung's forthcoming LPDDR5X devices feature a data transfer rate of 10.7 GT/s as well as maximum capacity per stack of 32 GB. This allows Samsung's clients to equip their latest smartphones or laptops with 32 GB of low-power memory using just one DRAM package, which greatly simplifies their designs. Samsung says that 32 GB of memory will be particularly beneficial for on-device AI applications.
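Samsung doesn't spell out the baseline for its 25% performance claim; assuming it is the LPDDR5X-8533 devices the company ships today (our assumption, not Samsung's statement), the math works out, and per-bus bandwidth can be sketched for an assumed 64-bit memory subsystem:

```python
# Quick check of the "25% better performance" claim, assuming an LPDDR5X-8533 baseline.
new_rate, old_rate = 10_700, 8_533            # MT/s per pin (baseline is our assumption)
print(f"Per-pin speedup: {new_rate / old_rate - 1:.1%}")

# Peak bandwidth for a hypothetical 64-bit-wide memory subsystem built from these devices.
bus_width_bits = 64
peak_gbps = new_rate * 1e6 * bus_width_bits / 8 / 1e9
print(f"Peak bandwidth on a 64-bit bus: ~{peak_gbps:.1f} GB/s")
```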
Samsung is using its latest-generation 12nm-class DRAM process technology to make its LPDDR5X-10700 devices, which allows the company to achieve the smallest LPDDR device size in the industry, the memory maker said.
In terms of power efficiency, Samsung claims that they have integrated multiple new power-saving features into the new LPDDR5X devices. These include an optimized power variation system that adjusts energy consumption based on workload, and expanded intervals for low-power mode that extend the periods of energy saving. These innovations collectively enhance power efficiency by 25% compared to earlier versions, benefiting mobile platforms by extending battery life, the company said.
“As demand for low-power, high-performance memory increases, LPDDR DRAM is expected to expand its applications from mainly mobile to other areas that traditionally require higher performance and reliability such as PCs, accelerators, servers and automobiles,” said YongCheol Bae, Executive Vice President of Memory Product Planning of the Memory Business at Samsung Electronics. “Samsung will continue to innovate and deliver optimized products for the upcoming on-device AI era through close collaboration with customers.”
Samsung plans to initiate mass production of the 10.7 GT/s LPDDR5X DRAM in the second half of this year. This follows a series of compatibility tests with mobile application processors and device manufacturers to ensure seamless integration into future products.
With the arrival of spring comes showers, flowers, and in the technology industry, TSMC's annual technology symposium series. With customers spread all around the world, the Taiwanese pure play foundry has adopted an interesting strategy for updating its customers on its fab plans, holding a series of symposiums from Silicon Valley to Shanghai. Kicking off the series every year – and giving us our first real look at TSMC's updated foundry plans for the coming years – is the Santa Clara stop, where yesterday the company detailed several new technologies, ranging from more advanced lithography processes to massive, wafer-scale chip packaging options.
Today we're publishing several stories based on TSMC's different offerings, starting with TSMC's marquee announcement: their A16 process node. Meanwhile, for the rest of our symposium stories, please be sure to check out the related reading below, and check back for additional stories.
Headlining its Silicon Valley stop, TSMC announced its first 'angstrom-class' process technology: A16. Following a production schedule shift that has seen backside power delivery network technology (BSPDN) removed from TSMC's N2P node, the new 1.6nm-class production node will now be the first process to introduce BSPDN to TSMC's chipmaking repertoire. With the addition of backside power capabilities and other improvements, TSMC expects A16 to offer significantly improved performance and energy efficiency compared to TSMC's N2P fabrication process. It will be available to TSMC's clients starting H2 2026.
At a high level, TSMC's A16 process technology will rely on gate-all-around (GAAFET) nanosheet transistors and will feature a backside power rail, which will both improve power delivery and moderately increase transistor density. Compared to TSMC's N2P fabrication process, A16 is expected to offer a performance improvement of 8% to 10% at the same voltage and complexity, or a 15% to 20% reduction in power consumption at the same frequency and transistor count. TSMC is not listing detailed density parameters this far out, but the company says that chip density will increase by 1.07x to 1.10x – keeping in mind that transistor density heavily depends on the type and libraries of transistors used.
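To make those ranges concrete, here is a small sketch applying TSMC's quoted figures to a hypothetical design; the percentages are TSMC's, while the baseline wattage is invented purely for illustration:

```python
# Illustrative application of TSMC's quoted A16-vs-N2P ranges to a hypothetical block.
n2p_power_w = 10.0   # made-up baseline block power on N2P at some fixed frequency

# Same frequency and transistor count: 15-20% lower power on A16.
lo, hi = n2p_power_w * (1 - 0.20), n2p_power_w * (1 - 0.15)
print(f"Iso-frequency power on A16: {lo:.1f}-{hi:.1f} W (vs. {n2p_power_w:.1f} W on N2P)")

# Same power and complexity: 8-10% higher performance.
print("Iso-power performance: 1.08x-1.10x of N2P")

# 1.07x-1.10x chip density means the same design fits in roughly 91-93% of the area.
print(f"Area for the same design: {1 / 1.10:.0%}-{1 / 1.07:.0%} of N2P")
```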
The key innovation of TSMC's A16 node is its Super Power Rail (SPR) backside power delivery network, a first for TSMC. The contract chipmaker claims that A16's SPR is specifically tailored for high-performance computing products that feature both complex signal routes and dense power circuitry.
As noted earlier, with this week's announcement, A16 has now become the launch vehicle for backside power delivery at TSMC. The company was initially slated to offer BSPDN technology with N2P in 2026, but for reasons that aren't entirely clear, the tech has been punted from N2P and moved to A16. TSMC's official timing for N2P in 2023 was always a bit loose, so it's hard to say if this represents much of a practical delay for BSPDN at TSMC. But at the same time, it's important to underscore that A16 isn't just N2P renamed, but rather it will be a distinct technology from N2P.
TSMC is not the only fab pursuing backside power delivery, and accordingly, we're seeing multiple variations on the technique crop up at different fabs. The...
As Qualcomm prepares for the mid-year launch of their forthcoming Snapdragon X SoCs for PCs, and the eagerly anticipated Oryon CPU cores within, the company is finally shoring up their official product plans, and releasing some additional technical details in the process. Thus far the company has been demonstrating their Snapdragon X Elite SoC in its highest-performing, fully-enabled configuration. But the retail Snapdragon X Elite will not be a single part; instead, Qualcomm is preparing a whole range of chip configurations for various price/performance tiers in the market. Altogether, there will be 3 Snapdragon X Elite SKUs that differ in CPU and GPU performance.
As well, the company is introducing a second Snapdragon X tier, Snapdragon X Plus, for those SKUs positioned below the Elite performance tier. As of today, this will be a single configuration. But if the Snapdragon X lineup is successful and demand warrants it, I would not be surprised to see Qualcomm expand it further – as they have certainly left themselves the room for it in their product stack. In the meantime, with Qualcomm’s expected launch competition now shipping (Intel Core Ultra Meteor Lake and AMD Ryzen Mobile 8040 Hawk Point), the company is also very confident that even these reduced performance Snapdragon X Plus chips will be able to beat Intel and AMD in multithreaded performance – never mind the top-tier Snapdragon X Elite chips.
Qualcomm will be launching this expanded four chip stack at once; so both Snapdragon X Elite and Snapdragon X Plus tier devices should be available at the same time. The company’s goal is still to have devices on the shelf “mid-year”, although the company isn’t providing any more precise guidance than that. With Qualcomm’s CEO, Cristiano Amon, set to deliver a Computex keynote in June, I expect we’ll get more specific details on timings then, along with the company and its partners using the event to announce and showcase some retail laptop designs. So this is very much looking like a summer launch at the moment.
In the meantime, Qualcomm is already showing off what their Snapdragon X Plus chips can do with a fresh set of live benchmarks, akin to their Snapdragon X Elite performance previews from October 2023. We’ll dive into those in a bit, but suffice it to say, Qualcomm knows the score, and they want to make sure the entire world knows when they’re winning.
While desktop PC power supplies receive the bulk of enthusiasts’ attention for good reasons – not the least of which being the vast selection of options that comes from being a standardized part – power supplies as a whole aren’t just a concern for big PCs. With the majority of PC sales having flipped to notebook sales some years ago, notebook power supplies already lead the market in volume. And while most laptops don’t need the kind of massively powered PSUs that dominate the desktop space, even that is slowly changing as desktop replacement-class laptops increasingly grow in performance and power consumption.
Perhaps the poster-child for the high-performance, high-powered laptop is Eurocom, who has a long history of developing heavy-duty DTR laptops that are all but portable desktop systems. Following recent trends in GPU power consumption and in GPU popularity for both gaming and AI workloads, the company has been selling a range of “mobile supercomputers”, which are practically large, tailor-made laptops that rival even the best of desktop computers – and have the power requirements to match.
In order to power their own high-end laptops, Eurocom has developed a rather unusual 780W AC adapter for high-performance laptops. Intended to power the most powerful of powerful laptops, their latest power adapter can supply over three-quarters of a kilowatt of DC power; a desktop-like wattage for a desktop-like device. It’s a laptop power adapter that’s unique in its scale, and with some room to scale up even further.
Eurocom 780W AC Adapter Power Specifications
| Rail | +20V |
| Max Output | 39A (780W) |
| Dimensions | 325 x 110 x 40 mm, 1.38 kg |
| AC Input | 90 - 264 VAC, 50 - 60 Hz |
| MSRP | PSU: $299, Cable: $129+ |
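The headline wattage follows directly from the single-rail spec in the table above, as a quick check shows:

```python
# The adapter's 780 W rating is simply the rail voltage times the maximum current: P = V * I.
volts, amps = 20.0, 39.0
print(f"{volts:.0f} V * {amps:.0f} A = {volts * amps:.0f} W of DC output")
# That comfortably exceeds what two of the common 300 W-class laptop bricks deliver combined.
```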
Eurocom has offered this adapter for a few years now. But it’s taken on a new life as more laptops have been released that use such a large adapter (or two smaller 300W-ish power adapters). And, critically, Eurocom has made additional DC cable/connector sets for other manufacturers’ laptops, significantly increasing the number of systems it’s compatible with. Currently, Eurocom offers DC cables for the following laptops:
Externally, the Eurocom 780W AC adapter looks like a hybrid between a PC PSU and a laptop adapter. It is shaped like an oversized laptop AC adapter but has a metallic body and cooling vents/fans. Eurocom advertises that it weighs “only 1.34kg”, which is the weight of an entire 14-inch laptop by comparison, but is relatively lightweight for a PSU with that kind of power output. The company logo is debossed across the top side of the adapter, with four round anti-slip pads found at its bottom. Care needs to be taken while moving it around because the metallic edges and weight can spell trouble for wooden surfaces.
Eurocom installed a small information display on the AC adapter, which shows the current, voltage, and power output, as well as the temperature of the adapter. It is not perfectly accur...
Samsung Electronics this week was awarded up to $6.4 billion from the U.S. government under the CHIPS and Science Act to build its new fab complex in Taylor, Texas. This is the third major award under the act in the last month, with all three leading-edge fabs – Intel, TSMC, and now Samsung – receiving multi-billion dollar funding packages under the domestic chip production program. Overall, the final price tag on Samsung's new fab complex is expected to reach $40 billion by the time it's completed later this decade.
Samsung's CHIPS Act funding was announced during a celebratory event attended by U.S. Secretary of Commerce Gina Raimondo and Samsung Semiconductor chief executive Kye Hyun Kyung. During the event, Kyung outlined the strategic goals of the expansion, emphasizing that the additional funding will not only increase production capacity but also strengthen the entire local semiconductor ecosystem. Samsung plans to equip its fab near Taylor, Texas, with the latest wafer fab tools to produce advanced chips. The Financial Times reports that Samsung aims to produce semiconductors on its 2nm-class process technology starting 2026, though for now this is unofficial information.
"I am pleased to announce a preliminary agreement between Samsung and the Department of Commerce to bring Samsung's advanced semiconductor manufacturing and research and development to Texas," said Joe Biden, the U.S. president, in a statement. "This announcement will unleash over $40 billion in investment from Samsung, and cement central Texas's role as a state-of-the-art semiconductor ecosystem, creating at least 21,500 jobs and leveraging up to $40 million in CHIPS funding to train and develop the local workforce. These facilities will support the production of some of the most powerful chips in the world, which are essential to advanced technologies like artificial intelligence and will bolster U.S. national security."
Samsung has been a significant contributor to the Texas economy for decades, starting chip manufacturing in the U.S. in 1996. With previous investments totaling $18 billion in its Austin operations, Samsung's expansion into Taylor with an additional investment of at least $17 billion underscores its role as one of the largest foreign direct investors in U.S. history. The total expected investment in the new fab surpasses $40 billion, making it one of the largest for a greenfield project in the nation and transforming Taylor into a major hub for semiconductor manufacturing.
The CEO highlighted the substantial economic impact of Samsung's operations, noting a nearly double increase in regional economic output from $13.6 billion to $26.8 billion between 2022 and 2023. The ongoing expansion is projected to further stimulate economic growth, create thousands of jobs, and enhance the community's overall development.
“We are not just expanding production facilities; we’re strengthening the local semiconductor ecosystem and positioning the U.S. as a global semiconductor manufacturing destination,” said Kyung. “To meet the expected surge in demand from U.S. customers, for future products like AI chips, our fabs will be equipped for cutting-edge process technologies and help bring security to the U.S. semiconductor supply chain.”
Samsung is also committed to environmental sustainability and workforce development. The company plans to operate using 100% clean energy and incorporate advanced water management technologies. Additionally, it is investing in education and training programs to develop a new generation of semiconductor professionals. These initiatives include partnerships with educational institutions and programs tailored for military veterans.
In his remarks, Kyung expressed gratitude to President Biden, Secretary Raimondo, and other governmental and community supporters for their ongoing support. This collaborative effort between Samsung and various levels of government, as well as the local community, is pivotal in advancing America's...
Corsair has introduced a family of registered memory modules with ECC that are designed for AMD's Ryzen Threadripper 7000 and Intel's Xeon W-2400/3400-series processors. The new Corsair WS DDR5 RDIMMs with AMD EXPO and Intel XMP 3.0 profiles will be available in kits of up to 256 GB capacity and at speeds of up to 6400 MT/s.
Corsair's family of WS DDR5 RDIMMs includes 16 GB modules operating at up to 6400 MT/s with CL32 latency as well as 32 GB modules functioning at 5600 MT/s with CL40 latency. At present, Corsair offers a quad-channel 64 GB kit (4×16GB, up to 6400 MT/s), a quad-channel 128GB kit (4×32GB, 5600 MT/s), an eight-channel 128 GB kit (8×16GB, 5600 MT/s), and an eight-channel 256 GB kit (8×32GB, 5600 MT/s) and it remains to be seen whether the company will expand the lineup.
Corsair's WS DDR5 RDIMMs are designed for AMD's TRX50 and WRX90 platforms as well as Intel's W790 platform and are therefore compatible with AMD's Ryzen Threadripper Pro 7000 and 7000WX-series as well as Intel's Xeon W-2400/3400-series CPUs. The modules feature both AMD EXPO and Intel XMP 3.0 profiles to easily set their beyond-JEDEC-spec settings and come with thin heat spreaders made of pyrolytic graphite sheet (PGS), which offers higher thermal conductivity than copper or aluminum of the same thickness. For now, Corsair does not disclose which RCD and memory chips its registered memory modules use.
Unlike many of its rivals among leading DIMM manufacturers, Corsair did not introduce its enthusiast-grade RDIMMs when AMD and Intel released their Ryzen Threadripper and Xeon W-series platforms for extreme workstations last year. It is hard to tell what the reason for that is, but perhaps the company wanted to gain experience working with modules featuring registered clock drivers (RCDs) as well as AMD's and Intel's platforms for extreme workstations.
The result of the delay looks to be quite rewarding: unlike modules from its competitors that either feature AMD EXPO or Intel XMP 3.0 profiles, Corsair's WS DDR5 RDIMMs come with both. While this may not be important on the DIY market where people know exactly what they are buying for their platform, this is a great feature for system integrators, which can use Corsair WS DDR5 RDIMMs both for their AMD Ryzen Threadripper and Intel Xeon W-series builds, something that greatly simplifies their inventory management.
Since Corsair's WS DDR5 RDIMMs are aimed at workstations and are tested to offer reliable performance beyond JEDEC specifications, they are quite expensive. The cheapest 64 GB DDR5-5600 CL40 kit costs $450, the fastest 64 GB DDR5-6400 CL32 kit is priced at $460, whereas the highest end 256 GB DDR5-5600 CL40 kit is priced at $1,290.
Samsung Electronics has started mass production of its 9th generation of V-NAND memory. The first dies based on their latest NAND tech come in a 1 Tb capacity using a triple-level cell (TLC) architecture, with data transfer rates as high as 3.2 GT/s. The new 3D TLC NAND memory will initially be used to build high-capacity and high-performance SSDs, which will help to solidify Samsung's position in the storage market.
Diving right in, Samsung is conspicuously avoiding listing the number of layers in their latest-generation NAND, which is the principal driving factor in increasing capacity generation-on-generation. The company's current 8th gen V-NAND is 236 layers – similar to its major competitors – and word on the street is that 9th gen V-NAND ups that to 290 layers, though this remains to be confirmed.
Regardless, Samsung says that its 9th generation V-NAND memory boasts an approximate 50% improvement in bit density over its 8th generation predecessor. Driving these gains, the company cites the miniaturization of the cell size, as well as the integration of enhanced memory cell technologies that reduce interference and extend the lifespan of the cells. With their latest NAND technology, Samsung has also been able to eliminate dummy channel holes, thus reducing the planar area of the memory cells.
Interestingly, today's announcement also marks the first time that Samsung has publicly confirmed their use of string stacking in their NAND, referring to it as their "double-stack structure." The company is widely believed to have been using string stacking back in their 8th generation NAND as well, however this was never confirmed by the company. Regardless, the use of string stacking is only going to increase from here, as vendors look to keep adding layers to their NAND dies, while manufacturing variability and channel hole tolerances make it difficult to produce more than 150-200 layers in a single stack.
Samsung TLC V-NAND Flash Memory
| | 9th Gen V-NAND | 8th Gen V-NAND |
| Layers | 290? | 236 |
| Decks | 2 (x145) | 2 (x118) |
| Die Capacity | 1 Tbit | 1 Tbit |
| Die Size (mm2) | ? | ? |
| Density (Gbit/mm2) | ? | ? |
| I/O Speed | 3.2 GT/s (Toggle 5.1) | 2.4 GT/s (Toggle 5.0) |
| Planes | 6? | 4 |
| CuA / PuC | Yes | Yes |
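Combining the rumored layer count with the confirmed density claim gives a rough, unofficial sense of how much of the gain comes from stacking more layers versus lateral scaling; this is our back-of-the-envelope decomposition, not a Samsung figure:

```python
# Rough decomposition of the ~50% bit density gain, assuming the rumored 290-layer
# count is accurate (Samsung has not confirmed it).
layers_gen9, layers_gen8 = 290, 236
density_gain = 1.50

vertical_gain = layers_gen9 / layers_gen8      # more layers per die
lateral_gain = density_gain / vertical_gain    # what's left: smaller cells, no dummy holes, etc.
print(f"Layer-count contribution:          {vertical_gain:.2f}x")
print(f"Implied lateral/areal contribution: {lateral_gain:.2f}x")
```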
Speaking of channel holes, another key technological enhancement in the 9th gen V-NAND is Samsung's advanced 'channel hole etching' technology. This process improves manufacturing productivity by enabling the simultaneous creation of electron pathways within a double-stack structure. This method is crucial as it enables efficient drilling through more layers, which is increasingly important as cell layers are added.
The latest V-NAND also features the introduction of a faster NAND flash interface, Toggle DDR 5.1, which boosts peak data transfer rates by 33% to 3.2 GT/s, or almost 400MB/sec for a single die. Additionally, 9th gen V-NAND's power consumption has been reduced by 10%, according to Samsung. Though Samsung doesn't state under what conditions – presumably, this is at iso-frequency rather than max frequency.
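The interface math is straightforward, and the per-pin byte rate is presumably where the "almost 400MB/sec" figure comes from:

```python
# Checking the quoted Toggle DDR interface speedup (5.0 -> 5.1).
old_rate, new_rate = 2.4, 3.2                                # GT/s per pin
print(f"Interface speedup: {new_rate / old_rate - 1:.0%}")   # ~33%, matching Samsung's figure
print(f"Per-pin data rate: {new_rate / 8 * 1000:.0f} MB/s")  # 3.2 Gb/s -> 400 MB/s
```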
Samsung's launch of 1Tb TLC V-NAND is set to be followed by the release of a quad-level cell (QLC) model later this year.
"We are excited to deliver the industry’s first 9th-gen V-NAND which will bring future... SSDs
In an unexpected move, Intel has announced plans to phase out the boxed versions of its enthusiasts-class 13th Generation Core 'Raptor Lake' processors. According to a product change notification (PCN) published by the company last month, Intel plans to stop shipping these desktop CPUs by late June. In its place will remain Intel's existing lineup of boxed 14th Generation Core processors, which are based on the same 'Raptor Lake' silicon and typically carry higher performance for similar prices.
Intel customers and distributors interested in getting boxed versions of the 13th Generation Core i5-13600K/KF, Core i7-13700K/KF, and Core i9-13900K/KF/KS 'Raptor Lake' processors with unlocked multipliers should place their orders by May 24, 2024. The company will ship these units by June 28, 2024. Meanwhile, the PCN does not mention any change to the availability of tray versions of these CPUs, which are sold to OEMs and wholesalers.
The impending discontinuation of Intel's boxed 13th Generation Core processors comes as the company's current 14th Generation product line, 'Raptor Lake Refresh', is largely a rehash of the same silicon at slightly higher clockspeeds. Case in point: all of the discontinued SKUs are based on Intel's B0 Raptor Lake silicon, which is still being used for their 14th Gen counterparts. So Intel has not discontinued producing any Raptor Lake silicon; only the number of retail SKUs is getting cut down.
As outlined in our 14th Generation Core/Raptor Lake Refresh review, the 14th Gen chips largely make their 13th Gen counterparts redundant, offering better performance at every tier for the same list price. And with virtually all current-generation motherboards supporting both generations of chips, apparently Intel feels there's little reason to keep around what are essentially older, slower SKUs of the same silicon.
Interestingly, the retirement of the enthusiast-class 13th Generation Core chips is coming before Intel discontinues their even older 12th Generation Core 'Alder Lake' processors. 12th Gen chips are still available to this day in both boxed and tray versions, and the Alder Lake silicon itself is still widely in use in multiple product families. So even though Alder Lake shares the same platform as Raptor Lake, the chips based on that silicon haven't been rendered redundant in the same way that 13th Gen Core chips have.
Ultimately, it would seem that Intel is intent on consolidating and simplifying its boxed retail chip offerings by retiring their near-duplicate SKUs. Which for PC buyers could present a minor opportunity for a deal, as retailers work to sell off their remaining 13th Gen enthusiast chips.
While neuromorphic computing remains under research for the time being, efforts into the field have continued to grow over the years, as have the capabilities of the specialty chips that have been developed for this research. Following those lines, this morning Intel and Sandia National Laboratories are celebrating the deployment of the Hala Point neuromorphic system, which the two believe is the highest capacity system in the world. With 1.15 billion neurons overall, Hala Point is the largest deployment yet for Intel’s Loihi 2 neuromorphic chip, which was first announced at the tail-end of 2021.
The Hala Point system incorporates 1152 Loihi 2 processors, each of which is capable of simulating a million neurons. As noted back at the time of Loihi 2’s launch, these chips are actually rather small – just 31 mm2 per chip with 2.3 billion transistors each, as they’re built on the Intel 4 process (one of the only other Intel chips to do so, besides Meteor Lake). As a result, the complete system is similarly petite, taking up just 6 rack units of space (or as Sandia likes to compare it to, about the size of a microwave), with a power consumption of 2.6 kW. Now that it’s online, Hala Point has dethroned the SpiNNaker system as the largest disclosed neuromorphic system, offering admittedly just a slightly larger number of neurons at less than 3% of the power consumption of the 100 kW British system.
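The headline figures are easy to cross-check from the chip count, per-chip neuron count, and power draw quoted above:

```python
# Cross-checking Hala Point's headline numbers against SpiNNaker.
chips, neurons_per_chip = 1152, 1_000_000
total_neurons = chips * neurons_per_chip
print(f"Total simulated neurons: {total_neurons / 1e9:.2f} billion")

hala_point_kw, spinnaker_kw = 2.6, 100
print(f"Hala Point power vs. SpiNNaker: {hala_point_kw / spinnaker_kw:.1%}")
print(f"Neurons per watt: {total_neurons / (hala_point_kw * 1000):,.0f}")
```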
A Single Loihi 2 Chip (31 mm2)
Hala Point will be replacing an older Intel neuromorphic system at Sandia, Pohoiki Springs, which is based on Intel’s first-generation Loihi chips. By comparison, Hala Point offers ten times as many neurons, and upwards of 12x the performance overall.
Both neuromorphic systems have been procured by Sandia in order to advance the national lab’s research into neuromorphic computing, a computing paradigm that behaves like a brain. The central thought (if you’ll excuse the pun) is that by mimicking the wetware writing this article, neuromorphic chips can be used to solve problems that conventional processors cannot solve today, and that they can do so more efficiently as well.
Sandia, for its part, has said that it will be using the system to look at large-scale neuromorphic computing, with work operating on a scale well beyond Pohoiki Springs. With Hala Point offering a simulated neuron count very roughly on the level of complexity of an owl brain, the lab believes that a larger-scale system will finally enable them to properly exploit the properties of neuromorphic computing to solve real problems in fields such as device physics, computer architecture, computer science and informatics, moving well beyond the simple demonstrations initially achieved at a smaller scale.
One new focus from the lab, which in turn has caught Intel’s attention, is the applicability of neuromorphic computing towards AI inference. Because the neural networks themselves behind the current wave of AI systems are attempting to emulate the human brain, in a sense, there is an obvious degree of synergy with the brain-mimicking neuromorphic chips, even if the algorithms differ in some key respects. Still, with energy efficiency being one of the major benefits of neuromorphic computing, it’s pushed Intel to look into the matter further – and even build a second, Hala Point-sized system of their own.
According to Intel, in their research on Hala Point, the system has reached efficiencies as high as 15 TOPS-per-Watt at 8-bit precision, albeit while using 10:1 sparsity, making it more than competitive with current-generation commercial chips. As an added bonus to that efficiency, the neuromorphic systems don’t require extensive data processing and batching in advance, which is normally necessary to make efficient use of the high density ALU arrays in GPUs and GPU-like processors.
Perhaps the most interesting use case of all, however, is the potent...
Taiwan Semiconductor Manufacturing Co. this week released its financial results for Q1 2024. Due to a rebound in demand for semiconductors, the company garnered $18.87 billion in revenue for the quarter, which is up 12.9% year-over-year, but a decline of 3.8% quarter-over-quarter. The company says that an increase in demand for HPC processors (which includes processors for AI, PCs, and servers) drove its revenue rebound in Q1, but surprisingly, the revenue share of TSMC's flagship N3 (3nm-class) process technology declined steeply quarter-over-quarter.
"Our business in the first quarter was impacted by smartphone seasonality, partially offset by continued HPC-related demand," said Wendell Huang, senior VP and chief financial officer of TSMC. "Moving into second quarter 2024, we expect our business to be supported by strong demand for our industry-leading 3nm and 5nm technologies, partially offset by continued smartphone seasonality."
In the first quarter of 2024, N3 wafer sales accounted for 9% of the foundry's revenue, down from 15% in Q4 2023, and up from 6% in Q3 2023. In terms of dollars, TSMC's 3nm production brought in around $1.698 billion, which is lower than $2.943 billion in the previous quarter. Meanwhile, TSMC's other advanced process technologies increased their revenue share: N5 (5 nm-class) accounted for 37% (up from 35%), and N7 (7 nm-class) commanded 19% (up from 17%). Though both remained relatively flat in terms of revenue, at $6.981 billion and $3.585 billion, respectively.
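For those keeping score, the dollar figures above fall straight out of TSMC's reported revenue and the per-node shares:

```python
# Reconstructing the per-node dollar figures from TSMC's reported Q1'24 revenue and shares.
q1_revenue_b = 18.87
for node, share in {"N3": 0.09, "N5": 0.37, "N7": 0.19}.items():
    print(f"{node}: ~${q1_revenue_b * share:.2f}B of Q1 2024 revenue")

# Working backwards, $2.943B of N3 revenue at a 15% share implies roughly $19.6B of
# total revenue in Q4 2023, so the N3 decline alone is roughly $1.2B quarter-over-quarter.
print(f"Implied Q4 2023 total revenue: ~${2.943 / 0.15:.1f}B")
```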
Generally, advanced technology nodes (N7, N5, N3) generated 65% of TSMC's revenue (down 2% from Q4 2023), while the broader category of FinFET-based process technologies contributed 74% to the company's total wafer revenue (down 1% from the previous quarter).
TSMC itself attributes the steep decline of N3's contribution to seasonally lower demand for smartphones in the first quarter as compared to the fourth quarter, which may indeed be the case as demand for iPhones typically slows down in Q1. Along those lines, there have also been reports about a drop in demand for the latest iPhones in China.
But even if A17 Pro production volumes are down, Apple remains TSMC's lead customer for N3B, as the fab also produces their M3, M3 Pro, and M3 Max processors on the same node. These SoCs are larger in terms of die sizes and resulting costs, so their contribution to TSMC's revenue should be quite substantial.
"Moving on to revenue contribution by platform. HPC increased 3% quarter-over-quarter to account for 46% of our first quarter revenue," said Huang. "Smartphone decreased 16% to account for 38%. IoT increased 5% to account for 6%. Automotive remained flat and accounted for 6%, and DCE increased 33% to account for 2%."
Meanwhile, as demand for AI and HPC processors continues to increase in the coming years, TSMC expects its HPC platform to keep increasing its share of the company's revenue going forward.
"We expect several AI processors to be the strongest driver of our HPC platform growth and the largest contributor in terms of our overall incremental revenue growth in the next several years," said C.C. Wei, chief executive of TSMC.
AMD has recently expanded its Ryzen 8000 series by introducing the Ryzen 7 8700F and Ryzen 5 8400F processors. Initially launched in China, these chips have since been added to AMD's global website, signaling that they are available worldwide, apparently from April 1st. Built from the same Zen 4-based Phoenix silicon as AMD's Zen 4 mobile chips and fabbed on TSMC's 4nm node, these new CPUs lack integrated graphics. However, the Ryzen 7 8700F does retain the integrated Ryzen AI NPU, bringing on-device AI processing capabilities to a desktop market increasingly focused on AI.
The company's decision to announce these chips in China aligns with its strategy to offer Ryzen solutions at every price point in the market. AMD didn't initially disclose the full specifications of these F-series models – and the company declined to discuss them when we reached out – but their listings on AMD's website have since been updated with a complete set of specifications and features, with everything but pricing included.
AMD Ryzen 8000G vs. Ryzen 8000F Series (Desktop), Zen 4 (Phoenix)
| AnandTech | Cores/Threads | Base Freq (MHz) | Turbo Freq (MHz) | GPU | GPU Freq (MHz) | Ryzen AI (NPU) | L3 Cache (MB) | TDP | MSRP |
| Ryzen 7 8700G | 8/16 | 4200 | 5100 | R780M (12 CUs) | 2900 | Y | 16 | 65W | $329 |
| Ryzen 7 8700F | 8/16 | 4100 | 5000 | - | - | Y | 16 | 65W | ? |
| Ryzen 5 8600G | 6/12 | 4300 | 5000 | R760M (8 CUs) | 2800 | Y | 16 | 65W | $229 |
| Ryzen 5 8400F | 6/12 | 4200 | 4700 | - | - | N | 16 | 65W | ? |
The Ryzen 7 8700F features an 8C/16T design, with 16 MB of L3 cache and the same 65 W TDP as the Ryzen 7 8700G. Although the base clock speed is 4.1 GHz, it boosts to 5.0 GHz; this is 100 MHz less on both base and boost clocks than the 8700G. Meanwhile, the Ryzen 5 8400F is a slightly scaled-down version of the Ryzen 5 8600G APU, with a 6C/12T configuration, 16 MB of L3 cache, and again a 100 MHz reduction to base clocks compared to the 8600G. Unlike the Ryzen 5 8400F, the Ryzen 7 8700F keeps AMD's Ryzen AI NPU, adding additional capability for generative AI.
The Ryzen 5 8400F can boost up to 4.7 GHz, 300 MHz slower than the Ryzen 5 8600G. AMD also allows overclocking for these new F-series chips, which means users could potentially boost the performance of these processors to match their G-series equivalents.
Pricing details are still pending, but to remain competitive, AMD will likely need to price these CPUs below the 8700G and 8600G, as well as the Ryzen 7 7700 and Ryzen 5 7600. Those chips offer integrated graphics (albeit very limited), double the L3 cache capacity, and higher boost clocks than the 8000F series, so where AMD positions the new parts will be worth watching once pricing is announced.
Intel has divested its entire stake in Arm Holdings during the second quarter, raising approximately $147 million. Alongside this, Intel sold its stake in cybersecurity firm ZeroFox and reduced its holdings in Astera Labs, all as part of a broader effort to manage costs and recover cash amid significant financial challenges.
The sale of Intel's 1.18 million shares in Arm Holdings, as reported in a recent SEC filing, comes at a time when the company is struggling with substantial financial losses. Despite the $147 million generated from the sale, Intel reported a $120 million net loss on its equity investments for the quarter, which is a part of a larger $1.6 billion loss that Intel faced during this period.
In addition to selling its stake in Arm, Intel also exited its investment in ZeroFox and reduced its involvement with Astera Labs, a company known for developing connectivity platforms for enterprise hardware. These moves are in line with Intel's strategy to reduce costs and stabilize its financial position as it faces ongoing market challenges.
Despite the divestment, Intel's past investment in Arm was likely driven by strategic considerations. Arm Holdings is a significant force in the semiconductor industry, with its designs powering most mobile devices, and for obvious reasons Intel would like to address those markets. Intel and Arm are also collaborating on datacenter platforms tailored for Intel's 18A process technology. Additionally, Arm might view Intel as a potential licensee for its technologies and a valuable foundry partner for other companies that license Arm's designs.
Intel's investment in Astera Labs was also a strategic one, as the company probably wanted to secure a steady supply of smart retimers, smart cable modems, and CXL memory controllers, which are used in volume in datacenters – and Intel is certainly interested in selling as many datacenter CPUs as possible.
Intel's financial struggles were highlighted earlier this month when the company released a disappointing earnings report, which led to a 33% drop in its stock value, erasing billions of dollars of capitalization. To counter these difficulties, Intel announced plans to cut 15,000 jobs and implement other expense reductions. The company has also suspended its dividend, signaling the depth of its efforts to conserve cash and focus on recovery. When it comes to divestment of Arm stock, the need for immediate financial stabilization has presumably taken precedence, leading to the decision.
Standard CPU coolers, while adequate for managing basic thermal loads, often fall short in terms of noise reduction and superior cooling efficiency. This limitation drives advanced users and system builders to seek aftermarket solutions tailored to their specific needs. The high-end aftermarket cooler market is highly competitive, with manufacturers striving to offer products with exceptional performance.
Endorfy, previously known as SilentiumPC, is a Polish manufacturer that has undergone a significant transformation to expand its presence in global markets. The brand is known for delivering high-performance cooling solutions with a strong focus on balancing efficiency and affordability. By rebranding as Endorfy, the company aims to enter premium market segments while continuing to offer reliable, high-quality cooling products.
SilentiumPC became very popular in the value/mainstream segments of the PC market, with the spearhead of its lineup arguably being the Fera 5 cooler that we reviewed a little over two years ago and which offered remarkable value for the money. Today’s review places Endorfy’s largest CPU cooler, the Fortis 5 Dual Fan, on our laboratory test bench. The Fortis 5 is the largest CPU air cooler the company currently offers and is significantly more expensive than the Fera 5, yet it still is a single-tower cooler that strives to strike a balance between value, compatibility, and performance.
When Western Digital introduced its Ultrastar DC SN861 SSDs earlier this year, the company did not disclose which controller it used for these drives, which led many observers to presume that WD was using an in-house controller. But a recent teardown of the drive shows that is not the case; instead, the company is using a controller from Fadu, a South Korean company founded in 2015 that specializes in enterprise-grade turnkey SSD solutions.
The Western Digital Ultrastar DC SN861 SSD is aimed at performance-hungry hyperscale datacenters and enterprise customers which are adopting PCIe Gen5 storage devices these days. And, as uncovered in photos from a recent Storage Review article, the drive is based on Fadu's FC5161 NVMe 2.0-compliant controller. The FC5161 utilizes 16 NAND channels supporting an ONFi 5.0 2400 MT/s interface, and features a combination of enterprise-grade capabilities (OCP Cloud Spec 2.0, SR-IOV, up to 512 name spaces for ZNS support, flexible data placement, NVMe-MI 1.2, advanced security, telemetry, power loss protection) not available on other off-the-shelf controllers – or on any previous Western Digital controllers.
The Ultrastar DC SN861 SSD offers sequential read speeds of up to 13.7 GB/s as well as sequential write speeds of up to 7.5 GB/s. As for random performance, it boasts up to 3.3 million random 4K read IOPS and up to 0.8 million random 4K write IOPS. The drives are available in capacities between 1.6 TB and 7.68 TB, with a one or three drive writes per day (DWPD) rating over five years, as well as in U.2 and E1.S form factors.
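Converting those IOPS figures into bandwidth shows just how close the random-read spec comes to the sequential ceiling:

```python
# Bandwidth implied by the SN861's random I/O specs (4 KiB per operation).
def to_gbps(iops):
    return iops * 4 * 1024 / 1e9

iops_4k_read, iops_4k_write = 3_300_000, 800_000
print(f"Random 4K reads:  ~{to_gbps(iops_4k_read):.1f} GB/s (vs. 13.7 GB/s sequential)")
print(f"Random 4K writes: ~{to_gbps(iops_4k_write):.1f} GB/s (vs. 7.5 GB/s sequential)")
```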
While the two form factors of the SN861 share a similar technical design, Western Digital has tailored each version for distinct workloads: the E1.S supports FDP and performance enhancements specifically for cloud environments. By contrast, the U.2 model is geared towards high-performance enterprise tasks and emerging applications like AI.
Without a doubt, Western Digital's Ultrastar DC SN861 is a feature-rich, high-performance enterprise-grade SSD. It has another distinctive feature: a 5 W idle power consumption, which is rather low by the standards of enterprise-grade drives (e.g., it is 1 W lower than the SN840). While the difference with its predecessor may be just 1 W, hyperscalers deploy thousands of drives, and for their TCO every watt counts.
Western Digital's Ultrastar DC SN861 SSDs are now available for purchase to select customers (such as Meta) and to interested parties. Prices are unknown, but they will depend on such factors as volumes.
Sources: Fadu, Storage Review
StorageIntel has divested its entire stake in Arm Holdings during the second quarter, raising approximately $147 million. Alongside this, Intel sold its stake in cybersecurity firm ZeroFox and reduced its holdings in Astera Labs, all as part of a broader effort to manage costs and recover cash amid significant financial challenges.
The sale of Intel's 1.18 million shares in Arm Holdings, as reported in a recent SEC filing, comes at a time when the company is struggling with substantial financial losses. Despite the $147 million generated from the sale, Intel reported a $120 million net loss on its equity investments for the quarter, which is a part of a larger $1.6 billion loss that Intel faced during this period.
In addition to selling its stake in Arm, Intel also exited its investment in ZeroFox and reduced its involvement with Astera Labs, a company known for developing connectivity platforms for enterprise hardware. These moves are in line with Intel's strategy to reduce costs and stabilize its financial position as it faces ongoing market challenges.
Despite the divestment, Intel's past investment in Arm was likely driven by strategic considerations. Arm Holdings is a significant force in the semiconductor industry, with its designs powering most mobile devices, and, for obvious reasons, Intel would like to address these. Intel and Arm are also collaborating on datacenter platforms tailored for Intel's 18A process technology. Additionally, Arm might view Intel as a potential licensee for its technologies and a valuable partner for other companies that license Arm's designs.
Intel's investment in Astera Labs was also a strategic one as the company probably wanted to secure steady supply of smart retimers, smart cable modems, and CXL memory controller, which are used in volumes in datacenters and Intel is certainly interested in selling as many datacenter CPUs as possible.
Intel's financial struggles were highlighted earlier this month when the company released a disappointing earnings report, which led to a 33% drop in its stock value, erasing billions of dollars of capitalization. To counter these difficulties, Intel announced plans to cut 15,000 jobs and implement other expense reductions. The company has also suspended its dividend, signaling the depth of its efforts to conserve cash and focus on recovery. When it comes to divestment of Arm stock, the need for immediate financial stabilization has presumably taken precedence, leading to the decision.
CPUsStandard CPU coolers, while adequate for managing basic thermal loads, often fall short in terms of noise reduction and superior cooling efficiency. This limitation drives advanced users and system builders to seek aftermarket solutions tailored to their specific needs. The high-end aftermarket cooler market is highly competitive, with manufacturers striving to offer products with exceptional performance.
Endorfy, previously known as SilentiumPC, is a Polish manufacturer that has undergone a significant transformation to expand its presence in global markets. The brand is known for delivering high-performance cooling solutions with a strong focus on balancing efficiency and affordability. By rebranding as Endorfy, the company aims to enter premium market segments while continuing to offer reliable, high-quality cooling products.
SilentiumPC became very popular in the value/mainstream segments of the PC market with their products, the spearhead of which probably was the Fera 5 cooler that we reviewed a little over two years ago and had a remarkable value for money. Today’s review places Endorfy’s largest CPU cooler, the Fortis 5 Dual Fan, on our laboratory test bench. The Fortis 5 is the largest CPU air cooler the company currently offers and is significantly more expensive than the Fera 5, yet it still is a single-tower cooler that strives to strike a balance between value, compatibility, and performance.
Cases/Cooling/PSUsWhen Western Digital introduced its Ultrastar DC SN861 SSDs earlier this year, the company did not disclose which controller it used for these drives, which made many observers presume that WD was using an in-house controller. But a recent teardown of the drive shows that is not the case; instead, the company is using a controller from Fadu, a South Korean company founded in 2015 that specializes on enterprise-grade turnkey SSD solutions.
The Western Digital Ultrastar DC SN861 SSD is aimed at performance-hungry hyperscale datacenters and enterprise customers which are adopting PCIe Gen5 storage devices these days. And, as uncovered in photos from a recent Storage Review article, the drive is based on Fadu's FC5161 NVMe 2.0-compliant controller. The FC5161 utilizes 16 NAND channels supporting an ONFi 5.0 2400 MT/s interface, and features a combination of enterprise-grade capabilities (OCP Cloud Spec 2.0, SR-IOV, up to 512 name spaces for ZNS support, flexible data placement, NVMe-MI 1.2, advanced security, telemetry, power loss protection) not available on other off-the-shelf controllers – or on any previous Western Digital controllers.
The Ultrastar DC SN861 SSD offers sequential read speeds up to 13.7 GB/s as well as sequential write speeds up to 7.5 GB/s. As for random performance, it boasts with an up to 3.3 million random 4K read IOPS and up to 0.8 million random 4K write IOPS. The drives are available in capacities between 1.6 TB and 7.68 TB with one or three drive writes per day (DWPD) over five years rating as well as in U.2 and E1.S form-factors.
While the two form factors of the SN861 share a similar technical design, Western Digital has tailored each version for distinct workloads: the E1.S supports FDP and performance enhancements specifically for cloud environments. By contrast, the U.2 model is geared towards high-performance enterprise tasks and emerging applications like AI.
Without a doubt, Western Digital's Ultrastar DC SN861 is a feature-rich, high-performance enterprise-grade SSD. It has another distinctive feature: idle power consumption of 5 W, which is rather low by the standards of enterprise-grade drives (it is 1 W lower than the SN840, for example). While a 1 W difference versus its predecessor may seem small, hyperscalers deploy thousands of drives, and for their TCO every watt counts.
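As a rough illustration of the fleet-level math (the fleet size and electricity price below are hypothetical, not figures from Western Digital), one watt per drive adds up quickly at hyperscale:

```python
def annual_energy_saving(watts_per_drive: float, drives: int, usd_per_kwh: float = 0.10):
    """Yearly energy and cost saving for a fleet, ignoring cooling/PUE overhead."""
    kwh = watts_per_drive * drives * 24 * 365 / 1000
    return kwh, kwh * usd_per_kwh

kwh, usd = annual_energy_saving(1.0, 100_000)  # hypothetical 100,000-drive fleet
print(f"~{kwh:,.0f} kWh and ~${usd:,.0f} saved per year for 1 W saved per drive")
```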
Western Digital's Ultrastar DC SN861 SSDs are now available to select customers (such as Meta) and to other interested parties. Pricing has not been announced and will depend on factors such as purchase volumes.
Sources: Fadu, Storage Review
Storage
Samsung quietly launched its BM1743 enterprise QLC SSD last month with a hefty 61.44 TB SKU. At FMS 2024, the company had the even larger 122.88 TB version of that SSD on display, alongside a few recorded benchmarking sessions. Compared to the previous generation, the BM1743 delivers a 4.1x improvement in I/O performance, improved data retention, and a 45% improvement in power efficiency for sequential writes.
The 128 TB-class QLC SSD is rated for sequential read speeds of 7.5 GBps and write speeds of 3 GBps. Random reads come in at 1.6M IOPS, while 16 KB random writes clock in at 45K IOPS. Based on the quoted random write access granularity, it appears that Samsung is using a 16 KB indirection unit (IU) to optimize flash management. This is similar to the strategy adopted by Solidigm, which uses IUs larger than 4K in its high-capacity SSDs.
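For readers unfamiliar with indirection units: the IU is the granularity at which the controller maps logical data onto flash, so host writes smaller than the IU force a read-modify-write of a whole unit. The sketch below is a first-order approximation of that effect; it ignores write coalescing in the controller's cache, so real drives will behave better than this worst case.

```python
import math

def iu_write_amplification(write_kib: float, iu_kib: float = 16.0) -> float:
    """Worst-case write amplification for IU-aligned random writes:
    whole indirection units are rewritten regardless of the host write size."""
    return math.ceil(write_kib / iu_kib) * iu_kib / write_kib

for size in (4, 8, 16, 64):
    print(f"{size:>3} KiB random writes -> ~{iu_write_amplification(size):.1f}x write amplification")
```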
A recorded benchmark session of the company's PM9D3a 8-channel PCIe Gen5 SSD was also on display. That SSD family is being promoted as a mainstream option for datacenters, with advertised sequential reads of up to 12 GBps and writes of up to 6.8 GBps. Random reads clock in at 2M IOPS, and random writes at 400K IOPS.
The drive is available in multiple form factors with capacities of up to 32 TB (the M.2 version tops out at 2 TB), and its firmware includes optional support for flexible data placement (FDP) to help keep write amplification in check.
The PM1753 is the current enterprise SSD flagship in Samsung's lineup. With support for 16 NAND channels and capacities up to 32 TB, this U.2 / E3.S SSD has advertised sequential read and write speeds of 14.8 GBps and 11 GBps respectively. Random reads and writes for 4 KB accesses are listed at 3.4 M and 600 K IOPS.
Samsung claims a 1.7x performance improvement and a 1.7x power efficiency improvement over the previous generation (PM1743), making this TLC SSD suitable for AI servers.
The 9th Gen. V-NAND wafer was also available for viewing, though photography was prohibited. Mass production of this flash memory began in April 2024.
Storage
A few years back, the Japanese government's New Energy and Industrial Technology Development Organization (NEDO) allocated funding for the development of green datacenter technologies. With the aim of achieving up to 40% savings in overall power consumption, several Japanese companies have been developing an optical interface for their enterprise SSDs, and at this year's FMS, Kioxia had its optical interface on display.
For this demonstration, Kioxia took its existing CM7 enterprise SSD and created an optical interface for it. A PCIe card with on-board optics developed by Kyocera is installed in the server slot. The optical interface allows data transfer over long distances (40 m in the demo, with Kioxia promising cable lengths of up to 100 m in the future). This allows the storage to be kept in a separate room with minimal cooling requirements, away from the rack housing the CPUs and GPUs. Disaggregation of different server components will become an increasingly attractive option as very high-throughput interfaces such as PCIe 7.0 (with 128 GT/s rates) become available.
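To see why faster links make that kind of disaggregation more attractive, it helps to compare raw per-direction link bandwidth. The helper below ignores flit/encoding overhead (a few percent on recent PCIe generations), so treat the outputs as upper bounds rather than measured figures.

```python
def pcie_raw_gb_s(transfer_rate_gt_s: float, lanes: int) -> float:
    """Raw per-direction bandwidth in GB/s: the GT/s figure is the per-lane bit rate,
    and 8 bits make a byte; protocol/encoding overhead is not accounted for."""
    return transfer_rate_gt_s * lanes / 8

# A Gen5 x4 drive link (as used by drives like the CM7) versus a future Gen7 x4 link.
print(f"PCIe 5.0 x4: ~{pcie_raw_gb_s(32, 4):.0f} GB/s raw")
print(f"PCIe 7.0 x4: ~{pcie_raw_gb_s(128, 4):.0f} GB/s raw")
```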
The demonstration of the optical SSD showed a slight loss in IOPS, but a significant latency advantage over the shipping enterprise SSD connected through a copper network link. There are also obvious advantages in cabling requirements and signal integrity maintenance with optical links.
As this was a proof-of-concept demonstration, an industry-standard approach will be required if the technology is to gain adoption across different datacenter vendors. The PCI-SIG optical workgroup will need to move quickly to create a standards-based solution to this problem.
Storage
Intel divested its entire stake in Arm Holdings during the second quarter, raising approximately $147 million. Alongside this, Intel sold its stake in cybersecurity firm ZeroFox and reduced its holdings in Astera Labs, all as part of a broader effort to manage costs and recover cash amid significant financial challenges.
The sale of Intel's 1.18 million shares in Arm Holdings, as reported in a recent SEC filing, comes at a time when the company is struggling with substantial financial losses. Despite the $147 million generated from the sale, Intel reported a $120 million net loss on its equity investments for the quarter, part of the larger $1.6 billion loss the company posted for the period.
In addition to selling its stake in Arm, Intel also exited its investment in ZeroFox and reduced its involvement with Astera Labs, a company known for developing connectivity platforms for enterprise hardware. These moves are in line with Intel's strategy to reduce costs and stabilize its financial position as it faces ongoing market challenges.
Despite the divestment, Intel's original investment in Arm was likely driven by strategic considerations. Arm Holdings is a significant force in the semiconductor industry, with its designs powering most mobile devices, a market that Intel would, for obvious reasons, like to address. Intel and Arm are also collaborating on datacenter platforms tailored for Intel's 18A process technology. Additionally, Arm might view Intel as a potential licensee for its technologies and as a valuable foundry partner for other companies that license Arm's designs.
Intel's investment in Astera Labs was also a strategic one: the company likely wanted to secure a steady supply of smart retimers, smart cable modules, and CXL memory controllers, all of which are used in volume in datacenters, where Intel is keen to sell as many CPUs as possible.
Intel's financial struggles were highlighted earlier this month when the company released a disappointing earnings report, which led to a 33% drop in its stock price and erased billions of dollars in market capitalization. To counter these difficulties, Intel announced plans to cut 15,000 jobs and implement other expense reductions. The company has also suspended its dividend, signaling the depth of its efforts to conserve cash and focus on recovery. As for the Arm stake, the need for immediate financial stabilization has presumably taken precedence over whatever strategic value the holding offered.
CPUs
Standard CPU coolers, while adequate for managing basic thermal loads, often fall short when it comes to noise levels and cooling headroom. This limitation drives advanced users and system builders to seek aftermarket solutions tailored to their specific needs. The high-end aftermarket cooler market is highly competitive, with manufacturers striving to offer products with exceptional performance.
Endorfy, previously known as SilentiumPC, is a Polish manufacturer that has undergone a significant transformation to expand its presence in global markets. The brand is known for delivering high-performance cooling solutions with a strong focus on balancing efficiency and affordability. By rebranding as Endorfy, the company aims to enter premium market segments while continuing to offer reliable, high-quality cooling products.
SilentiumPC became very popular in the value/mainstream segments of the PC market, with the spearhead of its lineup arguably being the Fera 5, a cooler we reviewed a little over two years ago and found to offer remarkable value for money. Today’s review places the Fortis 5 Dual Fan, Endorfy’s largest CPU air cooler, on our laboratory test bench. The Fortis 5 is significantly more expensive than the Fera 5, yet it is still a single-tower cooler that strives to strike a balance between value, compatibility, and performance.
Cases/Cooling/PSUs