OLED panels have a number of advantages, including deep blacks, fast response times, and energy efficiency, most of which stem from the fact that they do not need backlighting. However, they also have drawbacks: driving them to be as bright as a high-tier LCD will quickly wear out the organic material used. Researchers have spent the past couple of decades developing ways to prolong the lifespans of OLED materials, and recently LG has put together a novel (if brute force) solution: halve the work by doubling the number of pixels. This is the basis of the company's new tandem OLED technology, which has recently gone into mass production.
The Tandem OLED technology introduced by LG Display uses two stacks of red, green, and blue (RGB) organic light-emitting layers, which are layered on top of each other, essentially reducing how bright each layer needs to be individually in order to hit a specific cumulative brightness. By combining multiple OLED pixels running at a lower brightness, tandem OLED displays are intended to offer higher brightness and durability than traditional single-panel OLED displays, reducing the wear on the organic materials in normal situations – and by extension, making it possible to crank up the brightness of the panels well beyond what a single panel could sustain without cooking itself. Overall, LG claims that tandem panels can hit over three times the brightness of standard OLED panels.
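To make the arithmetic concrete, here is a minimal sketch of why splitting the luminance load helps. It assumes the widely cited empirical OLED lifetime relation LT ∝ 1/J^n, where J is the drive level and the acceleration exponent n is typically quoted in the 1.5–2 range; the actual exponent for LG's materials is not public, so the numbers below are illustrative only.

```python
# Illustrative only: empirical OLED lifetime scaling LT ~ 1 / J^n,
# with the acceleration exponent n assumed to be ~1.7 (material-dependent).
N_ACCEL = 1.7

def relative_lifetime(drive_fraction: float, n: float = N_ACCEL) -> float:
    """Lifetime relative to one layer driven at 100% (drive_fraction = 1.0)."""
    return 1.0 / (drive_fraction ** n)

# Single stack: one layer supplies all of the target luminance.
print(f"single stack: {relative_lifetime(1.0):.2f}x")  # 1.00x baseline

# Tandem stack: two layers each supply half of the target luminance.
print(f"tandem stack: {relative_lifetime(0.5):.2f}x")  # ~3.25x with n = 1.7
```

Because the exponent is greater than 1, halving each layer's drive more than doubles its projected lifetime – which is the core of the tandem argument.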
The switch to tandem panels also comes with energy efficiency benefits, as the power consumption of OLED pixels is not linear with their output brightness. According to LG, their tandem panels consume up to 40% less power. More interesting from the manufacturing side of matters, LG's tandem panel stack is 40% thinner (and 28% lighter) than existing OLED laptop screens, despite having to fit a whole second layer of pixels in there.
In terms of specifications, the 13-inch tandem OLED panel features a WQXGA+ (2880×1800) resolution and can cover 100% of the DCI-P3 color gamut. The panel is also certified to meet VESA's DisplayHDR True Black 500 requirements, which among other things, require that it can hit 500 nits of brightness. And given that this tech is meant to go into tablets and laptops, it shouldn't come as any surprise that the display panel is also touch sensitive.
"We will continue to strengthen the competitiveness of OLED products for IT applications and offer differentiated customer value based on distinctive strengths of Tandem OLED, such as long life, high brightness, and low power consumption," said Jae-Won Jang, Vice President and Head of the Medium Display Product Planning Division at LG Display.
Without a doubt, LG's Tandem OLED display panel looks impressive. The company is banking on it doing well in the high-end laptop and tablet markets, where manufacturers have been somewhat hesitant to embrace OLED displays due to power concerns. The technology has already been adopted by Apple for their most recent iPad Pro tablets, and now LG is making it available to a wider group of OEMs.
What remains to be seen is the technology's cost. Computer-grade OLED panels are already a more expensive option, and this one ups the ante with two layers of OLED pixels. So it isn't a question of whether it will be reserved for premium, high-margin devices, but a matter of just how much it will add to the final price tag.
For now, LG Display has not disclosed which PC OEMs are set to use its 13-inch Tandem OLED panel, though as the company is a supplier to virtually all of the major PC OEMs, there's little doubt it will crop up in multiple laptops soon enough.
G.Skill on Tuesday introduced its ultra-low-latency DDR5-6400 memory modules that feature a CAS latency of 30 clocks, which appears to be the industry's most aggressive timing yet for DDR5-6400 sticks. The modules will be available for both AMD and Intel CPU-based systems.
With every new generation of DDR memory comes an increase in data transfer rates as well as a loosening of relative latencies. While the increased bandwidth offsets the performance impact of higher timings for the vast majority of applications, there are applications that favor low latencies. However, shrinking latencies is often harder than increasing data transfer rates, which is why low-latency modules are rare.
Nonetheless, G.Skill has apparently managed to cherry-pick enough DDR5 memory chips and build appropriate printed circuit boards to produce DDR5-6400 modules with CL30 timings, which are substantially lower than the CL46 timings recommended by JEDEC for this speed bin. This means that while JEDEC-standard modules have an absolute latency of 14.375 ns, G.Skill's modules can boast a latency of just 9.375 ns – an approximately 35% decrease.
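The absolute latency figures fall straight out of the timings: latency in nanoseconds is the CAS latency divided by the memory clock, which is half the MT/s figure since DDR transfers data twice per clock. A quick sketch of the arithmetic:

```python
def cas_latency_ns(cl: int, data_rate_mts: int) -> float:
    """Absolute CAS latency in ns: CL cycles at the memory clock (half the data rate)."""
    memory_clock_mhz = data_rate_mts / 2   # DDR: two transfers per clock
    return cl / memory_clock_mhz * 1_000   # cycles / MHz -> ns

jedec  = cas_latency_ns(46, 6400)  # 14.375 ns (JEDEC DDR5-6400, CL46)
gskill = cas_latency_ns(30, 6400)  #  9.375 ns (G.Skill, CL30)
print(jedec, gskill, f"{1 - gskill / jedec:.1%} lower")  # ~34.8%, the ~35% above
```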
G.Skill's DDR5-6400 CL30 39-39-102 modules have a capacity of 16 GB and will be available in 32 GB dual-channel kits, though the company does not disclose voltages, which are likely considerably higher than those standardized by JEDEC.
The company plans to make its DDR5-6400 modules available both for AMD systems with EXPO profiles (Trident Z5 Neo RGB and Trident Z5 Royal Neo) and for Intel-powered PCs with XMP 3.0 profiles (Trident Z5 RGB and Trident Z5 Royal). The new modules should be particularly beneficial for AMD's Ryzen 7000 and Ryzen 9000 series processors, as AM5 systems have a practical limit of 6000 MT/s – 6400 MT/s for DDR5 memory – roughly as fast as the chips' memory controllers can run while keeping a 1:1 clock ratio with the memory.
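To illustrate the ratio in question, here is a minimal sketch, assuming the commonly described AM5 behavior where the memory controller clock (UCLK) runs 1:1 with the memory clock (MCLK, half the DDR data rate) up to the sweet spot, and falls back to a 2:1 divider beyond it:

```python
def am5_clocks(data_rate_mts: int, uclk_ratio: float = 1.0):
    """Memory clock (MCLK) and memory controller clock (UCLK) for a DDR5 data rate.

    uclk_ratio = 1.0 models AM5's preferred 1:1 UCLK:MCLK mode; 0.5 models the
    2:1 fallback the controller reportedly drops to at very high data rates.
    """
    mclk_mhz = data_rate_mts / 2          # DDR: two transfers per clock cycle
    uclk_mhz = mclk_mhz * uclk_ratio
    return mclk_mhz, uclk_mhz

print(am5_clocks(6400))       # (3200.0, 3200.0) – 1:1, the sweet spot
print(am5_clocks(8000, 0.5))  # (4000.0, 2000.0) – 2:1 fallback, added latency
```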
G.Skill notes that since its modules are non-standard, they will not work with all systems but will operate on high-end motherboards with properly cooled CPUs.
The new ultra-low-latency memory kits will be available worldwide from G.Skill's partners starting in late August 2024. The company did not disclose the pricing of these modules, but since we are talking about premium products that boast unique specifications, they are likely to be priced accordingly.
As GPU families enter the later part of their lifecycles, we often see chip manufacturers start to offload stockpiles of salvaged chips that, for one reason or another, didn't make the grade for the tier of cards they are normally used in. These recovered chips are fairly unremarkable overall, but they are unsold silicon that still works and has economic value, so they end up in lower-tier cards where they can still be sold. And, judging by the appearance of a new video card design from MSI, it looks like NVIDIA's Ada Lovelace generation of chips has reached that stage, as the Taiwanese video card maker has put out a new GeForce RTX 4070 Ti Super card based on a salvaged AD102 GPU.
Typically based on NVIDIA's AD103 GPU, NVIDIA's GeForce RTX 4070 Ti Super series sits a step below the company's flagship RTX 4080/4090 cards, both of which are based on the bigger and badder AD102 chip. But with some number of AD102 chips inevitably failing to live up to RTX 4080 specifications, rather than being thrown out, these chips can instead be used to make RTX 4070 Ti Super cards. Which is exactly what MSI has done with their new GeForce RTX 4070 Ti Super Ventus 3X Black OC graphics card.
The card itself is relatively unremarkable – using a binned AD102 chip doesn't come with any advantages, and it should perform just like regular AD103-based cards – and for that reason, video card vendors rarely publicly note when they're doing a run of cards with a binned-down version of a bigger chip. However, these larger chips have a tell-tale PCB footprint that usually gives the game away. Which, as first noticed by @wxnod, is exactly what's going on with MSI's card.

Ada Lovelace Lineup: MSI GeForce RTX 4070 TiS (AD103), RTX 4070 TiS (AD102), & RTX 4090 (AD102)
The tell, in this case, is the rear board shot provided by MSI. The larger AD102 GPU uses a correspondingly larger mounting bracket, and is paired with a slightly more complex array of filtering capacitors on the back side of the PCB. Since both are visible in MSI's photos of the GeForce RTX 4070 Ti Super Ventus 3X Black OC, it's easy to compare it to other video cards and see that it has exactly the same capacitor layout as MSI's GeForce RTX 4090, confirming the use of an AD102 GPU.
Chip curiosities aside, all NVIDIA GeForce RTX 4070 Ti Super graphics cards – no matter whether they are based on the AD102 or AD103 GPU – come with 8,448 active CUDA cores and 16 GB of GDDR6X memory, so it doesn't (typically) matter which chip they carry. Compared to a fully-enabled AD102 chip and its 18,432 CUDA cores, the RTX 4070 Ti Super's specifications are relatively modest – fewer than half as many CUDA cores – underscoring just how heavily cut down the salvaged AD102 chips in MSI's card are.
As for the rest of the card, the MSI GeForce RTX 4070 Ti Super Ventus 3X Black OC is a relatively hefty card overall, with a cooling system to match. Being overclocked, the Ventus also has a slightly higher TDP than standard GeForce RTX 4070 Ti Super cards, weighing in at 295 Watts – 10 Watts above baseline cards.
MSI is apparently not the only video card manufacturer using salvaged AD102 chips for the GeForce RTX 4070 Ti Super, either. @wxnod has also posted a screenshot taken from an Inno3D GeForce RTX 4070 Ti Super based on an AD102 GPU.
The USB Implementers Forum (USB-IF) introduced USB4 version 2.0 in fall 2022, and it expects systems and devices with the tech to emerge later this year and into next year. These upcoming products will largely rely on Intel's Barlow Ridge controller, a full-featured Thunderbolt 5 controller that goes above and beyond the baseline USB4 v2 spec. And though extremely capable, Intel's Thunderbolt controllers are also quite expensive, and Barlow Ridge isn't expected to be any different. Fortunately, for system and device vendors that just need a basic USB4 v2 solution, ASMedia is also working on its own USB4 v2 controller.
At Computex 2024, ASMedia demonstrated a prototype of its upcoming USB4 v2 physical interface (PHY), which will support USB4 v2's new Gen 4 (160Gbps) data rates and the associated PAM-3 signal encoding. The prototype was implemented using an FPGA, as the company has yet to tape out the completed controller.
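For context on what makes PAM-3 attractive: it signals three voltage levels per symbol instead of two, so each symbol carries log2(3) ≈ 1.585 bits, and USB4 v2 packs 11 bits into 7 ternary symbols (which works because 3^7 = 2187 ≥ 2^11 = 2048). A minimal sketch of that packing arithmetic – purely illustrative, not the spec's actual mapping tables:

```python
import math

# PAM-3 signals three levels, so each symbol carries log2(3) bits,
# vs. 1 bit/symbol for NRZ and 2 bits/symbol for PAM-4.
print(math.log2(3))  # ~1.585

# USB4 v2's 11b/7t packing works because 7 trits can hold 11 bits:
assert 3**7 >= 2**11  # 2187 >= 2048

def bits_to_trits(value: int) -> list[int]:
    """Pack an 11-bit value into 7 base-3 symbols (illustrative only)."""
    assert 0 <= value < 2**11
    trits = []
    for _ in range(7):
        trits.append(value % 3)
        value //= 3
    return trits[::-1]  # most-significant trit first

print(bits_to_trits(0b10110011101))  # [1, 2, 2, 2, 0, 2, 0]
```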
Ultimately, the purpose of showing off an FPGA-based PHY at Computex was to allow ASMedia to demonstrate their current PHY design. With the shift to PAM-3 encoding for USB4 v2, ASMedia (and the rest of the USB ecosystem) must develop significantly more complex controllers – and no part of that is more critical than a solid and reliable PHY design.
As part of their demonstration, ASMedia had a classic eye diagram display. The demoed eye diagram has a clear opening in the center, which is indicative of good signal integrity: the larger the eye opening, the less distortion and noise in the signal. The horizontal width of the eye opening represents the time window in which the signal can be sampled correctly, and the tight horizontal spread of the signal crossings suggests that there is minimal jitter, meaning the signal transitions are consistent and predictable. Finally, the vertical height of the eye opening indicates the signal amplitude, and the rather tall eye opening suggests a high signal-to-noise ratio (SNR), meaning that the signal is strong compared to any noise present.
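For readers unfamiliar with how such a plot is produced, an eye diagram is simply many unit intervals of a captured waveform overlaid on top of one another, after which the opening can be measured directly. Below is a toy fold-and-measure sketch using hypothetical two-level (NRZ) sample data for simplicity – not ASMedia's tooling, and a real PAM-3 eye would show two stacked openings:

```python
import numpy as np

def eye_opening(samples: np.ndarray, samples_per_ui: int) -> float:
    """Fold a waveform into one unit interval and measure the eye at its center.

    Returns the vertical opening between the lowest 'high' trace and the
    highest 'low' trace at the center sampling phase (toy 2-level example).
    """
    n_ui = len(samples) // samples_per_ui
    eye = samples[: n_ui * samples_per_ui].reshape(n_ui, samples_per_ui)
    center = eye[:, samples_per_ui // 2]   # voltage at the sampling instant
    highs, lows = center[center > 0], center[center < 0]
    return highs.min() - lows.max()        # vertical eye height in volts

# Hypothetical capture: +/-0.4 V NRZ pattern, 32 samples/UI, additive noise.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 200) * 0.8 - 0.4
wave = np.repeat(bits, 32) + rng.normal(0, 0.03, 200 * 32)
print(f"eye height: {eye_opening(wave, 32):.3f} V")  # ~0.6 V of the 0.8 V swing
```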
ASMedia itself is one of the major suppliers of discrete USB controllers, so the availability of ASMedia's USB4 v2 chip is crucial for adoption of the standard in general. While Intel will spearhead the industry with their Barlow Ridge Thunderbolt 5/USB4 v2 controller, ASMedia's controller is poised to end up in a far larger range of devices. So the importance of the company's USB4 v2 PHY demo is hard to overstate.
Demos aside, ASMedia is hoping to tape the chip out soon. If all goes well, the company expects their first USB4 v2 controllers to hit the market some time in the second half of 2025.