As the deployment of PCIe 5.0 picks up steam in both datacenter and consumer markets, PCI-SIG is not sitting idle, and is already working on getting the ecosystem ready for the next updates to the PCIe specifications. At FMS 2024, some vendors were even talking about PCIe 7.0 and its 128 GT/s capabilities, despite PCIe 6.0 devices not having started to ship yet. We caught up with PCI-SIG to get some updates on its activities and discuss the current state of the PCIe ecosystem.
PCI-SIG has already made the PCIe 7.0 specification (version 0.5) available to its members, and expects the full specification to be officially released sometime in 2025. The goal is to deliver a 128 GT/s data rate with up to 512 GB/s of bidirectional traffic over x16 links. Like PCIe 6.0, this specification will utilize PAM4 signaling and maintain backwards compatibility. Power efficiency and silicon die area are also being kept in mind as part of the drafting process.
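As a sanity check on those headline numbers, the arithmetic below shows how the 512 GB/s figure falls out of the per-lane data rate. This is a back-of-the-envelope sketch that assumes the raw one-bit-per-transfer accounting used since flit-encoded PCIe 6.0 and ignores flit framing overhead:

```python
# Back-of-the-envelope PCIe 7.0 bandwidth check. Assumes one bit per
# transfer (raw rate, before flit framing overhead is deducted).
data_rate_gtps = 128   # GT/s per lane, in each direction
lanes = 16             # x16 link

per_direction_gb_s = data_rate_gtps * lanes / 8   # bits -> bytes
print(f"Per direction:  {per_direction_gb_s:.0f} GB/s")     # 256 GB/s
print(f"Bidirectional: {2 * per_direction_gb_s:.0f} GB/s")  # 512 GB/s
```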
The move to PAM4 signaling brings higher bit-error rates compared to the previous NRZ scheme. This made it necessary to adopt a different error-correction scheme in PCIe 6.0: instead of operating on variable-length packets, PCIe 6.0's flow control unit (FLIT) encoding operates on fixed-size packets to aid in forward error correction. PCIe 7.0 retains these aspects.
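The fixed flit size is what makes lightweight forward error correction practical, since the FEC bytes always protect a known, constant span. Below is a minimal sketch based on the commonly cited 256-byte PCIe 6.0 flit layout; treat the field sizes as illustrative of the published breakdown rather than a normative description:

```python
# Commonly cited PCIe 6.0 flit layout: a fixed 256-byte unit, so the
# CRC and FEC fields cover a constant-size span of bytes.
FLIT_BYTES = 256
TLP_BYTES = 236   # transaction layer packet payload
DLP_BYTES = 6     # data link layer payload
CRC_BYTES = 8     # error detection
FEC_BYTES = 6     # forward error correction

assert TLP_BYTES + DLP_BYTES + CRC_BYTES + FEC_BYTES == FLIT_BYTES
print(f"Link efficiency: {TLP_BYTES / FLIT_BYTES:.1%}")   # ~92.2%
```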
The integrators list for the PCIe 6.0 compliance program is also expected to come out in 2025, though initial testing is already in progress, as evidenced by the FMS 2024 demo pairing Cadence's 3nm test chip for its PCIe 6.0 IP offering with Teledyne LeCroy's PCIe 6.0 analyzer. These timelines track well with the specification completion dates and compliance program availability of previous PCIe generations.
We also received an update on the optical workgroup. While remaining optical-technology agnostic, the workgroup intends to develop technology-specific form factors, including pluggable optical transceivers, on-board optics, co-packaged optics, and optical I/O. The logical and electrical layers of the PCIe 6.0 specifications are being enhanced to accommodate optical PCIe standardization, and the same process will be carried out for PCIe 7.0 to coincide with that standard's release next year.
PCI-SIG also has ongoing cabling initiatives. On the consumer side, Thunderbolt and external GPU enclosures have seen significant traction. However, even datacenters and enterprise systems are moving towards cabling solutions, as it becomes evident that disaggregating components such as storage from the CPU and GPU is better for thermal design. Additionally, maintaining signal integrity over longer distances is difficult with on-board signal traces, and cabling internal to the computing system can help here.
OCuLink emerged as a good candidate and was adopted fairly widely as an internal link in server systems. It has even made an appearance in mini-PCs from some Chinese manufacturers in its external avatar for the consumer market, albeit with limited traction. As speeds increase, a widely-adopted standard for external PCIe peripherals (or even connecting components within a system) will become imperative.
A few years back, the Japanese government's New Energy and Industrial Technology Development Organization (NEDO) allocated funding for the development of green datacenter technologies. Aiming for up to 40% savings in overall power consumption, several Japanese companies have been developing an optical interface for their enterprise SSDs, and at this year's FMS, Kioxia had its optical interface on display.
For this demonstration, Kioxia took its existing CM7 enterprise SSD and created an optical interface for it. A PCIe card with on-board optics developed by Kyocera is installed in the server slot. An optical interface allows data transfer over long distances (it was 40m in the demo, but Kioxia promises lengths of up to 100m for the cable in the future). This allows the storage to be kept in a separate room with minimal cooling requirements compared to the rack with the CPUs and GPUs. Disaggregation of different server components will become an option as very high throughput interfaces such as PCIe 7.0 (with 128 GT/s rates) become available.
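To put those cable lengths in perspective, the quick estimate below computes the one-way propagation delay an optical run adds, assuming the usual figure of roughly 5 ns per meter for light in fiber (about two-thirds of c); the distances are the ones quoted in the demo:

```python
# Estimated one-way propagation delay over optical fiber, assuming
# light travels at roughly 2/3 c in glass (~5 ns per meter).
NS_PER_METER = 5.0

for length_m in (40, 100):   # demo length, promised future length
    print(f"{length_m:>3} m: ~{length_m * NS_PER_METER:.0f} ns one way")
# ~200 ns at 40 m and ~500 ns at 100 m: negligible next to typical
# enterprise SSD read latencies in the tens of microseconds.
```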
The demonstration of the optical SSD showed a slight loss in IOPS performance, but a significant latency advantage over the shipping enterprise SSD behind a copper network link. There are also obvious advantages in cabling requirements and signal-integrity maintenance with optical links.
This being a proof-of-concept demonstration, an industry-standard approach will be required if the technology is to gain adoption among different datacenter vendors. The PCI-SIG optical workgroup will need to get its act together soon to create a standards-based approach to this problem.
Under the CHIPS & Science Act, the U.S. government provided tens of billions of dollars in grants and loans to the world's leading makers of chips, such as Intel, Samsung, and TSMC, which will significantly expand the country's semiconductor production industry in the coming years. However, most chips are still tested, assembled, and packaged in Asia, which has left the American supply chain incomplete. Addressing this last gap in the government's domestic chip production plans, over the past couple of weeks the U.S. government signed memorandums of understanding worth about $1.5 billion with Amkor and SK hynix to support their efforts to build chip packaging facilities in the U.S.
Amkor plans to build a $2 billion advanced packaging facility near Peoria, Arizona, to test and assemble chips produced by TSMC at its Fab 21 near Phoenix, Arizona. The company signed an MOU that offers $400 million in direct funding and access to $200 million in loans under the CHIPS & Science Act. In addition, the company plans to take advantage of a 25% investment tax credit on eligible capital expenditures.
Set to be strategically positioned near TSMC's upcoming Fab 21 complex in Arizona, Amkor's Peoria facility will occupy 55 acres and, when fully completed, will feature over 500,000 square feet (46,451 square meters) of cleanroom space, more than twice the size of Amkor's advanced packaging site in Vietnam. Although the company has not disclosed the exact capacity or the specific technologies the facility will support, it is expected to cater to a wide range of industries, including automotive, high-performance computing, and mobile technologies. This suggests the new plant will offer diverse packaging solutions, including traditional, 2.5D, and 3D technologies.
Amkor has collaborated extensively with Apple on the vision and initial setup of the Peoria facility, as Apple is slated to be the facility's first and largest customer, marking a significant commitment from the tech giant. This partnership highlights the importance of the new facility in reinforcing the U.S. semiconductor supply chain and positioning Amkor as a key partner for companies relying on TSMC's manufacturing capabilities. The project is expected to generate around 2,000 jobs and is scheduled to begin operations in 2027.
This week SK hynix also signed a preliminary agreement with the U.S. government to receive up to $450 million in direct funding and $500 million in loans to build an advanced memory packaging facility in West Lafayette, Indiana.
The proposed facility is scheduled to begin operations in 2028, which means it will assemble HBM4 or HBM4E memory. Meanwhile, the DRAM devices for high-bandwidth memory (HBM) stacks will still be produced in South Korea. Nonetheless, packaging finished HBM4/HBM4E in the U.S., and possibly integrating these memory stacks with high-end processors there, is a big deal.
In addition to building its packaging plant, SK hynix plans to collaborate with Purdue University and other local research institutions to advance semiconductor technology and packaging innovations. This partnership is intended to bolster research and development in the region, positioning the facility as a hub for AI technology and skilled employment.
Western Digital's BiCS8 218-layer 3D NAND is being put to good use in a wide range of client and enterprise platforms, including WD's upcoming Gen 5 client SSDs and 128 TB-class datacenter SSDs. On the external storage front, the company demonstrated four different products: on the card side, 4 TB microSDUC and 8 TB SDUC cards with UHS-I speeds, and on the portable SSD side, two 16 TB drives. One will be a SanDisk Desk Drive with external power, and the other comes in the SanDisk Extreme Pro housing with a lanyard opening in the case.
All of these use BiCS8 QLC NAND, though I did hear booth talk (as I was leaving) that the company was not supposed to divulge the use of QLC in these products. The 4 TB microSDUC and 8 TB SDUC cards are being marketed under the SanDisk Ultra branding.
The SanDisk Desk Drive is an external SSD with an 18W power adapter, and it has been in the market for a few months now. Initially launched in capacities up to 8 TB, Western Digital had promised a 16 TB version before the end of the year, and it appears that the product is coming to retail quite soon. One aspect to note is that this drive has been using TLC for the SKUs currently in the market, so it appears unlikely that the 16 TB version would be QLC. The units (at least up to the 8 TB capacity point) come with two SN850XE drives, and given the recent introduction of the 8 TB SN850X, an 'E' version with tweaked firmware is likely to be present in the 16 TB Desk Drive.
The 16 TB portable SSD in the SanDisk Extreme housing was a technology demonstration. It is definitely the highest capacity bus-powered portable SSD demonstrated by any vendor at any trade show thus far. Given the 16 TB Desk Drive's imminent market introduction, it is just a matter of time before the technology demonstration of the bus-powered version becomes a retail reality.