A few years back, the Japanese government's New Energy and Industrial Technology Development Organization (NEDO) allocated funding for the development of green datacenter technologies. With the aim of achieving up to 40% savings in overall power consumption, several Japanese companies have been developing an optical interface for their enterprise SSDs. At this year's FMS, Kioxia had its optical interface on display.
For this demonstration, Kioxia took its existing CM7 enterprise SSD and created an optical interface for it. A PCIe card with on-board optics developed by Kyocera is installed in the server slot. An optical interface allows data transfer over long distances (it was 40m in the demo, but Kioxia promises lengths of up to 100m for the cable in the future). This allows the storage to be kept in a separate room with minimal cooling requirements compared to the rack with the CPUs and GPUs. Disaggregation of different server components will become an option as very high throughput interfaces such as PCIe 7.0 (with 128 GT/s rates) become available.
The demonstration of the optical SSD showed a slight loss in IOPS performance, but a significant advantage in the latency metric over the shipping enterprise SSD behind a copper network link. Obviously, there are advantages in wiring requirements and signal integrity maintenance with optical links.
As this was a proof-of-concept demonstration, an industry-standard approach will be needed if the technology is to gain adoption among different datacenter vendors. The PCI-SIG optical workgroup will need to get its act together soon to create a standards-based approach to this problem.
PCI-SIG Demonstrates PCIe 6.0 Interoperability at FMS 2024

As the deployment of PCIe 5.0 picks up steam in both datacenter and consumer markets, PCI-SIG is not sitting idle, and is already working on getting the ecosystem ready for the coming updates to the PCIe specifications. At FMS 2024, some vendors were even talking about PCIe 7.0 with its 128 GT/s capabilities, despite PCIe 6.0 not yet having started to ship. We caught up with PCI-SIG to get some updates on its activities and discuss the current state of the PCIe ecosystem.

PCI-SIG has already made the PCIe 7.0 specifications (v0.5) available to its members, and expects the full specifications to be officially released sometime in 2025. The goal is to deliver a 128 GT/s data rate with up to 512 GB/s of bidirectional traffic over x16 links. Similar to PCIe 6.0, this specification will also utilize PAM4 signaling and maintain backwards compatibility. Power efficiency as well as silicon die area are also being kept in mind as part of the drafting process.

The move to PAM4 signaling brings higher bit-error rates compared to the previous NRZ scheme. This made it necessary to adopt a different error correction scheme in PCIe 6.0 - instead of operating on variable-length packets, PCIe 6.0's Flow Control Unit (FLIT) encoding operates on fixed-size packets to aid in forward error correction. PCIe 7.0 retains these aspects.

The integrators list for the PCIe 6.0 compliance program is also expected to come out in 2025, though initial testing is already in progress. This was evident from the FMS 2024 demo involving Cadence's 3nm test chip for its PCIe 6.0 IP offering along with Teledyne LeCroy's PCIe 6.0 analyzer. These timelines track well with the specification completion dates and compliance program availability for previous PCIe generations.
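The headline bandwidth figures follow directly from the per-lane rate and lane count. The sketch below is a back-of-envelope check that ignores FLIT/FEC framing overhead, so it is an upper bound rather than achievable throughput:

```python
def pcie_bandwidth_gbs(gt_per_s: float, lanes: int, bidirectional: bool = False) -> float:
    """Approximate peak PCIe link bandwidth in GB/s.

    One transfer carries one payload bit per lane (pre-overhead),
    so GB/s = GT/s * lanes / 8 bits-per-byte.
    """
    one_direction = gt_per_s * lanes / 8
    return one_direction * (2 if bidirectional else 1)

# PCIe 7.0 x16: 128 GT/s per lane
assert pcie_bandwidth_gbs(128, 16) == 256.0                       # per direction
assert pcie_bandwidth_gbs(128, 16, bidirectional=True) == 512.0   # the quoted figure
# PCIe 6.0 x16: 64 GT/s per lane, i.e. exactly half
assert pcie_bandwidth_gbs(64, 16, bidirectional=True) == 256.0
```

This also makes clear why each generation doubles the per-lane rate: the x16 bidirectional figure doubles with it.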
We also received an update on the optical workgroup - while remaining optical-technology agnostic, the WG intends to develop technology-specific form factors, including pluggable optical transceivers, on-board optics, co-packaged optics, and optical I/O. The logical and electrical layers of the PCIe 6.0 specifications are being enhanced to accommodate the new optical PCIe standardization, and the same process will be carried out for PCIe 7.0 to coincide with that standard's release next year.

The PCI-SIG also has ongoing cabling initiatives. On the consumer side, we have seen significant traction for Thunderbolt and external GPU enclosures. However, even datacenters and enterprise systems are moving towards cabling solutions, as it becomes evident that disaggregation of components such as storage from the CPU and GPU is better for thermal design. Additionally, maintaining signal integrity over longer distances becomes difficult for on-board signal traces, and cabling internal to the computing systems can help here. OCuLink emerged as a good candidate and was adopted fairly widely as an internal link in server systems. It has even made an appearance in mini-PCs from some Chinese manufacturers in its external avatar for the consumer market, albeit with limited traction. As speeds increase, a widely-adopted standard for external PCIe peripherals (or even for connecting components within a system) will become imperative.
U.S. Signs $1.5B in CHIPS Act Agreements With Amkor and SK hynix for Chip Packaging Plants

Under the CHIPS & Science Act, the U.S. government provided tens of billions of dollars in grants and loans to the world's leading makers of chips, such as Intel, Samsung, and TSMC, which will significantly expand the country's semiconductor production industry in the coming years. However, most chips are typically tested, assembled, and packaged in Asia, which has left the American supply chain incomplete. Addressing this last gap in the government's domestic chip production plans, the U.S. government has over the past couple of weeks signed memorandums of understanding worth about $1.5 billion with Amkor and SK hynix to support their efforts to build chip packaging facilities in the U.S.
Later this year Intel is set to introduce its Xeon 6-branded processors, codenamed Granite Rapids (6x00P) and Sierra Forest (6x00E). And with it will come a new slew of server motherboards and pre-built server platforms to go with it. On the latter note, this will be the first generation where Intel won't be offering any pre-builts of its own, after selling that business off to MiTAC last year.
To that end, MiTAC and its subsidiary Tyan were at this year's event to demonstrate what they've been up to since acquiring Intel's server business unit, as well as to show off the server platforms they're developing for the Xeon 6 family. Altogether, the companies had two server platforms on display – a compact 2S system, and a larger 2S system with significant expansion capabilities – as well as a pair of single-socket designs from Tyan.
The most basic platform that MiTAC had to show is their TX86-E7148 (Katmai Pass), a half-width 1U system that's the successor to Intel's D50DNP platform. Katmai Pass has two CPU sockets, supports up to 2 TB of DDR5-6400 RDIMMs over 16 slots (8 per CPU), and has two low-profile PCIe 5.0 x16 slots. Like its predecessor, this platform is aimed at mainstream servers that do not need a lot of storage or room to house bulky add-in cards like AI accelerators.
The company's other platform is TX77A-E7142 (Deer Creek Pass), a considerably more serious offering that replaces Intel's M50FCP platform. This board can house up to 4 TB of DDR5-6400 RDIMMs over 32 slots (16 per CPU with 2DPC), four PCIe 5.0 x16 slots, one PCIe 5.0 x8 slot, two OCP 3.0 slots, and 24 hot-swap U.2 bays. Deer Creek Pass can be used both for general-purpose workloads, high-performance storage, as well as workloads that require GPUs or other special-purpose accelerators.
Meanwhile Tyan had the single-socket Thunder CX GC73A-B5660 on display. That system supports up to 2 TB of DDR5-6400 memory over 16 RDIMMs and offers two PCIe 5.0 x16 slots, one PCIe 4.0 x4 M.2 slot, two OCP 3.0 slots, and 12 hot-swappable U.2 drive bays.
Finally, Tyan's Thunder HX S5662 is an HPC server board specifically designed to house multiple AI accelerators and other large PCIe cards. This board supports one Xeon 6 6700 processor, up to 1 TB of memory over eight DDR5-6400 RDIMMs, and has five traditional PCIe 5.0 x16 slots as well as two PCIe 5.0 x2 M.2 slots for storage.
MiTAC is expected to start shipments of these new Xeon 6 motherboards in the coming months, as Intel rolls out its next-generation datacenter CPUs. Pricing of these platforms is unknown for now, but expect it to be comparable to...
At FMS 2024, the technological requirements from the storage and memory subsystem took center stage. Both SSD and controller vendors had various demonstrations touting their suitability for different stages of the AI data pipeline - ingestion, preparation, training, checkpointing, and inference. Vendors like Solidigm have different types of SSDs optimized for different stages of the pipeline. At the same time, controller vendors have taken advantage of one of the features introduced recently in the NVM Express standard - Flexible Data Placement (FDP).
FDP involves the host providing information / hints about the areas where the controller could place the incoming write data in order to reduce the write amplification. These hints are generated based on specific block sizes advertised by the device. The feature is completely backwards-compatible, with non-FDP hosts working just as before with FDP-enabled SSDs, and vice-versa.
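The benefit of placement hints is easiest to see in a toy model (this is an illustration of the principle, not the NVMe FDP mechanics): when short-lived and long-lived data share an erase block, reclaiming the dead short-lived pages forces the SSD to copy the still-valid long-lived pages elsewhere, and those copies are the write amplification.

```python
BLOCK_PAGES = 4  # pages per erase block in this toy model

def gc_copies(blocks: list[list[str]]) -> int:
    """Count pages the garbage collector must relocate once all 'hot'
    (short-lived) pages are invalidated and their blocks are erased."""
    copies = 0
    for block in blocks:
        if "hot" in block:                 # block contains dead data -> erase it
            copies += block.count("cold")  # still-valid pages must move out first
    return copies

# Unhinted placement: hot and cold pages interleaved in every block.
mixed = [["hot", "cold", "hot", "cold"], ["cold", "hot", "cold", "hot"]]
# FDP-style placement: each stream isolated in its own blocks.
hinted = [["hot"] * BLOCK_PAGES, ["cold"] * BLOCK_PAGES]

assert gc_copies(mixed) == 4   # every cold page gets rewritten during GC
assert gc_copies(hinted) == 0  # erasing the all-hot block relocates nothing
```

Isolating streams this way is what lets the drive approach a write amplification factor of 1 for the hinted workloads.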
Silicon Motion's MonTitan Gen 5 Enterprise SSD Platform was announced back in 2022. Since then, Silicon Motion has been touting the flexibility of the platform, allowing its customers to incorporate their own features as part of the customization process. This approach is common in the enterprise space, as we have seen with Marvell's Bravera SC5 SSD controller in the DapuStor SSDs and Microchip's Flashtec controllers in the Longsys FORESEE enterprise SSDs.
At FMS 2024, the company was demonstrating the advantages of flexible data placement by allowing a single QLC SSD based on their MonTitan platform to take part in different stages of the AI data pipeline while maintaining the required quality of service (minimum bandwidth) for each process. The company even has a trademarked name (PerformaShape) for the firmware feature in the controller that allows the isolation of different concurrent SSD accesses (from different stages in the AI data pipeline) to guarantee this QoS. Silicon Motion claims that this scheme will enable its customers to get the maximum write performance possible from QLC SSDs without negatively impacting the performance of other types of accesses.
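Silicon Motion has not published PerformaShape's internals, so the following is only a generic sketch of the underlying idea: give each pipeline stage a guaranteed bandwidth floor, then divide whatever headroom remains proportionally. The stage names and figures are hypothetical.

```python
def allocate_bandwidth(total: float, floors: dict[str, float],
                       weights: dict[str, float]) -> dict[str, float]:
    """Grant each stream its guaranteed minimum, then share the surplus by weight."""
    if sum(floors.values()) > total:
        raise ValueError("floors oversubscribe the device")
    surplus = total - sum(floors.values())
    weight_sum = sum(weights.values())
    return {s: floors[s] + surplus * weights[s] / weight_sum for s in floors}

# Hypothetical 14 GB/s QLC drive shared by three AI-pipeline stages.
grants = allocate_bandwidth(
    total=14.0,
    floors={"ingest": 4.0, "training_read": 6.0, "checkpoint": 2.0},
    weights={"ingest": 1.0, "training_read": 2.0, "checkpoint": 1.0},
)
assert grants["training_read"] == 7.0  # 6.0 floor + half of the 2.0 surplus
assert sum(grants.values()) == 14.0    # nothing left unallocated
```

The key property is that a burst on one stream can only eat into the shared surplus, never into another stream's floor - which is what "minimum bandwidth" QoS means in practice.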
Silicon Motion and Phison have market leadership in the client SSD controller market with similar approaches. However, their enterprise SSD controller marketing couldn't be more different. While Phison has gone in for a turnkey solution with their Gen 5 SSD platform (to the extent of not adopting the white label route for this generation, and instead opting to get the SSDs qualified with different cloud service providers themselves), Silicon Motion is opting for a different approach. The flexibility and customization possibilities can make platforms like the MonTitan appeal to flash array vendors.
Samsung this week unveiled its latest process technologies roadmap at the company's Samsung Foundry Forum (SFF) U.S. The new plan covers the evolution of Samsung's 2nm-class production nodes through 2027, including a process technology with backside power delivery, re-emphasizing plans to bring out a 1.4nm-class node in 2027, and introducing a 'high value' 4nm-class manufacturing tech.
Samsung Foundry's key announcements for today are clearly focused on its 2nm-class process technologies, which are set to enter production in 2025 and will span through 2027, when the company's 1.4nm-class production node is set to enter the scene. Samsung is also adding (or rather, renaming) another 2nm-class node to its roadmap with SF2, which was previously disclosed by Samsung as SF3P and aimed at high-performance devices.
"We have refined and improved the SF3P, resulting in what we now refer to as SF2," a Samsung spokesperson told AnandTech. "This enhanced node incorporates various process design improvements, delivering notable power, performance, and area (PPA) benefits."
Samsung Foundry for Leading-Edge Nodes
Announced on June 12, 2024 - Compiled by AnandTech

| HVM Start      | 2023        | 2024 | 2025           | 2026      | 2027             | 2027  |
| Process        | SF3E        | SF3  | SF2 (aka SF3P) | SF2P/SF2X | SF2Z             | SF1.4 |
| FET            | GAAFET (all nodes)                                                        |
| Power Delivery | Frontside (SF3E through SF2P/SF2X)         | Backside (BSPDN) | ?     |
| EUV            | 0.33 NA EUV | ?    | ?              | ?         | ?                | ?     |
This is another example of a major chipmaker rebranding leading-edge fabrication nodes in recent years. Samsung Foundry is not disclosing the specific PPA improvements SF2 brings over SF3P, and for now is only stating in high-level terms that it will be a better-performing node than the originally planned SF3P.
Meanwhile, this week's announcement also includes new information on Samsung's next batch of process nodes, which are planned for 2026 and 2027. In 2026 Samsung will have SF2P, a further refinement of SF2 which incorporates 'faster' yet less dense transistors. That will be followed up in 2027 with SF2Z, which adds backside power delivery to the mix for cleaner, higher-quality power delivery. In particular, Samsung is targeting voltage drop (aka IR drop) here, which is an ongoing concern in chip design.
Finally, SF1.4, a 1.4nm-class node, is on track for 2027 as well. Interestingly, however, it does not appear to feature backside power delivery, which, per current roadmaps, would make Samsung the only foundry not using BSPDN for its first 1.4nm/14Å-class node.
"We have optimized BSPDN and incorporated it for the first time in the SF2Z node we announced today," the spokesperso... Semiconductors
Kioxia's booth at FMS 2024 was a busy one, with multiple technology demonstrations keeping visitors occupied. A walk-through of the BiCS 8 manufacturing process was the first to grab my attention. Kioxia and Western Digital announced the sampling of BiCS 8 in March 2023. We had touched briefly upon its CMOS Bonded Array (CBA) scheme in our coverage of Kioxia's 2Tb QLC NAND device and of Western Digital's 128 TB QLC enterprise SSD proof-of-concept demonstration. At Kioxia's booth, we got more insights.
Traditionally, fabrication of flash chips involved placing the associated logic circuitry (CMOS process) around the periphery of the flash array. The process then moved on to putting the CMOS under the cell array, but wafer development remained serialized, with the CMOS logic fabricated first and the cell array built on top of it. This has its challenges, because the cell array requires a high-temperature processing step (needed for higher reliability) that can be detrimental to the health of the already-fabricated CMOS logic. Thanks to recent advancements in wafer bonding techniques, the new CBA process allows the CMOS wafer and the cell array wafer to be processed independently in parallel and then bonded together, as shown in the models above.
The BiCS 8 3D NAND incorporates 218 layers, compared to 112 layers in BiCS 5 and 162 layers in BiCS 6. The company decided to skip over BiCS 7 (or, rather, it was probably a short-lived generation meant as an internal test vehicle). The generation retains the four-plane charge trap structure of BiCS 6. In its TLC avatar, it is available as a 1 Tbit device. The QLC version is available in two capacities - 1 Tbit and 2 Tbit.
Kioxia also noted that while the number of layers (218) doesn't compare favorably with the latest layer counts from the competition, its lateral scaling / cell shrinkage has enabled it to be competitive in terms of bit density as well as operating speeds (3200 MT/s). For reference, the latest shipping NAND from Micron - the G9 - has 276 layers with a bit density in TLC mode of 21 Gbit/mm2, and operates at up to 3600 MT/s. However, its 232L NAND operates only up to 2400 MT/s and has a bit density of 14.6 Gbit/mm2.
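The lateral-scaling point can be sanity-checked from the figures quoted above: dividing areal density by layer count shows how much of Micron's generational gain is lateral shrink rather than extra layers, which is the same lever Kioxia credits for BiCS 8's competitiveness.

```python
def density_per_layer(gbit_per_mm2: float, layers: int) -> float:
    """Areal bit density contributed by each layer of the 3D NAND stack."""
    return gbit_per_mm2 / layers

g9   = density_per_layer(21.0, 276)   # Micron G9, TLC mode
g232 = density_per_layer(14.6, 232)   # Micron 232L, TLC mode

# G9 packs roughly 21% more bits per mm^2 per layer than the 232L part,
# so a meaningful share of its density gain is lateral, not vertical.
assert g9 > g232
assert round(g9 / g232 - 1, 2) == 0.21
```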
It must be noted that the CBA hybrid bonding process has advantages over the current processes used by other vendors - including Micron's CMOS under Array (CuA) and SK hynix's 4D PUC (Periphery Under Cell), both developed in the late 2010s. It is expected that other NAND vendors will also eventually move to some variant of the hybrid bonding scheme used by Kioxia.