Microchip recently announced the availability of their second PCIe Gen 5 enterprise SSD controller - the Flashtec 5016. Like the 4016, this is also a 16-channel controller, but there are some key updates:
Microchip's enterprise SSD controllers give SSD vendors a high degree of flexibility, pairing significant compute horsepower with dedicated accelerators. The 5016 includes Cortex-A53 cores on which SSD vendors can run custom applications relevant to SSD management; compared to the Gen 4 controllers, the CPU cluster gains two additional cores. The DRAM subsystem includes ECC support (both out-of-band and inline, as desired by the SSD vendor).
At FMS 2024, the company demonstrated an application of the neural network engines embedded in the Gen 5 controllers. Controllers usually employ a 'read-retry' operation with altered read-out voltages for flash reads that do not complete successfully. Microchip implemented a machine learning approach that uses the NN engines in the controller to determine the read-out voltage based on the health history of the NAND block. This approach delivers tangible benefits in read latency and power consumption, thanks to fewer errors on the first read attempt.
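Microchip has not published details of the model running on those NN engines, so the following is only a minimal sketch of the general idea: a lightweight regressor maps a block's health history (program/erase cycles, retention time, read-disturb count, temperature) to a predicted read-reference voltage offset that the controller can try on the first read. The feature set, units, and training data below are hypothetical and are not Microchip's implementation.

```python
# Illustrative sketch only: predict a first-attempt read-reference voltage
# offset from NAND block health history. Features, model, and data are
# hypothetical; Microchip's NN engines and firmware are not public.
import numpy as np

class ReadVoltagePredictor:
    """Tiny linear model standing in for the controller's NN engine."""
    def __init__(self, n_features: int):
        self.w = np.zeros(n_features)
        self.b = 0.0

    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        # Least-squares fit of voltage offsets observed after successful
        # read-retry sweeps (the "ground truth" the firmware can collect).
        A = np.hstack([X, np.ones((X.shape[0], 1))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        self.w, self.b = coef[:-1], coef[-1]

    def predict(self, features: np.ndarray) -> float:
        # Predicted offset (in millivolts) to apply on the first read.
        return float(features @ self.w + self.b)

# Hypothetical per-block health features:
# [P/E cycles, retention hours, read-disturb count, temperature delta]
history_X = np.array([[1500, 240, 8000, 5.0],
                      [3000, 720, 20000, 12.0],
                      [500, 24, 1000, 2.0]])
history_y = np.array([-35.0, -80.0, -10.0])   # offsets found by retry sweeps

model = ReadVoltagePredictor(n_features=4)
model.fit(history_X, history_y)
print(model.predict(np.array([2000, 400, 12000, 8.0])))
```

In a real controller, the reference offsets would come from the retry sweeps the firmware already performs, so a model like this could be refreshed as the NAND ages.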
The 4016 and 5016 come with a single-chip root-of-trust implementation for hardware security. A secure boot process with dual-signature authentication ensures that the controller firmware is not maliciously altered in the field. The company also highlighted the advantages of its controllers' implementations of SR-IOV, flexible data placement, and zoned namespaces, along with its 'credit engine' scheme for multi-tenant cloud workloads. These aspects were also covered in other demonstrations at the show.
Microchip's press release included quotes from the usual NAND vendors - Solidigm, Kioxia, and Micron. On the customer front, Longsys has been using Flashtec controllers in its enterprise offerings along with YMTC NAND, and it is likely that this collaboration will extend to the new 5016 controller.
Kioxia Details BiCS 8 NAND at FMS 2024: 218 Layers With Superior Scaling

Kioxia's booth at FMS 2024 was a busy one, with multiple technology demonstrations keeping visitors occupied. A walk-through of the BiCS 8 manufacturing process was the first to grab my attention. Kioxia and Western Digital announced the sampling of BiCS 8 in March 2023. We had touched briefly upon its CMOS Bonded Array (CBA) scheme in our coverage of Kioxia's 2 Tb QLC NAND device and of Western Digital's 128 TB QLC enterprise SSD proof-of-concept demonstration. At Kioxia's booth, we got more insights.

Traditionally, fabrication of flash chips involved placing the associated logic circuitry (the CMOS process) around the periphery of the flash array. The industry then moved to putting the CMOS under the cell array, but the wafer development process remained serialized, with the CMOS logic fabricated first and the cell array built on top. This poses a challenge: the cell array requires a high-temperature processing step to ensure reliability, and that step can be detrimental to the health of the underlying CMOS logic. Thanks to recent advancements in wafer bonding techniques, the new CBA process allows the CMOS wafer and the cell array wafer to be processed independently in parallel and then bonded together, as shown in the models above.

The BiCS 8 3D NAND incorporates 218 layers, compared to 112 layers in BiCS 5 and 162 layers in BiCS 6. The company decided to skip over BiCS 7 (or, rather, it was probably a short-lived generation meant as an internal test vehicle). The generation retains the four-plane charge trap structure of BiCS 6. In its TLC avatar, it is available as a 1 Tbit device. The QLC version is available in two capacities - 1 Tbit and 2 Tbit.

Kioxia also noted that while the layer count (218) doesn't compare favorably with the latest from the competition, its lateral scaling / cell shrinkage has kept it competitive in terms of both bit density and operating speed (3200 MT/s). For reference, the latest shipping NAND from Micron - the G9 - has 276 layers with a TLC bit density of 21 Gbit/mm2 and operates at up to 3600 MT/s, while Micron's 232L NAND operates only up to 2400 MT/s with a bit density of 14.6 Gbit/mm2.

It must be noted that the CBA hybrid bonding process has advantages over the approaches currently used by other vendors, including Micron's CMOS under Array (CuA) and SK hynix's 4D PUC (periphery under cell) developed in the late 2010s. It is expected that other NAND vendors will also eventually move to some variant of the hybrid bonding scheme used by Kioxia.
Kioxia Demonstrates RAID Offload Scheme for NVMe Drives

At FMS 2024, Kioxia had a proof-of-concept demonstration of their proposed RAID offload methodology for enterprise SSDs. The impetus for this is quite clear: as SSDs get faster with each generation, RAID arrays have a major problem maintaining (and scaling up) performance. Even in cases where the RAID operations are handled by a dedicated RAID card, a simple write request in, say, a RAID 5 array involves two reads and two writes to different drives. In cases where there is no hardware acceleration, the data from the reads needs to travel all the way back to the CPU and main memory for further processing before the writes can be done.

Kioxia has proposed using the PCIe direct memory access feature along with the SSD controller's controller memory buffer (CMB) to avoid moving the data up to the CPU and back. The required parity computation is done by an accelerator block resident within the SSD controller. In Kioxia's PoC implementation, the DMA engine can access the entire host address space (including the peer SSDs' BAR-mapped CMBs), allowing it to receive and transfer data as required from neighboring SSDs on the bus.

Kioxia noted that their offload PoC saw close to a 50% reduction in CPU utilization and upwards of a 90% reduction in system DRAM utilization compared to software RAID done on the CPU. The proposed offload scheme can also handle scrubbing operations without taking up host CPU cycles for the parity computation task.

Kioxia has already taken steps to contribute these features to the NVM Express working group. If accepted, the proposed offload scheme will be part of a standard that could become widely available across multiple SSD vendors.
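To see where those two reads and two writes come from, consider the standard RAID 5 read-modify-write parity update. The snippet below is a generic illustration of that parity math performed on the host; in Kioxia's proposal the same XOR would instead be performed by the accelerator inside the SSD controller, with data staged in the peer drives' CMBs via peer-to-peer DMA rather than in host DRAM.

```python
# Generic RAID 5 read-modify-write parity update (not Kioxia's implementation).
# A small write to one drive requires reading the old data and old parity,
# then writing the new data and the recomputed parity: two reads, two writes.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_parity_update(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    # new_parity = old_parity XOR old_data XOR new_data
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)

# Three-drive example stripe: two data blocks plus one parity block.
d0 = bytes([0x11] * 8)
d1 = bytes([0x22] * 8)
parity = xor_blocks(d0, d1)

new_d0 = bytes([0xAB] * 8)
new_parity = rmw_parity_update(d0, parity, new_d0)   # reads: d0, parity; writes: new_d0, new_parity

assert new_parity == xor_blocks(new_d0, d1)          # stripe remains consistent
```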
Later this year Intel is set to introduce its Xeon 6-branded processors, codenamed Granite Rapids (6x00P) and Sierra Forest (6x00E). With them will come a new slew of server motherboards and pre-built server platforms. On the latter note, this will be the first generation where Intel won't be offering any pre-builts of its own, after selling that business off to MiTAC last year.
To that end, MiTAC and its subsidiary Tyan were at this year's event to demonstrate what they've been up to since acquiring Intel's server business unit, as well as to show off the server platforms they're developing for the Xeon 6 family. Altogether, the companies had two server platforms on display – a compact 2S system, and a larger 2S system with significant expansion capabilities – as well as a pair of single-socket designs from Tyan.
The most basic platform that MiTAC had to show is their TX86-E7148 (Katmai Pass), a half-width 1U system that's the successor to Intel's D50DNP platform. Katmai Pass has two CPU sockets, supports up to 2 TB of DDR5-6400 RDIMMs over 16 slots (8 per CPU), and has two low-profile PCIe 5.0 x16 slots. Like its predecessor, this platform is aimed at mainstream servers that do not need a lot of storage or room to house bulky add-in cards like AI accelerators.
The company's other platform is the TX77A-E7142 (Deer Creek Pass), a considerably more serious offering that replaces Intel's M50FCP platform. This board can house up to 4 TB of DDR5-6400 RDIMMs over 32 slots (16 per CPU with 2DPC), four PCIe 5.0 x16 slots, one PCIe 5.0 x8 slot, two OCP 3.0 slots, and 24 hot-swap U.2 bays. Deer Creek Pass can be used for general-purpose workloads, high-performance storage, and workloads that require GPUs or other special-purpose accelerators.
Meanwhile Tyan had the single-socket Thunder CX GC73A-B5660 on display. That system supports up to 2 TB of DDR5-6400 memory over 16 RDIMMs and offers two PCIe 5.0 x16 slots, one PCIe 4.0 x4 M.2 slot, two OCP 3.0 slots, and 12 hot-swappable U.2 drive bays.
Finally, Tyan's Thunder HX S5662 is an HPC server board specifically designed to house multiple AI accelerators and other large PCIe cards. This board supports one Xeon 6 6700 processor, up to 1 TB of memory over eight DDR5-6400 RDIMMs, and has five traditional PCIe 5.0 x16 slots as well as two PCIe 5.0 x2 M.2 slots for storage.
MiTAC is expected to start shipments of these new Xeon 6 motherboards in the coming months, as Intel rolls out its next-generation datacenter CPUs. Pricing of these platforms is unknown for now, but expect it to be comparable to...
At FMS 2024, the technological requirements that the AI data pipeline places on the storage and memory subsystems took center stage. Both SSD and controller vendors had various demonstrations touting their suitability for different stages of the AI data pipeline - ingestion, preparation, training, checkpointing, and inference. Vendors like Solidigm have different types of SSDs optimized for different stages of the pipeline. At the same time, controller vendors have taken advantage of one of the features recently introduced in the NVM Express standard - Flexible Data Placement (FDP).
FDP involves the host providing information or hints about where the controller could place incoming write data in order to reduce write amplification. These hints are generated based on specific block sizes advertised by the device. The feature is fully backwards compatible: non-FDP hosts work just as before with FDP-enabled SSDs, and vice versa.
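The NVMe command-level details are beyond the scope of this piece, but the intuition behind why placement hints reduce write amplification can be shown with a toy flash-translation-layer model. The placement identifiers, block size, and write patterns below are purely illustrative and are not taken from any particular SSD's implementation.

```python
# Conceptual model of FDP-style placement hints (not the NVMe command set).
# Writes tagged with the same placement identifier are appended to the same
# open superblock, so data with similar lifetimes tends to be invalidated
# together and garbage collection has to copy fewer still-valid pages.
from collections import defaultdict

BLOCK_PAGES = 64          # illustrative superblock size, in pages

class SimpleFtl:
    def __init__(self):
        self.open_blocks = defaultdict(list)   # placement id -> pages in open block
        self.sealed_blocks = []                # (placement id, pages) once full

    def write(self, placement_id: int, lba: int) -> None:
        pages = self.open_blocks[placement_id]
        pages.append(lba)
        if len(pages) == BLOCK_PAGES:
            self.sealed_blocks.append((placement_id, pages))
            self.open_blocks[placement_id] = []

ftl = SimpleFtl()
# Short-lived log writes and long-lived checkpoint data carry different hints,
# so they never share a superblock.
for lba in range(256):
    ftl.write(placement_id=0, lba=lba)         # hint 0: short-lived data
for lba in range(1000, 1064):
    ftl.write(placement_id=1, lba=lba)         # hint 1: long-lived data
print(len(ftl.sealed_blocks), "sealed blocks")
```

Because each hint fills its own superblock, a later overwrite or deletion of the short-lived data invalidates whole blocks at once, leaving the garbage collector little valid data to relocate.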
Silicon Motion's MonTitan Gen 5 Enterprise SSD Platform was announced back in 2022. Since then, Silicon Motion has been touting the flexibility of the platform, allowing its customers to incorporate their own features as part of the customization process. This approach is common in the enterprise space, as we have seen with Marvell's Bravera SC5 SSD controller in the DapuStor SSDs and Microchip's Flashtec controllers in the Longsys FORESEE enterprise SSDs.
At FMS 2024, the company was demonstrating the advantages of flexible data placement by allowing a single QLC SSD based on their MonTitan platform to take part in different stages of the AI data pipeline while maintaining the required quality of service (minimum bandwidth) for each process. The company even has a trademarked name (PerformaShape) for the firmware feature in the controller that allows the isolation of different concurrent SSD accesses (from different stages in the AI data pipeline) to guarantee this QoS. Silicon Motion claims that this scheme will enable its customers to get the maximum write performance possible from QLC SSDs without negatively impacting the performance of other types of accesses.
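Silicon Motion has not disclosed how PerformaShape is implemented, so the sketch below shows just one generic way to provide per-stream minimum-bandwidth guarantees: a token-bucket arbiter in which each AI-pipeline stage accrues credit in proportion to its guaranteed share of the drive's bandwidth. All names and numbers are assumptions for illustration.

```python
# Generic minimum-bandwidth arbitration sketch; PerformaShape's actual
# algorithm is not public. Each pipeline stage gets a token bucket refilled
# in proportion to its guaranteed share of the drive's bandwidth.
class TokenBucketArbiter:
    def __init__(self, guarantees_mbps: dict, drive_mbps: float):
        assert sum(guarantees_mbps.values()) <= drive_mbps
        self.guarantees = guarantees_mbps
        self.drive_mbps = drive_mbps
        self.tokens = {name: 0.0 for name in guarantees_mbps}

    def refill(self, dt_s: float) -> None:
        for name, mbps in self.guarantees.items():
            self.tokens[name] += mbps * dt_s      # MB of credit accrued

    def admit(self, name: str, request_mb: float) -> bool:
        # A request proceeds while its stream still has credit; surplus drive
        # bandwidth can be handed out separately once guarantees are met.
        if self.tokens[name] >= request_mb:
            self.tokens[name] -= request_mb
            return True
        return False

arb = TokenBucketArbiter({"ingest": 1000, "training_reads": 4000, "checkpoint": 2000},
                         drive_mbps=10000)
arb.refill(dt_s=0.001)
print(arb.admit("checkpoint", request_mb=1.0))
```

Surplus bandwidth beyond the guarantees could then be distributed among whichever streams are backlogged, which is how a scheme along these lines can keep QLC write throughput high without starving concurrent reads.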
Silicon Motion and Phison lead the client SSD controller market with similar approaches. However, their enterprise SSD controller go-to-market strategies couldn't be more different. While Phison has gone in for a turnkey solution with their Gen 5 SSD platform (to the extent of not adopting the white-label route for this generation, and instead opting to get the SSDs qualified with different cloud service providers themselves), Silicon Motion is opting for a different approach. The flexibility and customization possibilities can make platforms like the MonTitan appeal to flash array vendors.
Samsung this week unveiled its latest process technology roadmap at the company's Samsung Foundry Forum (SFF) U.S. The new plan covers the evolution of Samsung's 2nm-class production nodes through 2027, including a process technology with backside power delivery, re-emphasizes plans to bring out a 1.4nm-class node in 2027, and introduces a 'high value' 4nm-class manufacturing technology.
Samsung Foundry's key announcements for today are clearly focused on its 2nm-class process technologies, which are set to enter production in 2025 and will span through 2027, when the company's 1.4nm-class production node is set to enter the scene. Samsung is also adding (or rather, renaming) another 2nm-class node on its roadmap with SF2, which was previously disclosed by Samsung as SF3P and is aimed at high-performance devices.
"We have refined and improved the SF3P, resulting in what we now refer to as SF2," a Samsung spokesperson told AnandTech. "This enhanced node incorporates various process design improvements, delivering notable power, performance, and area (PPA) benefits."
Samsung Foundry Roadmap for Leading-Edge Nodes (announced June 12, 2024; compiled by AnandTech)

| Process | SF3E | SF3 | SF2 (aka SF3P) | SF2P/SF2X | SF2Z | SF1.4 |
| HVM Start | 2023 | 2024 | 2025 | 2026 | 2027 | 2027 |
| FET | GAAFET | GAAFET | GAAFET | GAAFET | GAAFET | GAAFET |
| Power Delivery | Frontside | Frontside | Frontside | Frontside | Backside (BSPDN) | ? |
| EUV | 0.33 NA EUV | 0.33 NA EUV | ? | ? | ? | ? |
This is another example of a major chipmaker rebranding its leading-edge fabrication nodes in recent years. Samsung Foundry is not disclosing the specific PPA improvements SF2 offers over SF3P, and for now is only stating in high-level terms that it will be a better-performing node than the originally planned SF3P.
Meanwhile, this week's announcement also includes new information on Samsung's next batch of process nodes, which are planned for 2026 and 2027. In 2026 Samsung will have SF2P, a further refinement of SF2 that incorporates 'faster' yet less dense transistors. That will be followed up in 2027 with SF2Z, which adds backside power delivery to the mix for cleaner, more consistent power delivery. In particular, Samsung is targeting voltage drop (aka IR drop) here, which is an ongoing concern in chip design.
Finally, SF1.4, a 1.4nm-class node, is on track for 2027 as well. Interestingly, however, it does not appear to feature backside power delivery, which, per current roadmaps, would make Samsung the only foundry not using BSPDN for its first 1.4nm/14Å-class node.
"We have optimized BSPDN and incorporated it for the first time in the SF2Z node we announced today," the spokesperso... Semiconductors