
Accelerate SSD Software Development And System Validation

Start development early on the complex firmware required by next-generation SSDs.


The amount of data that we consume and produce in our daily lives continues to grow exponentially. It has become the norm to stream movies and TV series from Netflix, as well as to upload our own videos to YouTube. On top of this, major shifts in automotive (ADAS, autonomous driving) and surveillance are boosting the amount of data exchanged every second.

With this growth in data there has been an equivalent growth in data storage needs. A smartphone with 16GB of storage is insufficient to hold our games, music and videos. Many smart devices around us require some level of data storage. Our cars now need storage for maps, music and even video recording (to be used in case of an accident). And these are just the visible, consumer devices. The demand for storage is even bigger in the enterprise space, where all the services we benefit from require vast amounts of capacity, measured in terabytes and petabytes (1 petabyte = 1,024 terabytes).

Not only do we need to store a lot of data, we also want instant access to it. One key metric is the speed at which data can be accessed on a particular storage device, whether that is the startup time of the device or the read latency. Because of this there has been a steady move toward Solid State Drives (SSDs), as they offer almost instantaneous startup and very fast random access times. SSDs have other advantages as well: they are practically silent because they have no moving parts, unlike Hard Disk Drives (HDDs), which must spin a disk to read the data.

While SSDs offer many benefits, these come at the expense of higher complexity, both in the hardware and especially in the software needed to make the devices work. SSDs require complex firmware to deal with flash memory wear and corruption.

Wikipedia defines wear leveling as a technique for prolonging the service life of some kinds of erasable computer storage media, such as the flash memory used in SSDs. A flash memory storage system with no wear leveling will not last very long once data is repeatedly written to the flash. Without wear leveling, the underlying flash controller must permanently assign the logical addresses from the operating system (OS) to fixed physical addresses of the flash memory. This means that every write to a previously written block must first be read, modified, erased and re-written to the same location. This approach is very time-consuming, and frequently written locations wear out quickly while other locations are not used at all. Once a few blocks reach their end of life, the whole device becomes inoperable.
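To make the idea concrete, here is a minimal sketch of dynamic wear leveling in C++. It assumes a hypothetical flash device with a handful of physical blocks and omits the actual programming of data; the point is only to show how remapping each logical write to the least-worn free block spreads erase cycles across the whole device instead of hammering one location.

```cpp
// Minimal dynamic wear-leveling sketch for a hypothetical 8-block flash.
// Data programming itself is omitted; only the logical-to-physical
// remapping and the erase-count bookkeeping are shown.
#include <array>
#include <cstdint>
#include <cstdio>
#include <limits>

constexpr int NUM_BLOCKS = 8;
constexpr int UNMAPPED   = -1;

struct PhysBlock {
    uint32_t erase_count = 0;   // how often this physical block has been erased
    bool     in_use      = false;
};

std::array<PhysBlock, NUM_BLOCKS> phys;   // physical flash blocks
std::array<int, NUM_BLOCKS>       l2p;    // logical-to-physical block map

// Pick the free physical block that has been erased the fewest times.
int pick_least_worn_free_block() {
    int best = UNMAPPED;
    uint32_t best_count = std::numeric_limits<uint32_t>::max();
    for (int i = 0; i < NUM_BLOCKS; ++i) {
        if (!phys[i].in_use && phys[i].erase_count < best_count) {
            best = i;
            best_count = phys[i].erase_count;
        }
    }
    return best;
}

// Write a logical block: remap it to a fresh physical block and
// erase/release the previously mapped copy.
void write_logical_block(int logical) {
    int target = pick_least_worn_free_block();
    if (target == UNMAPPED) return;       // no free block: real firmware would garbage-collect here

    int old = l2p[logical];
    if (old != UNMAPPED) {                // release and erase the old copy
        phys[old].in_use = false;
        phys[old].erase_count++;
    }
    phys[target].in_use = true;           // program the new data here (omitted)
    l2p[logical] = target;
}

int main() {
    l2p.fill(UNMAPPED);
    for (int i = 0; i < 100; ++i)
        write_logical_block(0);           // hammer a single logical block
    for (int i = 0; i < NUM_BLOCKS; ++i)
        std::printf("physical block %d erased %u times\n",
                    i, static_cast<unsigned>(phys[i].erase_count));
}
```

Hammering a single logical block 100 times in this sketch distributes the erases roughly evenly across all eight physical blocks, which is exactly what a controller without wear leveling cannot achieve.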

Flash corruption is explained as follows. Any system that contains routines to write or erase flash memory carries some risk that those routines will execute while the CPU is operating outside its defined range of VDD, temperature, or system clock frequency. The goal is to minimize this risk by enabling flash writes and erases as rarely as possible (only one place in the code writes to flash; only one place in the code erases flash) and by ensuring that the CPU is always operating in a defined mode.
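A minimal sketch of how firmware typically enforces these two rules is shown below. All of the helpers (read_vdd_mv(), read_temp_c(), read_sysclk_hz(), flash_unlock(), flash_program_word(), flash_lock()) and the numeric limits are hypothetical placeholders rather than any real vendor API; the structure simply illustrates a single, guarded flash-write entry point.

```cpp
#include <cstdint>
#include <cstdio>

// Defined operating range for safe flash programming (values are
// illustrative, not taken from any datasheet).
constexpr uint32_t VDD_MIN_MV      = 2700;
constexpr uint32_t VDD_MAX_MV      = 3600;
constexpr int32_t  TEMP_MIN_C      = -40;
constexpr int32_t  TEMP_MAX_C      = 85;
constexpr uint32_t CLK_EXPECTED_HZ = 48000000;

// Stubbed hardware-access helpers so the sketch compiles standalone;
// real firmware would read ADC, temperature sensor, clock and flash
// controller registers here.
uint32_t read_vdd_mv()    { return 3300; }
int32_t  read_temp_c()    { return 25; }
uint32_t read_sysclk_hz() { return CLK_EXPECTED_HZ; }
void     flash_unlock()   {}
void     flash_program_word(uintptr_t /*addr*/, uint32_t /*data*/) {}
void     flash_lock()     {}

static bool cpu_in_defined_operating_range() {
    const uint32_t vdd = read_vdd_mv();
    const int32_t  t   = read_temp_c();
    return vdd >= VDD_MIN_MV && vdd <= VDD_MAX_MV &&
           t   >= TEMP_MIN_C && t   <= TEMP_MAX_C &&
           read_sysclk_hz() == CLK_EXPECTED_HZ;
}

// The single flash-write entry point in the firmware: flash stays locked
// except for the shortest possible window, and is only touched after the
// operating conditions have been checked.
bool flash_write_word(uintptr_t addr, uint32_t data) {
    if (!cpu_in_defined_operating_range())
        return false;                 // refuse to write outside the defined range
    flash_unlock();
    flash_program_word(addr, data);
    flash_lock();
    return true;
}

int main() {
    const bool ok = flash_write_word(0x0800F000u, 0xDEADBEEFu);
    std::printf("flash write %s\n", ok ? "performed" : "refused");
}
```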

On top of dealing with flash memory wear and corruption, the SSD software has to support a wide variety of host interfaces, including PCIe, NVMe, SAS and SATA. And the overall SSD SoC, hardware and software together, has to be validated against many benchmarks across different application domains.

With ever-increasing pressure to bring new SoCs to market with higher storage capacity, faster performance and lower cost, storage semiconductor companies have to start firmware development as early as possible in their next-generation SSD design cycle.

Many storage semiconductor companies have embraced an end-to-end pre-silicon software bring-up and controller SoC validation methodology that leverages virtual prototyping, emulation and FPGA-based prototyping. Instead of waiting for a test chip of the SoC to arrive before starting software development, SystemC modeling and early RTL mapping onto emulation and FPGA prototyping platforms are used to pull in (or shift left) the software development. While each of these techniques can be used independently, the maximum benefit is achieved by deploying a methodology that shares models, interfaces and analysis tools across the different techniques. Let's explore what an end-to-end software development and system validation flow should look like.

Virtual prototyping provides the earliest executable on which to start firmware development. Because virtual prototypes are based on SystemC models, there is no dependency on RTL availability or on having a fully verified RTL version of the SoC under design. Moreover, the virtual environment enables fast performance, excellent debug visibility and control, and fully deterministic execution that supports fault injection. This gives firmware teams an ideal environment in which to start developing and testing their software.
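As an illustration of why this works without any RTL, here is a minimal, loosely timed SystemC sketch of a hypothetical NAND die model together with a firmware-style stimulus thread. The module name, page numbers and the 50-microsecond read latency are invented for the example and do not correspond to any real part or commercial virtual prototyping tool.

```cpp
// Loosely timed SystemC sketch of a hypothetical NAND die model plus a
// firmware-style stimulus thread. No RTL is involved anywhere.
#include <systemc.h>
#include <iostream>

SC_MODULE(NandDieModel) {
    sc_event read_requested;
    unsigned requested_page = 0;

    // Firmware-side stimulus: issue two page reads, as early test firmware would.
    void firmware_stimulus() {
        wait(10, SC_US);
        requested_page = 42; read_requested.notify();
        wait(1, SC_MS);
        requested_page = 7;  read_requested.notify();
    }

    // NAND behavior: serve each read after an approximate array read time (tR).
    void serve_reads() {
        while (true) {
            wait(read_requested);
            unsigned page = requested_page;
            wait(50, SC_US);   // invented tR of the modeled die
            std::cout << sc_time_stamp() << ": page " << page
                      << " ready in data register" << std::endl;
        }
    }

    SC_CTOR(NandDieModel) {
        SC_THREAD(firmware_stimulus);
        SC_THREAD(serve_reads);
    }
};

int sc_main(int, char*[]) {
    NandDieModel die("nand_die");
    sc_start(5, SC_MS);        // run the virtual model for a few milliseconds
    return 0;
}
```

Because the model is just C++ compiled against the SystemC library, firmware engineers can run, debug and modify it long before RTL exists, which is the essence of the shift-left argument above.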

The next stage in the end-to-end development and validation process is to load an early version of the SSD RTL onto an emulator. Through hybrid emulation, in which a virtual prototype is connected to the emulator, it is possible to leverage existing models of the design (e.g., of the control processor) and focus first on verifying and validating the custom IP and SoC pieces on the emulator. To truly benefit from a virtual-prototyping-to-emulation methodology, it is important that the emulator can work with virtual interfaces. This enables easier bring-up, faster performance, better control and visibility, and a smoother transition from virtual prototype to emulator. Smart transactor technology that is integrated with the hardware and software debuggers enables fast memory profiling to explore and optimize system performance. To help SSD developers, it is important to visualize the embedded software function stack, register programming and ONFI bus performance statistics, and to enable easy correlation between them.

Eventually, it is important to validate the full SSD hardware and software in the context of real-world interfaces. As a first step in the validation activities, the emulation transactors can be replaced with speed adaptors for the targeted protocols. Reusing these speed adaptors on FPGA-based prototypes eases the transition to the last important step in pre-silicon system validation. Once the prototype is up and running with these speed adaptors, it can be optimized for performance and run in real time using dedicated daughter boards for the actual interfaces. This allows full validation of the SSD hardware and software and enables stress testing under conditions similar to the actual deployment targets.

While SSD development will remain quite an art to perfect, and something we gratefully benefit from in our electronic devices and enterprise deployments, the entire software development, verification and validation effort has been made easier with a carefully selected end-to-end pre-silicon methodology that leverages virtual prototyping, emulation and FPGA-based prototyping.

To learn more about one particular part of the methodology discussed in this blog, I recommend attending the upcoming SNUG presentation by ChunHok Ho of SK Hynix. He will explain the key prototyping requirements for modern SSD designs and the solution they use to deliver high-performance multi-FPGA systems to their firmware developers, enabling software testing with real-world interfaces while still offering powerful debug techniques to resolve hardware/software issues.

For more information about this event, please go to: https://event.synopsys.com/ehome/396238/agenda/?&t=ef520007218fcd08eac8a4c15f3d7095


