Is that a tier in your eye – or is your EDA storage obsolete?

We’ve all come to expect the data on our corporate laptops and workstations — e-mails, schedules, documents, music and videos — to be backed up automatically. Some less frequently accessed data, like archived e-mail, isn’t kept locally, to save disk space. When you access these files, you find they’re slower to open. If the archive is very old, say a year or more, you might even have to ask IT to “restore” it from tape before you can open it. In the storage world, this process of moving data between different types of storage is called data tiering, and it’s done to optimize performance and cost. Since ASIC/SoC design is all about turnaround time, time-to-market and shrinking budgets, it’s important to know how tiering impacts your EDA tool flow and what you can do to influence it.

In most enterprises there are multiple levels of tiering, each offering a different capacity/performance/cost ratio. The highest performance tier, typically referred to as Tier “0”, is usually reserved for the most critical applications because it is the most expensive and has the lowest storage density. It is complemented by progressively lower performance, higher density (and lower cost) tiers (1, 2, 3, etc.). Tiers are generally built from different types of drives. For example, a storage cluster might include Tier 0 storage made of very high-performance, low-capacity solid-state drives (SSDs); Tier 1 storage made of high-capacity, high-performance Serial-Attached SCSI (SAS) drives; and Tier 2 storage consisting of high-capacity Serial ATA (SATA) drives.
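The tradeoff described above can be sketched as a simple data structure. This is purely illustrative: the latency and cost figures below are hypothetical placeholders, not specifications from any vendor.

```python
# Illustrative sketch of a storage tier hierarchy. The relative latency and
# cost numbers are hypothetical, chosen only to show the tradeoff.
from dataclasses import dataclass


@dataclass
class StorageTier:
    name: str
    media: str
    relative_latency: float      # lower is faster (Tier 0 = 1.0)
    relative_cost_per_tb: float  # higher is more expensive

TIERS = [
    StorageTier("Tier 0", "SSD",      relative_latency=1.0, relative_cost_per_tb=10.0),
    StorageTier("Tier 1", "SAS HDD",  relative_latency=4.0, relative_cost_per_tb=3.0),
    StorageTier("Tier 2", "SATA HDD", relative_latency=8.0, relative_cost_per_tb=1.0),
]

# The defining property of a tiered hierarchy: each faster tier costs more
# per terabyte than the one below it.
assert all(a.relative_cost_per_tb > b.relative_cost_per_tb
           for a, b in zip(TIERS, TIERS[1:]))
```

The point of the structure is that no single tier wins on both axes, which is exactly why data has to move between tiers as its access pattern changes.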

While ideally all EDA projects would be run on Tier 0 storage (if space is available), it is highly desirable to move to lower cost tiers whenever possible to conserve budget. Often this is done after a project has gone into production and design teams have moved on to the next project. This isn’t always the case, however, especially if tiering is managed manually. (Surprisingly, many semiconductor design companies today have deployed enterprise storage solutions that don’t support automated tiering.)

Given the complexities and tight schedules involved in today’s semiconductor designs, it is not uncommon to find and fix a bug only a few weeks away from tape out. When this happens, sometimes you need to urgently allocate Tier-0 storage space in order to run last-minute regressions. If Tier-0 space is being managed manually and space is limited, you may have to wait for IT to move a different project’s data around before they can get to you. From a management perspective, this is even more painful when it’s your old data, because you’ve been paying a premium to store it there unnecessarily!

The opposite scenario is also common: a project that’s already in production has had its data moved to lower cost storage to save budget. Later a critical problem is discovered that needs to be debugged. In this scenario, do you try to run your EDA tools using the slower storage, or wait for IT to move your data to Tier-0 storage and benefit from reduced simulation turnaround times? It depends on how long the transition takes. If someone else’s project data needs to be moved first, the whole process becomes longer and less predictable.

While it may seem difficult to believe that tiering is still managed manually, the truth is that most EDA tool flows today use storage platforms that don’t support automated tiering. That could be due, at least in part, to their “scale-up” architecture, which tends to create “storage silos” where each volume (or tier) of data is managed individually (and manually). Solutions such as EMC Isilon use a more modern “scale-out” architecture that lends itself better to auto-tiering. Isilon, for example, features SmartPools, which can seamlessly auto-tier EDA data – minimizing EDA turnaround time when you need it and reducing cost when you don’t.

For EDA teams facing uncertain budgets and shrinking schedules, the benefits of automated tiering can be significant. With Isilon, for example, you can configure your project in advance to be allocated the fastest storage tier during simulation regressions (when you need performance); then, at some point after tape out (e.g., six months), your project data will move to a lower cost, less performance-critical tier. Eventually, while you’re sitting on a beach enjoying your production bonus, Isilon will move your data to an even lower tier for long-term storage – saving your team even more money. And if later, after the rum has worn off, you decide to review your RTL – maybe for reuse on a future project – Isilon will move that data back to a faster tier, leaving the rest available at any time, but on lower cost storage. So the next time you get your quarterly storage bill from IT, ask yourself: “what’s lurking behind that compute farm – and does it support auto-tiering?”
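The lifecycle above amounts to a policy that picks a tier from a project’s age and recent activity. The sketch below is a hypothetical illustration of that logic, not the SmartPools API; real auto-tiering engines apply rules like these transparently at the filesystem level, and the thresholds (180 days past tape out, 30 days since last access) are made-up examples.

```python
# Hypothetical tiering policy sketch -- not a real SmartPools rule.
# Tier 0 = fastest/most expensive, Tier 2 = slowest/cheapest.
from datetime import date, timedelta
from typing import Optional


def choose_tier(last_accessed: date, tape_out: Optional[date], today: date) -> int:
    """Pick a storage tier for a project's data (0 = fastest)."""
    if tape_out is None or today - tape_out < timedelta(days=180):
        return 0  # active design: keep on the fastest tier for regressions
    if today - last_accessed < timedelta(days=30):
        return 1  # old project being debugged or reviewed again: promote
    return 2      # cold data: park on the cheapest tier for long-term storage

today = date(2024, 6, 1)
# An active, pre-tape-out project stays on Tier 0.
assert choose_tier(last_accessed=today, tape_out=None, today=today) == 0
# A year-old project nobody has touched sinks to Tier 2.
assert choose_tier(date(2023, 1, 1), date(2023, 1, 1), today) == 2
```

The value of automation is that these rules run continuously and per-file, so promotions and demotions happen without anyone filing a ticket with IT.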

About the Author: Lawrence Vivolo
