Panasas launches storage tiering by size instead of usage



Dynamic Data Acceleration moves data to media depending on file size, and claims a performance advantage over rivals because it gets the best out of flash and spinning disk hard drives

By Antony Adshead

Published: 18 Aug 2020 10:45

Tiered storage means matching data to the cost – and therefore the performance – of media. So, your most critical data should go on super-fast solid state. Shouldn't it?

Not according to Panasas, which recently added tiering based on file size to its PanFS scale-out NAS.

Panasas’s Dynamic Data Acceleration is a move aimed at providing customers with storage to suit a wide variety of HPC (high-performance computing) and AI/ML workloads by exploiting the speed of SSDs for small files and the massed throughput of HDDs for large files.

Dynamic Data Acceleration tiers data to different media within the storage system, but not by usage characteristics. Panasas claims its tiering by file size delivers a 2x performance advantage, in GBps terms, over file system competitors BeeGFS, Lustre and IBM’s GPFS/Spectrum Scale.

It all sounds rather counter-intuitive, because you want your most performance-hungry data on your most performant storage, don’t you?

Well, yes, but Panasas is convinced that tiering by file size is a better way of achieving that.

What happens in Dynamic Data Acceleration is that, on ingest, all metadata is placed on super-fast NVDIMM. Meanwhile, small files are routed to file system storage on low-latency, high-bandwidth solid-state drives (SSDs) and larger files head to low-cost, high-capacity spinning disk HDDs. “We think our approach is better than tiering by temperature [ie, data usage],” said Curtis Anderson, senior system architect at Panasas.

The core idea is that file size is the key variable in what’s required of storage for HPC and AI workloads. In other words, at small file sizes random I/O dominates and delivering IOPS via SSD is what is needed. When file sizes get bigger it’s all about sequential access, with bandwidth supplied by multiple hard drives in the Panasas parallel file system, PanFS.
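As a rough sketch of the routing idea described above – note that the size cutoff, the tier names and the function itself are illustrative assumptions, not Panasas’s actual implementation, whose threshold is not public:

```python
# Illustrative sketch of size-based tier routing, NOT Panasas code.
# The 1 MiB cutoff is an assumed value for illustration only.
SMALL_FILE_THRESHOLD = 1 * 1024 * 1024  # bytes; real cutoff not public

def route_write(file_size: int) -> str:
    """Pick a data tier by file size; metadata always lands on NVDIMM."""
    if file_size < SMALL_FILE_THRESHOLD:
        return "ssd"  # low-latency tier serving small-file IOPS
    return "hdd"      # high-capacity spinning disk for sequential bandwidth

# Usage: a 4 KiB file goes to flash, a 100 MiB file to spinning disk
assert route_write(4 * 1024) == "ssd"
assert route_write(100 * 1024 * 1024) == "hdd"
```

The point of the design is that the decision needs only the file size, a property known at write time, rather than access history gathered over days or weeks.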

“With traditional data temperature-based tiering it can be complicated, with the customer needing to manage the tiers and the movement of data between them. You also end up with hot data on very performant media and cold data on slower media,” said Anderson. “So, what you can end up with is inconsistent performance if you haven’t run a particular application for a week, for example.

“If we base tiering on size then HDDs are always higher-performing, because they are being used in the best way possible, delivering 180MBps each and doing what they were designed to do,” he added. “The HDDs contribute to performance and aren’t isolated in the cold tier.”

Meanwhile, Panasas SSDs deliver 500MBps but are targeted at delivering IOPS rather than bandwidth.
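The arithmetic behind the claim can be sketched from the figures quoted in the article – 180MBps per HDD, 500MBps per SSD, and six HDDs per node (mentioned further below); treating these as the only inputs is a back-of-envelope assumption:

```python
# Back-of-envelope aggregate sequential bandwidth for one node,
# using only the per-device figures quoted in the article.
HDD_MBPS = 180   # per hard drive, per Anderson
SSD_MBPS = 500   # per Panasas SSD, per the article
HDD_COUNT = 6    # six HDDs per ActiveStor Ultra node

hdd_aggregate = HDD_COUNT * HDD_MBPS  # striped sequential bandwidth
print(hdd_aggregate)  # 1080 -> six striped HDDs out-stream one 500MBps SSD
```

This is why keeping large sequential files on striped hard drives, rather than parking them in an idle cold tier, adds to overall throughput.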

Another temperature-based tiering problem identified by Panasas is that you mostly need your hot data storage tier to be as large as your working set. If it isn’t, you will probably have to wait for the cold tier.

Dynamic Data Acceleration comes in the PanFS parallel file system run by Panasas’s ActiveStor Ultra scale-out NAS nodes. These come with six HDDs, in sizes that can be specified between 4TB and 16TB, with customer-sizeable SSD, plus an NVMe tier and NVDIMM storage. Besides the new tiering functionality, that configuration also introduces more storage media choices than was previously the case.

Key use cases targeted are HPC and AI/ML, where workloads are expected to be many and varied. The idea is that tiering by size will result in predictable performance across these workloads.
