
Data storage: where big means nimbler

Supplier News

Storage capacity equals performance for big miners with lots of data to back up and process quickly. Frank Noakes reports for Australian Mining

The crucial IT issue facing miners is no longer so much the sheer quantity of data that needs storing as the critical importance of that data: its availability and accessibility.

This is particularly the case for large-scale miners in the high performance computing (HPC) sector: organisations increasingly defined by their need to extract the most value from their super-abundant data.

Typically this means, among other things, studying geophysical information where large pools of data are sliced and diced this way and that to find patterns that may suggest where minerals lie.

Data storage specialist Network Appliance's Australian marketing director Mark Heers tells Australian Mining that HPC is a little different from normal computing.

“The most important thing for a bank or insurance company is availability of the data,” Heers says.

“If I go online to do my banking, that data must be there at that point.

“However, for those in the HPC sector, the most important thing is performance. These organisations need enough oomph to run a check over a large chunk of data that takes hours or days rather than weeks to process. They have insatiable needs for higher storage system performance.”

Heers says a speedy response means huge time savings in extracting value from the data and in time to market.

NetApp has just released the Data ONTAP GX operating system, purpose-built for HPC applications where greater throughput and flexibility are needed alongside simplicity and reliability.

“To achieve this we’ve joined a number of our storage systems together, so if more performance is needed more systems can be added, whereas traditionally with storage you have one system and keep on growing it,” Heers says.

“What normally happens with HPC models is they access the data fairly evenly; so they want to randomly access. If a miner has a huge pool of data, a scan of a mining lease for example, they may want to analyse many different areas at the same time, which accesses all different parts of the storage.

“But storage isn’t traditionally accessed like that; it is normally accessed through indexing.
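The access pattern Heers describes can be sketched in a few lines. This is an illustrative toy only (the dataset, region sizes and analysis are invented, not NetApp's software): several analyses each pick an arbitrary region of one large pool of survey data, so reads land all over the storage rather than marching through it in index order.

```python
import random

# Pretend "storage": a flat grid of survey readings for a mining lease.
GRID_SIZE = 1_000_000
readings = [random.random() for _ in range(GRID_SIZE)]

def analyse_region(start, length):
    """Average the readings in one region of the lease."""
    block = readings[start:start + length]
    return sum(block) / len(block)

# Each analysis targets an arbitrary region, so the combined workload
# touches scattered parts of the storage at the same time.
regions = [(random.randrange(GRID_SIZE - 1000), 1000) for _ in range(5)]
results = [analyse_region(start, length) for start, length in regions]
```

A storage system tuned for sequential, indexed access serves this kind of scattered, concurrent workload poorly, which is the gap Heers says the HPC-oriented design targets.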

“Data ONTAP GX performs over 1 million operations a second; that’s three times more than any previous HPC benchmark for storage.”

Heers says this allows users to do three times as much: more analysis or quicker completion of tasks.

“Harnessing the power of the new operating system, 24 FAS6070 nodes work together under a single namespace [a single unified naming space for all the files in the cluster] to deliver 1 million SPECsfs97_R1.v3 operations a second, with a corresponding overall response time of 1.53ms,” he says.

This promises big benefits for larger mining companies who spend $100,000-plus a year on data storage or those explorers analysing large amounts of geophysical data.

The system, which connects to Linux-based clusters or Windows-based systems, also delivers a single global namespace that enables the presentation of multiple nodes to applications as a single system and the movement of data between storage nodes and/or tiers transparently.
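A global namespace of this kind can be pictured as a lookup table the cluster maintains behind one shared path tree. The sketch below is a hypothetical illustration (the paths, node names and mapping are invented, not NetApp's implementation): clients keep using the same path while the data is served from, and migrated between, different nodes or tiers.

```python
# Hypothetical placement map: which node/tier currently holds each path.
placement = {
    "/surveys/lease-42/grav.dat": "node-a",  # fast tier
    "/surveys/lease-42/mag.dat": "node-b",
}

def read(path):
    """Resolve the path to its current node, invisibly to the caller."""
    node = placement[path]
    return f"bytes of {path} from {node}"

before = read("/surveys/lease-42/grav.dat")
placement["/surveys/lease-42/grav.dat"] = "node-c"  # migrate to another tier
after = read("/surveys/lease-42/grav.dat")
# The caller's path never changed; only the serving node did.
```

This is what makes the data movement between storage nodes and tiers transparent: applications address one namespace, not individual systems.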

Heers says this simplifies data retrieval and management.

“Coupled with the new FAS6070 and/or FAS3050 system, HPC users can leverage the clustered file system technology inherent in Data ONTAP GX. This allows striping of individual files or datasets across multiple nodes to achieve far greater performance than possible with a traditional storage system.

“For example, Data ONTAP GX, coupled with multiple 6070s, scales in capacity to up to 6PB (6,000TB),” Heers says.
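The striping Heers mentions can be sketched with a round-robin layout. This is a minimal toy, assuming invented node names and a deliberately tiny chunk size (it is not NetApp's on-disk format): a file is dealt out in chunks across nodes so that reads of one file can hit several nodes in parallel.

```python
CHUNK = 4  # bytes per stripe unit (tiny, for illustration)
NODES = ["node-a", "node-b", "node-c"]

def stripe(data: bytes, nodes, chunk=CHUNK):
    """Return {node: [chunks]} with chunks dealt round-robin across nodes."""
    layout = {node: [] for node in nodes}
    for i in range(0, len(data), chunk):
        layout[nodes[(i // chunk) % len(nodes)]].append(data[i:i + chunk])
    return layout

def reassemble(layout, nodes, total_chunks):
    """Read chunks back in round-robin order and rebuild the file."""
    return b"".join(
        layout[nodes[i % len(nodes)]][i // len(nodes)]
        for i in range(total_chunks)
    )

data = b"geophysical-survey-data!"  # 24 bytes -> 6 chunks over 3 nodes
layout = stripe(data, NODES)
restored = reassemble(layout, NODES, 6)
```

Because each node holds only every third chunk, a read of the whole file can be serviced by all three nodes at once, which is where the performance gain over a single-system layout comes from.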
