Solid-state drive

This article is about flash-based, DRAM-based, and other solid-state storage. For removable USB solid-state storage, see USB flash drive. For compact flash memory cards, see Memory card. For software-based secondary storage, see RAM drive.

"Electronic disk" redirects here. For other uses, see Electronic disk (disambiguation).

"SSD" redirects here. For other uses, see SSD (disambiguation).

Modern 2.5-inch SSD used in both laptops and desktop computers.

A rackmount SSD storage appliance based on DDR SDRAM.

PCI-attached IO Accelerator SSD.

PCIe, DRAM and NAND-based SSD. It uses an external power supply to effectively make the DRAM non-volatile.

An mSATA SSD with an external enclosure

A solid-state drive (SSD) (also known as a solid-state disk,[1][2][3] though it contains neither an actual disk nor a drive motor to spin a disk) is a data storage device that uses integrated circuit assemblies as memory to store data persistently. SSD technology uses electronic interfaces compatible with traditional block input/output (I/O) hard disk drives, thus permitting simple replacement in common applications.[4] Additionally, new I/O interfaces, like SATA Express, have been designed to address specific requirements of the SSD technology.

SSDs have no moving (mechanical) components. This distinguishes them from traditional electromechanical magnetic disks such as hard disk drives (HDDs) or floppy disks, which contain spinning disks and movable read/write heads.[5] Compared with electromechanical disks, SSDs are typically more resistant to physical shock, run silently, and have lower access times and latency.[6] However, while the price of SSDs has continued to decline over time,[7] consumer-grade SSDs are still roughly six to seven times more expensive per unit of storage than consumer-grade HDDs.

As of 2014[update], most SSDs use NAND-based flash memory, which retains data without power. For applications requiring fast access, but not necessarily data persistence after power loss, SSDs may be constructed from random-access memory (RAM). Such devices may employ separate power sources, such as batteries, to maintain data after power loss.[4]

Hybrid drives or solid-state hybrid drives (SSHDs) combine the features of SSDs and HDDs in the same unit, containing a large hard disk drive and an SSD cache to improve performance of frequently accessed data.[8][9][10]



Development and history

Early SSDs using RAM and similar technology

SSDs had origins in the 1950s with two similar technologies: magnetic core memory and charged capacitor read-only storage (CCROS).[11][12] These auxiliary memory units (as contemporaries called them) emerged during the era of vacuum-tube computers, but their use ceased with the introduction of cheaper drum storage units.[13]

Later, in the 1970s and 1980s, SSDs were implemented in semiconductor memory for early supercomputers of IBM, Amdahl and Cray,[14] but they were seldom used because of their prohibitively high price. In the late 1970s, General Instrument produced an electrically alterable ROM (EAROM) which operated somewhat like the later NAND flash memory; however, a ten-year data life was not achievable, and many companies abandoned the technology.[15] In 1976, Dataram started selling a product called Bulk Core, which provided up to 2 MB of solid-state storage compatible with Digital Equipment Corporation (DEC) and Data General (DG) computers.[16] In 1978, Texas Memory Systems introduced a 16-kilobyte RAM solid-state drive to be used by oil companies for seismic data acquisition.[17] The following year, StorageTek developed the first RAM solid-state drive.[18]

The Sharp PC-5000, introduced in 1983, used 128-kilobyte solid-state storage cartridges containing bubble memory.[19] In 1984, Tallgrass Technologies Corporation offered a 40 MB tape backup unit with a built-in 20 MB solid-state unit; the 20 MB unit could be used instead of a hard drive.[20] In September 1986, Santa Clara Systems introduced BatRam, a 4-megabyte mass storage system expandable to 20 MB using 4 MB memory modules. The package included a rechargeable battery to preserve the memory chip contents when the array was not powered.[21] 1987 saw the entry of EMC Corporation (EMC) into the SSD market, with drives introduced for the mini-computer market. However, by 1993 EMC had exited the SSD market.[22][23]

Software-based RAM disks were still used as of 2009 because they are an order of magnitude faster than other technology, though they consume CPU resources and cost much more on a per-GB basis.[24]

Flash-based SSDs

In 1989, the Psion MC 400 Mobile Computer included four slots for removable storage in the form of flash-based "solid-state disk" cards, using the same type of flash memory cards as used by the Psion Series 3.[25] The flash modules did have the limitation of needing to be re-formatted entirely to reclaim space from deleted or modified files; old versions of files which were deleted or modified continued to take up space until the module was formatted.

In 1991, SanDisk Corporation created a 20 MB solid-state drive which sold for US$1,000.

In 1994, STEC, Inc. bought Cirrus Logic’s flash controller operation, allowing the company to enter the flash memory business for consumer electronic devices.[26]

In 1995, M-Systems introduced flash-based solid-state drives.[27] They had the advantage of not requiring batteries to maintain the data in the memory (required by the prior volatile memory systems), but were not as fast as the DRAM-based solutions.[28] Since then, SSDs have been used successfully as HDD replacements by the military and aerospace industries, as well as for other mission-critical applications. These applications require the exceptional mean time between failures (MTBF) rates that solid-state drives achieve, by virtue of their ability to withstand extreme shock, vibration and temperature ranges.[29]

In 1999, BiTMICRO made a number of introductions and announcements about flash-based SSDs, including an 18 GB 3.5-inch SSD.[30]

In 2007, Fusion-io announced a PCIe-based SSD with 100,000 input/output operations per second (IOPS) of performance in a single card, with capacities up to 320 gigabytes.[31]

At Cebit 2009, OCZ Technology demonstrated a 1 terabyte (TB) flash SSD using a PCI Express ×8 interface. It achieved a maximum write speed of 654 megabytes per second (MB/s) and maximum read speed of 712 MB/s.[32]

In December 2009, Micron Technology announced an SSD using a 6 gigabits per second (Gbit/s) SATA interface.[33]

Enterprise flash drives

Top and bottom views of a 2.5-inch 100 GB SATA 3.0 (6 Gbit/s) model of the Intel DC S3700 series

Enterprise flash drives (EFDs) are designed for applications requiring high I/O performance (IOPS), reliability, energy efficiency and, more recently, consistent performance. In most cases, an EFD is an SSD with a higher set of specifications compared with SSDs that would typically be used in notebook computers. The term was first used by EMC in January 2008 to help identify SSD manufacturers whose products would meet these higher standards.[34] No standards body controls the definition of EFDs, so any SSD manufacturer may claim to produce EFDs even when the products do not meet any particular requirements.[35]

In the fourth quarter of 2012, Intel introduced its SSD DC S3700 series of drives, which focuses on achieving consistent performance, an area that had previously not received much attention but which Intel claimed was important for the enterprise market. In particular, Intel claims that, at steady state, the S3700 drives would not vary their IOPS by more than 10–15%, and that 99.9% of all 4 KB random I/Os are serviced in less than 500 µs.[36]
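A consistency claim of this form can be checked against a latency trace by computing a high percentile. The sketch below uses the nearest-rank method on synthetic data; the trace values are invented for illustration, not measurements of any drive.

```python
# Nearest-rank percentile over a list of latency samples (in microseconds).
def percentile(samples, pct):
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# 1,000 synthetic 4 KB random-read latencies: 999 fast, one slow outlier.
trace = [120] * 999 + [1500]
p999 = percentile(trace, 99.9)
print(p999, p999 < 500)  # prints: 120 True -- the lone outlier sits past the 99.9th percentile
```

A real evaluation would use a much longer trace captured under a steady-state random workload, since a fresh drive understates long-tail latency.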

Architecture and function

The key components of an SSD are the controller and the memory to store the data. The primary memory component in an SSD was traditionally DRAM volatile memory; since about 2009, it has more commonly been NAND flash non-volatile memory.[4][37]


Controller

Every SSD includes a controller that incorporates the electronics that bridge the NAND memory components to the host computer. The controller is an embedded processor that executes firmware-level code and is one of the most important factors of SSD performance.[38] Functions performed by the controller include bad block mapping, read and write caching, encryption, error detection and correction via error-correcting code (ECC), garbage collection, read scrubbing and read disturb management, and wear leveling.[39][40]

The performance of an SSD can scale with the number of parallel NAND flash chips used in the device. A single NAND chip is relatively slow, due to the narrow (8/16-bit) asynchronous I/O interface, and additional high latency of basic I/O operations (typical for SLC NAND, ~25 μs to fetch a 4 KB page from the array to the I/O buffer on a read, ~250 μs to commit a 4 KB page from the I/O buffer to the array on a write, ~2 ms to erase a 256 KB block). When multiple NAND devices operate in parallel inside an SSD, the bandwidth scales, and the high latencies can be hidden, as long as enough outstanding operations are pending and the load is evenly distributed between devices.[41]
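The latency figures above translate directly into per-chip and aggregate throughput. The following rough model (the chip counts and the assumption of a fully loaded queue are illustrative, not taken from any product) shows how parallelism multiplies bandwidth:

```python
# Back-of-the-envelope model of how parallelism hides NAND latency.
# The timing figures are the typical SLC NAND values quoted above.
PAGE_KB = 4
READ_LATENCY_US = 25      # array -> I/O buffer, per 4 KB page
WRITE_LATENCY_US = 250    # I/O buffer -> array, per 4 KB page

def throughput_mb_s(latency_us, chips):
    """Aggregate throughput when `chips` dies work on pages concurrently,
    assuming enough queued operations to keep every die busy."""
    pages_per_second = chips * 1_000_000 / latency_us
    return pages_per_second * PAGE_KB / 1024

for chips in (1, 4, 16):
    print(f"{chips:2d} chip(s): "
          f"read ~{throughput_mb_s(READ_LATENCY_US, chips):6.1f} MB/s, "
          f"write ~{throughput_mb_s(WRITE_LATENCY_US, chips):5.1f} MB/s")
```

With one chip the model gives roughly 156 MB/s reads and 16 MB/s writes; sixteen chips in parallel scale this sixteenfold, which is why controller channel count matters so much.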

Micron and Intel initially made faster SSDs by implementing data striping (similar to RAID 0) and interleaving in their architecture. This enabled the creation of ultra-fast SSDs with 250 MB/s effective read/write speeds with the SATA 3 Gbit/s interface in 2009.[42] Two years later, SandForce continued to leverage this parallel flash connectivity, releasing consumer-grade SATA 6 Gbit/s SSD controllers which supported 500 MB/s read/write speeds.[43] SandForce controllers compress the data prior to sending it to the flash memory. This process may result in less writing and higher logical throughput, depending on the compressibility of the data.[44]
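Both techniques can be sketched in a few lines: round-robin striping of logical pages across flash channels (RAID 0-style), and compressing data before it reaches the flash. The channel count and page layout below are illustrative assumptions, not any vendor's actual design.

```python
import zlib

CHANNELS = 4  # illustrative channel count

def stripe(logical_page, channels=CHANNELS):
    """RAID 0-style mapping of a logical page to (channel, page-within-channel)."""
    return logical_page % channels, logical_page // channels

# Eight consecutive logical pages land round-robin on four channels, so a
# sequential transfer keeps every channel busy at once.
print([stripe(p) for p in range(8)])
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]

# Compression before writing: highly compressible data causes far fewer
# bytes to reach the flash; incompressible data gains nothing.
page = b"A" * 4096                     # trivially compressible 4 KB page
print(len(zlib.compress(page)) < 100)  # True: only a few dozen bytes are stored
```

This is why compressing controllers report different performance for compressible and incompressible benchmark data.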


Flash memory-based

Comparison of architectures[45]

[Flattened comparison table; the column labels did not survive extraction. The recoverable figures contrast flash architectures on persistence (one quoted as 10× more persistent), sequential write speed (3–4× faster), sequential read speed (same to 5× faster), and price (about 30% more expensive versus 30% cheaper).]

The following technologies aim to combine the advantages of NAND and NOR flash: OneNAND (Samsung), mDOC (SanDisk) and ORNAND (Spansion).

Most[citation needed] SSD manufacturers use non-volatile NAND flash memory in the construction of their SSDs because of the lower cost compared with DRAM and the ability to retain the data without a constant power supply, ensuring data persistence through sudden power outages. Flash memory SSDs are slower than DRAM solutions, and some early designs were even slower than HDDs after continued use. This problem was resolved by controllers that came out in 2009 and later.[46]

Flash memory-based solutions are typically packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-inch), but also in smaller unique and compact layouts made possible by the small size of flash memory.

Lower priced drives usually use multi-level cell (MLC) flash memory, which is slower and less reliable than single-level cell (SLC) flash memory.[47][48] This can be mitigated or even reversed by the internal design structure of the SSD, such as interleaving, changes to writing algorithms,[48] and higher over-provisioning (more excess capacity) with which the wear-leveling algorithms can work.[49][50][51]
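The interaction of endurance limits, wear leveling, and write amplification reduces to simple arithmetic. All figures below are illustrative assumptions, not specifications of any particular drive:

```python
# Back-of-the-envelope drive endurance estimate (all figures assumed).
capacity_gb = 256
pe_cycles = 3000           # rough MLC program/erase endurance; SLC is far higher
write_amplification = 2.0  # physical writes per host write; lower is better
host_writes_gb_per_day = 20

# Total host data the drive can absorb before cells wear out, assuming
# wear leveling spreads writes evenly across the whole capacity.
total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
years = total_host_writes_gb / host_writes_gb_per_day / 365

print(f"~{total_host_writes_gb / 1024:.0f} TB of host writes, ~{years:.0f} years")
# ~375 TB of host writes, ~53 years
```

Over-provisioning improves the picture indirectly: more spare capacity lets the wear-leveling and garbage-collection algorithms achieve a lower write amplification, which raises the total host writes in the numerator.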


DRAM-based

See also: I-RAM and Hyperdrive (storage)

SSDs based on volatile memory such as DRAM are characterized by ultrafast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of flash SSDs or traditional HDDs. DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter and backup storage systems to ensure data persistence while no power is being supplied to the drive from external sources. If power is lost, the battery provides power while all information is copied from random access memory (RAM) to back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation (similar to the hibernate function used in modern operating systems).[28][52] SSDs of this type are usually fitted with DRAM modules of the same type used in regular PCs and servers, which can be swapped out and replaced by larger modules;[53] examples include the i-RAM, HyperOs HyperDrive, and DDRdrive X1. Some manufacturers of DRAM SSDs solder the DRAM chips directly to the drive and do not intend the chips to be swapped out; examples include the ZeusRAM and the Aeon Drive.[54]

A remote, indirect memory-access disk (RIndMA Disk) uses a secondary computer with a fast network or (direct) InfiniBand connection to act like a RAM-based SSD, but the newer, faster flash-memory-based SSDs already available in 2009 made this option less cost-effective.[55]

While the price of DRAM continues to fall, the price of flash memory falls even faster. The "flash becomes cheaper than DRAM" crossover point occurred in approximately 2004.[56][57]


Other

Some SSDs use magnetoresistive random-access memory (MRAM) for storing data.[58][59]

Some SSDs, called NVDIMM or Hyper DIMM devices, use both DRAM and flash memory. When the power goes down, the SSD copies all the data from its DRAM to flash; when the power comes back up, the SSD copies all the data from its flash to its DRAM.[60] In a somewhat similar way, some SSDs use form factors and buses actually designed for DIMM modules, while using only flash memory and making it appear as if it were DRAM. Such SSDs are usually known as UltraDIMM devices.[61]

Some drives use a hybrid of spinning disks and flash memory; such drives are known as hybrid drives and solid-state hybrid drives (SSHDs).[62][63]

Cache or buffer

A flash-based SSD typically uses a small amount of DRAM as a volatile cache, similar to the buffers in hard disk drives. A directory of block placement and wear leveling data is also kept in the cache while the drive is operating.[41] One SSD controller manufacturer, SandForce, does not use an external DRAM cache on their designs but still achieves high performance. Such an elimination of the external DRAM reduces the power consumption and enables further size reduction of SSDs.[64]

Battery or super capacitor

Another component in higher-performing SSDs is a capacitor or some form of battery, which is needed to maintain data integrity so that the data in the cache can be flushed to the drive if power is lost; some designs may even hold power long enough to maintain data in the cache until power is resumed.[64][65] In the case of MLC flash memory, a problem called lower page corruption can occur when MLC flash memory loses power while programming an upper page. The result is that data written previously and presumed safe can be corrupted if the memory is not supported by a super capacitor in the event of a sudden power loss. This problem does not exist with SLC flash memory.[40]

Most consumer-class SSDs do not have built-in batteries or capacitors;[66] among the exceptions are the Crucial M500 and MX100 series,[67] the Intel 320 series,[68] and the more expensive Intel 710 and 730 series.[69] Enterprise-class SSDs, such as the Intel DC S3700 series,[70] usually have built-in batteries or capacitors.

Host interface

The host interface is not specifically a component of the SSD, but it is a key part of the drive. The interface is usually incorporated into the controller discussed above, and is generally one of the interfaces found in HDDs, such as Serial ATA (SATA), Serial Attached SCSI (SAS), PCI Express (PCIe), Fibre Channel, USB, and Parallel ATA (IDE).


Form factors

The size and shape of any device is largely driven by the size and shape of the components used to make that device. Traditional HDDs and optical drives are designed around the rotating platter or optical disc and the spindle motor inside. Since an SSD is made up of interconnected integrated circuits (ICs) and an interface connector, its shape is no longer limited to that of rotating-media drives and could be virtually anything imaginable. Some solid-state storage solutions come in a larger chassis, even a rack-mount form factor, with numerous SSDs inside; they all connect to a common bus inside the chassis and connect outside the box with a single connector.[4]

For general computer use, the 2.5-inch form factor (typically found in laptops) is the most popular. For desktop computers with 3.5-inch hard disk slots, a simple adapter plate can be used to make such a disk fit. Other types of form factors are more common in enterprise applications. An SSD can also be completely integrated in the other circuitry of the device, as in the Apple MacBook Air (starting with the fall 2010 model).[74] As of 2014[update], mSATA and M.2 form factors are also gaining popularity, primarily in laptops.

Standard HDD form factors

The benefit of using a current HDD form factor would be to take advantage of the extensive infrastructure already in place to mount and connect the drives to the host system.[4][75] These traditional form factors are known by the size of the rotating media, e.g., 5.25-inch, 3.5-inch, 2.5-inch, 1.8-inch, not by the dimensions of the drive casing.[76]

Standard card form factors

Main articles: mSATA and M.2

For applications where space is at a premium, like ultrabooks or tablets, a few compact form factors were standardized for flash-based SSDs.

The mSATA form factor uses the PCI Express Mini Card physical layout. It remains electrically compatible with the PCI Express Mini Card interface specification, while requiring an additional connection to the SATA host controller through the same connector.

The M.2 form factor, formerly known as the Next Generation Form Factor (NGFF), is a transition from mSATA and the physical layout it used to a more usable and more advanced form factor. While mSATA took advantage of an existing form factor and connector, M.2 has been designed to maximize usage of the card space while minimizing the footprint. The M.2 standard allows both SATA and PCI Express SSDs to be fitted onto M.2 modules.[77]

Disk-on-a-module form factors

A 2 GB disk-on-module with PATA interface

A disk-on-a-module (DOM) is a flash drive with either a 40/44-pin Parallel ATA (PATA) or a SATA interface, intended to be plugged directly into the motherboard and used in place of a computer hard disk drive (HDD). The flash-to-IDE converter simulates an HDD, so DOMs can be used without additional software support or drivers. DOMs are usually used in embedded systems, which are often deployed in harsh environments where mechanical HDDs would simply fail, or in thin clients because of their small size, low power consumption, and silent operation.

As of 2010[update], storage capacities range from 32 MB to 64 GB with different variations in physical layouts, including vertical or horizontal orientation.

Box form factors

Many of the DRAM-based solutions use a box that is often designed to fit in a rack-mount system. The number of DRAM components required to get sufficient capacity to store the data along with the backup power supplies requires a larger space than traditional HDD form factors.[78]

Bare-board form factors

Viking Technology SATA Cube and AMP SATA Bridge multi-layer SSDs

Viking Technology SATADIMM based SSD

MO-297 SSD form factor

A custom-connector SATA SSD

Form factors which were more common to memory modules are now being used by SSDs to take advantage of their flexibility in laying out the components. Some of these include PCIe, mini PCIe, mini-DIMM, MO-297, and many more.[79] The SATADIMM from Viking Technology uses an empty DDR3 DIMM slot on the motherboard to provide power to the SSD with a separate SATA connector to provide the data connection back to the computer. The result is an easy-to-install SSD with a capacity equal to drives that typically take a full 2.5-inch drive bay.[80] At least one manufacturer, Innodisk, has produced a drive that sits directly on the SATA connector (SATADOM) on the motherboard without any need for a power cable.[81] Some SSDs are based on the PCIe form factor and connect both the data interface and power through the PCIe connector to the host. These drives can use either direct PCIe flash controllers[82] or a PCIe-to-SATA bridge device which then connects to SATA flash controllers.[83]

Ball grid array form factors

In the early 2000s, a few companies introduced SSDs in Ball Grid Array (BGA) form factors, such as M-Systems' (now SanDisk) DiskOnChip[84] and Silicon Storage Technology's NANDrive[85][86] (now produced by Greenliant Systems), and Memoright's M1000[87] for use in embedded systems. The main benefits of BGA SSDs are their low power consumption, small chip package size to fit into compact subsystems, and that they can be soldered directly onto a system motherboard to reduce adverse effects from vibration and shock.[88]

Comparison with other technologies

Hard disk drives

SSD benchmark, showing about 230 MB/s reading speed (blue), 210 MB/s writing speed (red) and about 0.1 ms seek time (green), all independent of the accessed disk location.

See also: Hard disk drive performance characteristics

Making a comparison between SSDs and ordinary (spinning) HDDs is difficult. Traditional HDD benchmarks tend to focus on the performance characteristics that are poor with HDDs, such as rotational latency and seek time. As SSDs do not need to spin or seek to locate data, they may prove vastly superior to HDDs in such tests. However, SSDs have challenges with mixed reads and writes, and their performance may degrade over time. SSD testing must start from the full (in-use) drive, as a new and empty (fresh, out-of-the-box) drive may show much better write performance than it would after only weeks of use.[89]

Most of the advantages of solid-state drives over traditional hard drives are due to their ability to access data completely electronically instead of electromechanically, resulting in superior transfer speeds and mechanical ruggedness.[90] On the other hand, hard disk drives offer significantly higher capacity for their price.[6][91]

Field failure rates indicate that SSDs are significantly more reliable than HDDs.[92][93][94] However, SSDs are uniquely sensitive to sudden power interruption, which can result in aborted writes or even complete loss of the drive.[95] The reliability of both HDDs and SSDs varies greatly among models.[96]

As with HDDs, there is a tradeoff between cost and performance of different SSDs. Single-level cell (SLC) SSDs, while significantly more expensive than multi-level cell (MLC) SSDs, offer a significant speed advantage. At the same time, DRAM-based solid-state storage is currently considered the fastest and most costly, with average response times of 10 microseconds instead of the average 100 microseconds of other SSDs. Enterprise flash devices (EFDs) are designed to handle the demands of tier-1 applications with performance and response times similar to less-expensive SSDs.[97]

In traditional HDDs, a re-written file will generally occupy the same location on the disk surface as the original file, whereas in SSDs the new copy will often be written to different NAND cells for the purpose of wear leveling. The wear-leveling algorithms are complex and difficult to test exhaustively; as a result, one major cause of data loss in SSDs is firmware bugs.[98][99]
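The remapping described here can be sketched with a toy flash translation layer (FTL); the class, page counts, and least-worn allocation policy below are a hypothetical illustration, not a real controller's algorithm.

```python
# Toy FTL: logical blocks map to physical pages, and every rewrite goes
# to a fresh page (erase handling is deliberately glossed over here).
class ToyFTL:
    def __init__(self, physical_pages):
        self.mapping = {}                        # logical block -> physical page
        self.free = list(range(physical_pages))  # pages available for writes
        self.erase_counts = [0] * physical_pages

    def write(self, logical_block, _data=None):
        # Crude wear leveling: pick the least-worn free page.
        page = min(self.free, key=lambda p: self.erase_counts[p])
        self.free.remove(page)
        old = self.mapping.get(logical_block)
        if old is not None:                # the old copy becomes garbage
            self.erase_counts[old] += 1    # and is eventually erased
            self.free.append(old)          # then returned to the free pool
        self.mapping[logical_block] = page
        return page

ftl = ToyFTL(physical_pages=4)
first = ftl.write(0)   # logical block 0 lands on some physical page
second = ftl.write(0)  # rewriting the same block lands elsewhere
print(first, second, first != second)  # prints: 0 1 True
```

Because the mapping table, not the physical location, defines where a file "is", firmware bugs in this bookkeeping can lose data even though the flash cells themselves are intact, which is the failure mode the paragraph above describes.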

The following table shows a detailed overview of the advantages and disadvantages of both technologies. Comparisons reflect typical characteristics, and may not hold for a specific device.

Attribute or characteristic Solid-state drive Hard disk drive
Start-up time Almost instantaneous; no mechanical components to prepare. May need a few milliseconds to come out of an automatic power-saving mode. Disk spin-up may take several seconds. A system with many drives may need to stagger spin-up to limit peak power drawn, which is briefly high when an HDD is first started.[100]
Random access time[101] Typically under 0.1 ms.[102] As data can be retrieved directly from various locations of the flash memory, access time is usually not a big performance bottleneck. Ranges from 2.9 (high end server drive) to 12 ms (laptop HDD) due to the need to move the heads and wait for the data to rotate under the read/write head.[103]
Read latency time[104] Generally low because the data can be read directly from any location. In applications where hard disk seeks are the limiting factor, this results in faster boot and application launch times (see Amdahl's law).[105] Much higher than SSDs. Read time varies with every seek, since the location of the data on the disk and the position of the read/write head both make a difference.
Data transfer rate SSD technology can deliver rather consistent read/write speed, but when lots of individual smaller blocks are accessed, performance is reduced. In consumer products the maximum transfer rate typically ranges from about 100 MB/s to 600 MB/s, depending on the disk. The enterprise market offers devices with multi-gigabyte-per-second throughput. Once the head is positioned, when reading or writing a continuous track, an enterprise HDD can transfer data at about 140 MB/s. In practice transfer speeds are many times lower due to constant seeking, as files are read from various locations or are fragmented. Data transfer rate depends also upon rotational speed, which can range from 4,200 to 15,000 rpm,[106] and upon the track (reading from the outer tracks is faster due to higher absolute head velocity relative to the disk).
Read performance[107] Read performance does not change based on where data is stored on an SSD.[100] If data from different areas of the platter must be accessed, as with fragmented files, response times will be increased by the need to seek each fragment.[109]

Unlike mechanical hard drives, current SSD technology suffers from a performance degradation phenomenon called write amplification, where the NAND cells show a measurable drop in performance that continues throughout the life of the SSD.[108] A technique called wear leveling is implemented to mitigate this effect, but due to the nature of the NAND chips, the drive inevitably degrades at a noticeable rate.
Fragmentation (filesystem specific) There is limited benefit to reading data sequentially (beyond typical FS block sizes, say 4 KB), making fragmentation negligible for SSDs.[110] Defragmentation would cause wear by making additional writes to the NAND flash cells, which have a limited cycle life.[111][112] Files, particularly large ones, on HDDs usually become fragmented over time if frequently written; periodic defragmentation is required to maintain optimum performance.[113]
Noise (acoustic)[114] SSDs have no moving parts and therefore are basically silent, although electric noise from the circuits may occur. HDDs have moving parts (heads, actuator, and spindle motor) and make characteristic sounds of whirring and clicking; noise levels vary between models, but can be significant (while often much lower than the sound from the cooling fans). Laptop hard disks are relatively quiet.
Temperature control[115] SSDs usually do not require any special cooling and can tolerate higher temperatures than HDDs. High-end enterprise models installed as add-on cards may ship with heat sinks to dissipate generated heat. Ambient temperatures above 95 °F (35 °C) can shorten the life of a hard disk, and reliability will be compromised at drive temperatures above 131 °F (55 °C). Fan cooling may be required if temperatures would otherwise exceed these values.[116] In practice most hard drives are used without special arrangements for cooling.
Susceptibility to environmental factors[105][117][118] No moving parts, very resistant to shock and vibration. Heads floating above rapidly rotating platters are susceptible to shock and vibration.
Installation and mounting Not sensitive to orientation, vibration, or shock. Usually no exposed circuitry. Circuitry may be exposed, and it must not be short-circuited by conductive materials (such as the metal chassis of a computer). Should be mounted to protect against vibration and shock. Some HDDs should not be installed in a tilted position.[119]
Susceptibility to magnetic fields[120] Flash memory is little affected by magnetic fields, although an electromagnetic pulse will damage any electrical system, especially integrated circuits. Magnets or magnetic surges could in principle damage data on an HDD, although the magnetic platters are usually well-shielded inside a metal case.
Weight and size[117] Solid-state drives, essentially semiconductor memory devices mounted on a circuit board, are small and light in weight. However, for easy replacement, they often follow the same form factors as HDDs (3.5-inch, 2.5-inch or 1.8-inch); such form factors typically weigh as much as their HDD counterparts, mostly due to the enclosure. HDDs in the same form factors may be heavier; 3.5-inch drives, for example, typically weigh around 700 grams.
Reliability and lifetime SSDs have no moving parts to fail mechanically. Each block of a flash-based SSD can only be erased (and therefore written) a limited number of times before it fails. The controllers manage this limitation so that drives can last for many years under normal use.[121][122][123][124][125] SSDs based on DRAM do not have a limited number of writes. However, the failure of a controller can make an SSD unusable. Reliability varies significantly across different SSD manufacturers and models, with return rates reaching 40% for specific drives.[94] As of 2011[update], leading SSDs have lower return rates than mechanical drives.[92] Many SSDs critically fail on power outages; a December 2013 survey of many SSDs found that only some of them are able to survive multiple power outages.[126] HDDs have moving parts and are subject to potential mechanical failures from the resulting wear and tear. The storage medium itself (the magnetic platter) does not essentially degrade from read and write operations.

According to a study performed by Carnegie Mellon University for both consumer and enterprise-grade HDDs, their average time to failure is 6 years, and their life expectancy is 9–11 years.[127] Leading SSDs have overtaken hard disks in reliability,[92] although the risk of a sudden, catastrophic data loss can be lower for mechanical disks.[128]

When stored offline (unpowered, on a shelf) long-term, the magnetic medium of an HDD retains data significantly longer than the flash memory used in SSDs.

Secure writing limitations NAND flash memory cannot be overwritten, but has to be rewritten to previously erased blocks. If a software encryption program encrypts data already on the SSD, the overwritten data is still unsecured, unencrypted, and accessible (drive-based hardware encryption does not have this problem). Also data cannot be securely erased by overwriting the original file without special "Secure Erase" procedures built into the drive.[129] HDDs can overwrite data directly on the drive in any particular sector. However, the drive’s firmware may exchange damaged blocks with spare areas, so bits and pieces may still be present. Most HDD manufacturers offer a tool that can zero-fill all sectors, including the reallocated ones.[citation needed]
Cost per capacity SSD pricing changes rapidly: US$0.59 per GB in April 2013[130] and US$0.45 per GB in April 2014. HDDs cost about US$0.05 per GB for 3.5-inch and US$0.10 per GB for 2.5-inch drives.
Storage capacity In 2013, SSDs were available in sizes up to 2 TB, but less costly 128 to 512 GB drives were more common.[131] In 2014, HDDs of up to 8 TB were available.[132]
Read/write performance symmetry Less expensive SSDs typically have write speeds significantly lower than their read speeds. Higher performing SSDs have similar read and write speeds. HDDs generally have slightly longer (worse) seek times for writing than for reading.[133]
Free block availability and TRIM SSD write performance is significantly impacted by the availability of free, programmable blocks. Previously written data blocks no longer in use can be reclaimed by TRIM; however, even with TRIM, fewer free blocks cause slower performance.[41][134][135] HDDs are not affected by free blocks and do not benefit from TRIM.
Power consumption High-performance flash-based SSDs generally require one-half to one-third of the power of HDDs. High-performance DRAM SSDs generally require as much power as HDDs, and must remain connected to power even when the rest of the system is shut down.[136][137] Emerging technologies such as DevSlp can minimize the power requirements of idle drives. The lowest-power HDDs (1.8-inch size) can use as little as 0.35 watts when idle;[138] 2.5-inch drives typically use 2 to 5 watts, and the highest-performance 3.5-inch drives can use up to about 20 watts.

Memory cards

CompactFlash card used as an SSD

While both memory cards and most SSDs use flash memory, they serve very different markets and purposes. Each has a number of different attributes which are optimized and adjusted to best meet the needs of particular users. Some of these characteristics include power consumption, performance, size, and reliability.[139]

SSDs were originally designed for use in a computer system. The first units were intended to replace or augment hard disk drives, so the operating system recognized them as a hard drive. Originally, solid state drives were even shaped and mounted in the computer like hard drives. Later SSDs became smaller and more compact, eventually developing their own unique form factors. The SSD was designed to be installed permanently inside a computer.[139]

In contrast, memory cards (such as Secure Digital (SD), CompactFlash (CF) and many others) were originally designed for digital cameras and later found their way into cell phones, gaming devices, GPS units, etc. Most memory cards are physically smaller than SSDs, and designed to be inserted and removed repeatedly.[139] There are adapters which enable some memory cards to interface to a computer, allowing use as an SSD, but they are not intended to be the primary storage device in the computer. The typical CompactFlash card interface is three to four times slower than an SSD. As memory cards are not designed to tolerate the amount of reading and writing which occurs during typical computer use, their data may get damaged unless special procedures are taken to reduce the wear on the card to a minimum.

Applications

Until 2009, SSDs were mainly used in those aspects of mission-critical applications where the speed of the storage system needed to be as high as possible. Since flash memory has become a common component of SSDs, falling prices and increased densities have made it cost-effective for many other applications. Organizations that can benefit from faster access to system data include equity trading companies, telecommunication corporations, and streaming media and video editing firms. The list of applications that could benefit from faster storage is vast.[4]

Flash-based solid-state drives can be used to create network appliances from general-purpose personal computer hardware. A write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware.[citation needed]

SSDs based on an SD card with a live SD operating system are easily write-locked. Combined with a cloud computing environment or another writable medium to maintain persistence, an OS booted from a write-locked SD card is robust, rugged, reliable, and impervious to permanent corruption. If the running OS degrades, simply turning the machine off and then on returns it to its initial uncorrupted state, making the setup particularly solid. An OS installed on a write-locked SD card does not require removal of corrupted components, though any writable media used alongside it may need to be restored.

Hard drive caching

In 2011, Intel introduced a caching mechanism for their Z68 chipset (and mobile derivatives) called Smart Response Technology, which allows a SATA SSD to be used as a cache (configurable as write-through or write-back) for a conventional magnetic hard disk drive.[140] A similar technology is available on HighPoint's RocketHybrid PCIe card.[141]

Solid-state hybrid drives (SSHDs) are based on the same principle, but integrate some amount of flash memory into a conventional drive instead of using a separate SSD. The flash layer in these drives can be accessed independently of the magnetic storage by the host, using ATA-8 commands, allowing the operating system to manage it. For example, Microsoft's ReadyDrive technology explicitly stores portions of the hibernation file in the cache of these drives when the system hibernates, making the subsequent resume faster.[142]

Dual-drive hybrid systems combine the use of separate SSD and HDD devices installed in the same computer, with overall performance optimization managed by the computer user or by the operating system. Examples of this type of system are bcache and dm-cache on Linux,[143] and Apple's Fusion Drive.

Wear leveling

Main articles: Wear leveling and Write amplification

If a particular block was programmed and erased repeatedly without writing to any other blocks, that block would wear out before all the other blocks — thereby prematurely ending the life of the SSD. For this reason, SSD controllers use a technique called wear leveling to distribute writes as evenly as possible across all the flash blocks in the SSD.

In a perfect scenario, this would enable every block to be written to its maximum life, so that all blocks fail at the same time. Unfortunately, distributing writes evenly requires previously written data that is not changing (cold data) to be moved, so that data which changes more frequently (hot data) can be written into those blocks. Each time data is relocated without being changed by the host system, write amplification increases and the life of the flash memory is reduced. The key is to find an algorithm that balances the two: evening out wear while keeping write amplification to a minimum.[144][145]
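
The cost of this relocation can be made concrete with a toy calculation. The sketch below is illustrative only (the figures and the `write_amplification` helper are not taken from any real controller); it computes write amplification as the ratio of total flash writes, including relocated cold data, to the writes the host actually requested:

```python
# Toy write-amplification calculation: relocating cold data for wear
# leveling counts as extra flash writes on top of what the host wrote.
# All figures are illustrative, not measured from any real drive.

def write_amplification(host_gb, relocated_gb):
    """Ratio of total bytes written to flash versus bytes written by the host."""
    return (host_gb + relocated_gb) / host_gb

# Suppose the host writes 100 GB of hot data, and the controller also
# moves 25 GB of cold data to even out block wear.
print(write_amplification(100, 25))  # 1.25, i.e. 25% extra flash wear
```

A write amplification of 1.0 (no relocation at all) is the ideal that over-provisioning and smarter placement algorithms try to approach.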

Data recovery and secure deletion

Solid-state drives have set new challenges for data recovery companies, as their way of storing data is non-linear and much more complex than that of hard disk drives. The strategy by which the drive operates internally varies largely between manufacturers, and the TRIM command zeroes the whole range of a deleted file. Wear leveling also means that the physical address of the data and the address exposed to the operating system are different.

As for secure deletion of data, using the ATA Secure Erase command is recommended, as the drive itself knows the most effective method to truly reset its data. A program such as Parted Magic can be used for this purpose.[146] In 2014, Asus was the first company to introduce a Secure Erase feature built into the UEFI of its Republic of Gamers series of PC motherboards.[147]

File systems suitable for SSDs

Main article: File systems optimized for flash memory and solid-state media

Typically the same file systems used on hard disk drives can also be used on solid-state drives. The file system is usually expected to support the TRIM command, which helps the SSD recycle discarded data. There is no need for the file system to take care of wear leveling or other flash memory characteristics, as they are handled internally by the SSD. Some flash file systems using log-based designs (F2FS, JFFS2) help to reduce write amplification on SSDs, especially in situations where only very small amounts of data are changed, such as when updating file-system metadata.

While not a file system feature, operating systems must also align partitions correctly to avoid excessive read-modify-write cycles. A typical practice for personal computers is to have each partition aligned to start at a 1 MB mark, which covers all common SSD page and block size scenarios, as it is divisible by 1 MB, 512 KB, 128 KB, 4 KB and 512 bytes. Modern operating system installation software and disk tools handle this automatically.
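
The divisibility claim above is easy to verify, and the rounding a partitioning tool performs can be sketched in a few lines. The `align_up` helper below is hypothetical, for illustration only, not part of any installer:

```python
# Verify that a 1 MiB boundary covers the common SSD page and erase-block
# sizes listed above, and round a legacy partition start up to it.

MIB = 1024 * 1024  # 1,048,576 bytes

for size_bytes in (512, 4 * 1024, 128 * 1024, 512 * 1024, MIB):
    assert MIB % size_bytes == 0  # 1 MiB is an exact multiple of each

def align_up(start_sector, sector_size=512, boundary=MIB):
    """Round a partition's starting sector up to the next 1 MiB boundary."""
    offset = start_sector * sector_size
    aligned = -(-offset // boundary) * boundary  # ceiling division
    return aligned // sector_size

print(align_up(63))    # 2048: a legacy start at sector 63 moves to 1 MiB
print(align_up(2048))  # 2048: already aligned, unchanged
```

Sector 2048 at 512-byte sectors is exactly the 1 MiB mark, which is why modern partitioning tools default to it.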

Other features designed for hard disk drives, most notably defragmentation, are disabled in SSD installations.

Listed below are some notable computer file systems that work well with solid-state drives.

Linux systems

The ext4, Btrfs, XFS, JFS and F2FS file systems include support for the discard (TRIM) function. As of November 2013, ext4 can be recommended as a safe choice. F2FS is a modern file system optimized for flash-based storage; from a technical perspective it is a very good choice, but it is still in an experimental stage.

Kernel support for the TRIM operation was introduced in version 2.6.33 of the Linux kernel mainline, released on 24 February 2010.[148] To make use of it, a filesystem must be mounted using the discard parameter. Linux swap partitions by default perform discard operations when the underlying drive supports TRIM, with the possibility to turn them off, or to select between one-time and continuous discard operations.[149][150][151]
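
As an example, an /etc/fstab entry enabling TRIM through the discard mount option might look like the following (the UUID and mount point are placeholders, not values from the source):

```
# /etc/fstab entry: ext4 root filesystem mounted with the discard option
# (UUID and mount point are placeholders)
UUID=0a1b2c3d-0000-0000-0000-000000000000  /  ext4  defaults,discard  0  1
```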

An alternative to the kernel-level TRIM operation is the user-space utility fstrim, which goes through all of the unused blocks in a filesystem and dispatches TRIM commands for those areas. The fstrim utility is usually run by cron as a scheduled task. As of November 2013[update], it is used by the Ubuntu Linux distribution, in which it is enabled only for Intel and Samsung solid-state drives for reliability reasons; the vendor check can be disabled by editing the file /etc/cron.weekly/fstrim using instructions contained within the file itself.[152]

Since 2010, standard Linux disk utilities have taken care of appropriate partition alignment by default.[153]

Performance considerations

During installation, Linux distributions usually do not configure the installed system to use TRIM, and thus the /etc/fstab file requires manual modification.[154] This is because the current Linux TRIM command implementation might not be optimal.[155] It has been shown to cause performance degradation instead of a performance increase under certain circumstances.[156][157] As of January 2014[update], Linux sends individual TRIM commands instead of the vectorized lists of TRIM ranges recommended by the TRIM specification.[158] This problem has existed for years, and it is not known when the Linux TRIM strategy will be reworked to fix the issue.

For performance reasons, it is recommended to switch the I/O scheduler from the default CFQ (Completely Fair Queuing) to NOOP or Deadline. CFQ was designed for traditional magnetic media and seek optimizations, so many of those I/O scheduling efforts are wasted when used with SSDs. By design, SSDs offer much greater parallelism for I/O operations, so it is preferable to leave scheduling decisions to their internal logic – especially for high-end SSDs.[159][160]
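
On most distributions the active scheduler can be inspected and changed per device through sysfs; in this sketch, sda is a placeholder device name, and the bracketed entry in the output is the scheduler currently in use:

```
$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
$ echo noop | sudo tee /sys/block/sda/queue/scheduler
```

Note that a change made this way lasts only until reboot; making it permanent requires a kernel boot parameter or a udev rule.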

A scalable block layer for high-performance SSD storage, developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. It leverages the performance offered by SSDs and NVM Express by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), removing bottlenecks and allowing much higher levels of I/O parallelization. As of version 3.18 of the Linux kernel, released on 7 December 2014, the VirtIO block driver and the SCSI layer (which is used by Serial ATA drivers) have been modified to use this new interface; other drivers will be ported in following releases.[161][162][163][164]

Mac OS X

Mac OS X versions since 10.6.8 (Snow Leopard) support TRIM but only when used with an Apple-purchased SSD.[165] There is also a technique to enable TRIM in earlier versions, though it is uncertain whether TRIM is utilized properly if enabled in versions before 10.6.8.[166] TRIM is generally not automatically enabled for third-party drives, although it can be enabled by using third-party utilities such as Trim Enabler. The status of TRIM can be checked in the System Information application or in the system_profiler command-line tool.

Microsoft Windows

Versions of Microsoft Windows prior to 7 do not take any special measures to support solid state drives. Starting from Windows 7, the standard NTFS file system provides TRIM support (other file systems do not support TRIM[167]).

By default, Windows 7, 8, and 8.1 execute TRIM commands automatically if the device is detected to be a solid-state drive. To change this behavior, in the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem the value DisableDeleteNotification can be set to 1 to prevent the mass storage driver from issuing the TRIM command. This can be useful in situations where data recovery is preferred over wear leveling (in most cases, TRIM irreversibly resets all freed space).[168]
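
The same change can be expressed as a registry file. This fragment is for illustration only and should be applied with care, since it disables TRIM system-wide:

```
Windows Registry Editor Version 5.00

; Disable automatic TRIM by setting DisableDeleteNotification = 1
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"DisableDeleteNotification"=dword:00000001
```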

Windows implements the TRIM command for more than just file-delete operations. The TRIM operation is fully integrated with partition- and volume-level commands such as format and delete, with file-system commands relating to truncation and compression, and with the System Restore (also known as Volume Snapshot) feature.[169]

Windows 7, 8, and 8.1

Windows 7 has native support for SSDs,[170][171] with a similar level of support in Windows 8 and 8.1. The operating system detects the presence of an SSD and optimizes operation accordingly. For SSD devices, Windows disables defragmentation, SuperFetch and ReadyBoost, the latter two being boot-time and application-prefetching operations. It also includes support for the TRIM command to reduce garbage collection of data which the operating system has already determined is no longer valid. Without support for TRIM, the SSD would be unaware that this data is invalid and would unnecessarily continue to rewrite it during garbage collection, causing further wear on the SSD.[172][173]

Windows Vista

Windows Vista generally expects hard disk drives rather than SSDs.[174][175] Windows Vista includes ReadyBoost to exploit characteristics of USB-connected flash devices, but for SSDs it only improves the default partition alignment to prevent read-modify-write operations that reduce the speed of SSDs. Most SSDs are typically split into 4 kB sectors, while most systems are based on 512 byte sectors with their default partition setups unaligned to the 4 KB boundaries.[176] The proper alignment does not help the SSD’s endurance over the life of the drive; however, some Vista operations, if not disabled, can shorten the life of the SSD.

Disk defragmentation should be disabled because the location of the file components on an SSD does not significantly impact its performance, while moving files to make them contiguous using the Windows Defrag routine causes unnecessary write wear on the limited number of P/E cycles on the SSD. The SuperFetch feature will not materially improve system performance and causes additional overhead in the system and SSD, although it does not cause wear.[177] There is no official information to confirm whether Windows Vista sends TRIM commands to a solid-state drive.

ZFS

Solaris as of version 10 Update 6 (released in October 2008), and recent versions of OpenSolaris, Solaris Express Community Edition, Illumos, Linux with ZFS on Linux, and FreeBSD all can use SSDs as a performance booster for ZFS. A low-latency SSD can be used for the ZFS Intent Log (ZIL), where it is named the SLOG. This is used every time a synchronous write to the disk occurs. An SSD (not necessarily low-latency) may also be used for the level 2 Adaptive Replacement Cache (L2ARC), which is used to cache data for reading. When used either alone or in combination, large increases in performance are generally seen.[178]
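
Attaching the devices described above uses the standard zpool add syntax; the pool name (tank) and device names (ada1, ada2) below are placeholders:

```
# Add a low-latency SSD as the ZFS intent log (SLOG)
zpool add tank log ada1
# Add an SSD as a level 2 ARC (L2ARC) read cache
zpool add tank cache ada2
```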


ZFS for FreeBSD introduced support for TRIM on September 23, 2012.[179] The code builds a map of regions of data that were freed; on every write, the code consults the map and eventually removes ranges that were freed before but are now overwritten. A low-priority thread TRIMs the recorded ranges when the time comes.

The Unix File System (UFS) also supports the TRIM command.[180]

Swap partitions

  • According to Microsoft’s former Windows division president Steven Sinofsky, "there are few files better than the pagefile to place on an SSD".[181] According to collected telemetry data, Microsoft had found the pagefile.sys to be an ideal match for SSD storage.[181]
  • Linux swap partitions are by default performing TRIM operations when the underlying block device supports TRIM, with the possibility to turn them off, or to select between one-time or continuous TRIM operations.[149][150][151]
  • If an operating system does not support using TRIM on discrete swap partitions, it might be possible to use swap files inside an ordinary file system instead. For example, OS X does not support swap partitions; it only swaps to files within a file system, so it can use TRIM when, for example, swap files are deleted.
  • DragonFly BSD allows SSD-configured swap to also be used as file system cache.[182] This can be used to boost performance on both desktop and server workloads. The bcache, dm-cache and Flashcache projects provide a similar concept for the Linux kernel.[183]

Standardization organizations

The following are noted standardization organizations and bodies that work to create standards for solid-state drives (and other computer storage devices). The table below also includes organizations which promote the use of solid-state drives. This is not necessarily an exhaustive list.

Organization or committee | Subcommittee of | Purpose
INCITS | N/A | Coordinates technical standards activity between ANSI in the USA and joint ISO/IEC committees worldwide
JEDEC | N/A | Develops open standards and publications for the microelectronics industry
JC-64.8 | JEDEC | Focuses on solid-state drive standards and publications
NVMHCI | N/A | Provides standard software and hardware programming interfaces for nonvolatile memory subsystems
SATA-IO | N/A | Provides the industry with guidance and support for implementing the SATA specification
SFF Committee | N/A | Works on storage industry standards needing attention when not addressed by other standards committees
SNIA | N/A | Develops and promotes standards, technologies, and educational services in the management of information
SSSI | SNIA | Fosters the growth and success of solid-state storage



Solid-state drive technology has been marketed to the military and niche industrial markets since the mid-1990s.[184]

Along with the emerging enterprise market, SSDs have been appearing in ultra-mobile PCs and a few lightweight laptop systems, adding significantly to the price of the laptop, depending on the capacity, form factor and transfer speeds. For low-end applications, a USB flash drive may be obtainable for anywhere from $10 to $100 or so, depending on capacity and speed; alternatively, a CompactFlash card may be paired with a CF-to-IDE or CF-to-SATA converter at a similar cost. Either of these requires that write-cycle endurance issues be managed, either by refraining from storing frequently written files on the drive or by using a flash file system. Standard CompactFlash cards usually have write speeds of 7 to 15 MB/s while the more expensive upmarket cards claim speeds of up to 60 MB/s.

One of the first mainstream releases of SSD was the XO Laptop, built as part of the One Laptop Per Child project. Mass production of these computers, built for children in developing countries, began in December 2007. These machines use 1,024 MiB SLC NAND flash as primary storage, which is considered more suitable for the harsher-than-normal conditions in which they are expected to be used. Dell began shipping ultra-portable laptops with SanDisk SSDs on April 26, 2007.[185] Asus released the Eee PC subnotebook on October 16, 2007, with 2, 4 or 8 gigabytes of flash memory.[186] On January 31, 2008, Apple released the MacBook Air, a thin laptop with an optional 64 GB SSD; on the Apple Store, this option cost $999 more than the standard 80 GB 4200 RPM hard disk drive.[187] Another option, the Lenovo ThinkPad X300 with a 64 gigabyte SSD, was announced by Lenovo in February 2008.[188] On August 26, 2008, Lenovo released the ThinkPad X301 with a 128 GB SSD option, which added approximately US$200 to the price.[189]

Some Mtron solid-state drives

In 2008, low-end netbooks appeared with SSDs. In 2009, SSDs began to appear in laptops.[185][187]

On January 14, 2008, EMC Corporation (EMC) became the first enterprise storage vendor to ship flash-based SSDs in its product portfolio when it announced it had selected STEC, Inc.'s Zeus-IOPS SSDs for its Symmetrix DMX systems.[190]

In 2008 Sun released the Sun Storage 7000 Unified Storage Systems (codenamed Amber Road), which use both solid state drives and conventional hard drives to take advantage of the speed offered by SSDs and the economy and capacity offered by conventional hard disks.[191]

Dell began to offer optional 256 GB solid state drives on select notebook models in January 2009.[192][193]

In May 2009, Toshiba launched a laptop with a 512 GB SSD.[194][195]

Since October 2010, Apple’s MacBook Air line has used a solid state drive as standard.[196]

In December 2010, the OCZ RevoDrive X2 PCIe SSD was available in 100 GB to 960 GB capacities, delivering sequential speeds of over 740 MB/s and random small-file writes of up to 120,000 IOPS.[197]

In November 2010, Fusion-io released its highest-performing SSD, named ioDrive Octal, utilising a PCI Express x16 Gen 2.0 interface, with 5.12 TB of storage space, a read speed of 6.0 GB/s, a write speed of 4.4 GB/s, and a low latency of 30 microseconds. It delivers 1.19 million read IOPS and 1.18 million write IOPS at a 512-byte block size.[198]

In 2011, computers based on Intel’s Ultrabook specifications became available. These specifications dictate that Ultrabooks use an SSD. These are consumer-level devices (unlike many previous flash offerings aimed at enterprise users), and represent the first widely available consumer computers using SSDs aside from the MacBook Air.[199]

At CES 2012, OCZ Technology demonstrated the R4 CloudServ PCIe SSDs, capable of reaching transfer speeds of 6.5 GB/s and 1.4 million IOPS.[200] Also announced was the Z-Drive R5, available in capacities up to 12 TB and capable of reaching transfer speeds of 7.2 GB/s and 2.52 million IOPS using a PCI Express x16 Gen 3.0 interface.[201]

In December 2013, Samsung introduced and launched the industry’s first 1 TB mSATA SSD.[202]

Quality and performance

Main article: Disk drive performance characteristics

SSD technology has been developing rapidly. Most of the performance measurements used on disk drives with rotating media are also used on SSDs. Performance of flash-based SSDs is difficult to benchmark because of the wide range of possible conditions. In a test performed in 2010 by Xssist, using IOmeter with a 4 kB random 70% read/30% write workload at queue depth 4, the IOPS delivered by the Intel X25-E 64 GB G1 started around 10,000, dropped sharply after 8 minutes to 4,000, and continued to decrease gradually for the next 42 minutes. IOPS varied between 3,000 and 4,000 from around 50 minutes onwards for the rest of the 8+ hour test run.[203]

Write amplification is the major reason for the change in performance of an SSD over time. Designers of enterprise-grade drives try to avoid this performance variation by increasing over-provisioning, and by employing wear-leveling algorithms that move data only when the drives are not heavily utilized.[204]


SSD shipments were 11 million units in 2009[205] and 17.3 million units in 2011,[206] for a total of US$5 billion,[207] and 39 million units in 2012; they were expected to rise to 83 million units in 2013,[208] to 201.4 million units in 2016,[206] and to 227 million units in 2017.[209]

Revenues for the SSD market (including low-cost PC solutions) worldwide totalled $585 million in 2008, rising over 100% from $259 million in 2007.[210]


Gartner Critical Capabilities for Scale-Out File System Storage – 27 January 2015


Analyst(s): Arun Chandrasekaran, Santhosh Rao


Analytics, collaboration and cost-effective data retention are key imperatives that are driving interest in scale-out file system storage for I&O leaders. This research compares nine scale-out file system storage products on their capability to support key use cases via seven critical capabilities.



Key Findings

  • The products in this research are sufficiently differentiated from each other by use case or specific capabilities unique to each product to make them appropriate for purchase.
  • Scale-out file system storage products face competition from object storage products, due to object storage’s better scalability, easier management and robust multitenancy, as well as from traditional, scale-up network-attached storage products, due to NAS’s growing capacity and better interoperability.
  • Although nascent, cloud-based deployments of scale-out file system products are expected to challenge the growth of on-premises deployments due to the promise of low entry costs from public cloud infrastructure as a service, rapid scalability and a growing ecosystem of independent software vendors.
  • Big data analytics and cloud storage are emerging use cases for scale-out file system storage.


Recommendations

  • Focus on the workload characteristics by understanding storage needs across the critical capabilities, so that the appropriate product can be implemented to meet workload requirements.
  • Validate performance claims with proofs of concept, given that performance varies greatly by protocol type and file sizes.
  • Evaluate scale-out file system storage products for their interoperability with the ISV solutions that are dominant in your environment and for their support of public cloud IaaS.
  • Include an adequate training budget during procurement, because managing scale-out file system storage differs from storage area network management; as a result, storage administrators may need more training.

What You Need to Know

The growing demand for storage products that can scale linearly in capacity and performance to manage unstructured data is propelling scale-out file system products to the forefront for use cases such as high-performance computing (HPC), file sharing, backup and archiving.

In this research, we rate nine scale-out file system storage products on their ability to support four use cases by means of those products’ capabilities, which are critical to those cases. As revealed by the analysis in this research, the evaluated products, for the most part, vary greatly in their architecture, capabilities and alignment with the aforementioned use cases. Although many vendors continue to fine-tune their products to focus on specific use cases, the leading vendors in this research cater to a wide variety of use cases in enterprise environments.

I&O leaders must carefully select a scale-out file system storage product through a rigorous planning process that involves thoroughly evaluating the products’ critical capabilities. In addition, because awareness of scale-out file system storage and global namespaces is uncommon in enterprise IT organizations, I&O leaders should allocate a portion of the scale-out file system storage budget to training on the technology.


Critical Capabilities Use-Case Graphics

Figure 1. Vendors’ Product Scores for the Overall Use Case

Source: Gartner (January 2015)

Figure 2. Vendors’ Product Scores for the Commercial HPC Use Case

Source: Gartner (January 2015)

Figure 3. Vendors’ Product Scores for the Large Home Directories Use Case

Source: Gartner (January 2015)

Figure 4. Vendors’ Product Scores for the Backup Use Case

Source: Gartner (January 2015)

Figure 5. Vendors’ Product Scores for the Archiving Use Case

Source: Gartner (January 2015)


Dell Fluid File System

Dell Fluid File System (FluidFS) is based on the Exanet assets, which Dell acquired at the end of 2009. FluidFS supports several Dell storage arrays in the back end, including EqualLogic and Compellent. In the Version 3 release of FluidFS, which came out in June 2013, Dell added several new features, such as data reduction (e.g., deduplication and compression), NFSv4, 10GbE support and a unified management interface with Compellent Enterprise Manager. Many of the capabilities evaluated in this research, such as capacity, performance and resiliency, vary depending on FluidFS’s back-end storage arrays.

For example, the EqualLogic solution is designed to provide an easy-to-use solution for small and midsize businesses, while the Compellent solution targets performance-oriented deployments. FluidFS has a scale-out architecture based on high-availability, active-active pairs, and it stripes metadata and data across nodes in the cluster for performance and data protection. Although the Version 3 release was a significant improvement over the prior version, FluidFS still does not support multitenancy or WORM (see Note 1). It also lacks native tiering and only supports Server Message Block (SMB) v2.0, although Dell has indicated availability of SMB 3.0 support in 1Q15.

EMC Isilon

Among the distributed file systems for scalable capacity and performance on the market, Isilon stands out, with its easy-to-deploy clustered storage appliance approach and well-rounded feature sets. The product includes a tightly integrated file system, volume manager and data protection in one software layer; a clusterwide snapshot capability at a granular level; asynchronous replication; high availability with multiple failover nodes; fast disk rebuild time; and a policy-based migration tool. In 2014, EMC added features such as postprocess deduplication and SMB 3.0 multichannel support.

From a performance standpoint, Isilon backup processes can be accelerated by adding the A100 performance accelerator node. From a security standpoint, Isilon has native encryption for data at rest, and SmartLock provides WORM capabilities that meet compliance requirements, such as SEC 17a-4 and HIPAA. Isilon also supports Hadoop deployments by uniquely supporting HDFS as a protocol. Isilon does not support compression, and geographically distributed deployments of Isilon can be complex and expensive to manage, due to the replication overhead and the lack of dispersed erasure coding.

Hitachi NAS Platform

The Hitachi NAS (HNAS) series is based on the SiliconFS object-based file system. In 2014, Hitachi introduced support for object-based replication, hardware-accelerated deduplication and automated tiering to the cloud via AWS S3 API. The HNAS deduplication engine is unique in that it executes at the hardware level, using field-programmable gate array (FPGA), thus relieving the system CPU and memory of this process. The HNAS deduplication engine automatically throttles based on workload levels. HNAS also introduced data migration tools that decrease data migration time by automatically assessing file data on third-party NFS servers and setting up associations. HNAS’s file-tiering capabilities include a built-in policy manager and a mechanism to automatically place the metadata table in the fastest tier to increase directory search speeds.

HNAS integrates with a wide variety of backup and archiving independent software vendors (ISVs). Although the HNAS 4000 series is rated highly on performance, it can only scale up to eight nodes and lacks compression support. HNAS lacks support for the SMB 3.0 protocol, which can affect its availability in Windows environments, because it won't be able to handle transparent failovers.

HP StoreAll Storage

HP StoreAll Storage is based on the Ibrix parallel file system and has unique features such as HP Labs’ StoreAll Express Query, which can perform extremely fast metadata searches of massive content repositories. HP StoreAll supports up to 16PB within a single namespace. Automated, policy-based data tiering is a standard feature in the product, and integration with tools such as HP Systems Insight Manager (SIM) and HP Storage Essentials simplifies manageability. The StoreAll series has native retention, WORM and auditing features, OpenStack Swift API support and broad support for archiving ISVs, making it an attractive product for petabyte (PB)-scale archiving. HP also packages all hardware and software components in a single, unified pricing scheme.

HP StoreAll has only modest efficiency features and relies on back-end storage arrays for thin provisioning. In addition, the product lacks deduplication and compression, and HP has few public performance benchmarks for the product.

Huawei OceanStor 9000

Huawei offers two clustered network-attached storage (NAS) products: the OceanStor N8000 series and the OceanStor 9000 series. Gartner evaluated the latter in this research. OceanStor 9000 is based on Huawei’s proprietary Wushan file system and can scale out to 288 nodes, which is one of the highest node counts among scale-out file system storage products. Huawei OceanStor 9000 also supports NFS, SMB and InfiniBand protocols; the Amazon Web Services S3 API on the front end; and an HDFS plug-in. To further stimulate demand for the product, Huawei has been aggressive in submitting it to publicly available performance benchmarks, such as those from the Standard Performance Evaluation Corp. (SPEC).

OceanStor 9000 has a resilient architecture and supports erasure coding and internode balancing; however, it lacks efficiency features, such as deduplication and compression. Huawei’s service, support and reseller network continues to be weak for this product line outside China.

IBM Elastic Storage

IBM’s Elastic Storage is a software-only platform based on the mature and scalable General Parallel File System (GPFS). Elastic Storage supports object access, file sharing, virtualization and analytics on a single converged platform. It is closely integrated with IBM’s FileNet for content management. The product scored highly on the scalability and performance capabilities. In 2014, IBM made a number of enhancements to Elastic Storage, adding an interface for OpenStack Swift (for object storage), as well as adding file encryption and NFSv4 support. Elastic Storage also includes a rich replication feature that supports two-way, three-way and metadata-level replication of individual files or the entire file system.

Elastic Storage is widely deployed for HPC and archive use cases, with actual deployments exceeding 10PB production capacities in some cases. However, Elastic Storage lacks features such as built-in deduplication, compression and thin provisioning. Although IBM has made improvements by modeling the graphical user interface (GUI) after the popular XIV interface, overall manageability continues to be complex.

NetApp Clustered Data Ontap 8.x

Clustered Data Ontap is a unified storage OS from NetApp, and this research evaluates Clustered Data Ontap v.8.x, which adds a global namespace, load balancing capabilities and federated management to the feature set that has made its nonclustered file systems popular. Clustered Data Ontap can support as many as 12 failover node pairs, which can scale to more than 100PB. In addition, the product enables user-transparent migration among different node pairs to perform load balancing, easing management complexities with high availability in a large environment. NetApp has been in a market-leading position in consolidating Windows and Unix/Linux file servers for home directories, and Clustered Data Ontap brings its NFS and SMB (v3.0) support into a more scalable environment.

The latest release of Clustered Data Ontap (v.8.3), which launched in October 2014, nearly brought the product to feature parity with the traditional seven-mode, including the addition of MetroCluster. The v.8.3 release does not include support for seven-mode, clearly signaling NetApp’s intention to focus its innovation on the Clustered Data Ontap architecture moving forward. With regard to the critical capabilities covered in this research, Clustered Data Ontap is highly rated for its storage efficiency, as well as interoperability, due to robust thin provisioning, data reduction and caching capabilities, as well as its tight integration with leading ISV products. Data Ontap 8.3 does not support SnapLock for WORM capabilities, lacks a parallel file system and involves a complex migration process for most seven-mode customers.

Quantum StorNext

Quantum is an established producer of data protection and data management products and is especially known for its disk backup appliances and tape libraries. Quantum’s StorNext scale-out file system offering is purpose-built to address the high-performance streaming of rich media, cross-OS file sharing and long-term archiving in industries such as life sciences, energy, media and entertainment, and government. In the past year, Quantum has enhanced StorNext, giving it the ability to handle bigger datasets and more IP-network-centric workloads, and to embed more-flexible, automated storage tiering.

In August 2014, Quantum introduced StorNext Connect, enabling easier deployment and operational management of multiple StorNext systems. StorNext is available as a software-only solution and as an appliance with dedicated hardware for metadata controllers, NAS gateways and archival storage. The product has tight integration with tape and Quantum’s object storage, and takes advantage of policy-based tiering to lower the total cost of ownership (TCO). Although StorNext is rated well for its performance, it lacks thin provisioning and snapshots. The product line remains niche, lacking broad appeal across vertical industries and use cases.

Red Hat Storage Server

The acquisition of Gluster by Red Hat in 4Q11 was beneficial for Gluster, since it brought backing from a pioneer in open-source software. Since then, Red Hat has relaunched Gluster’s open-source storage product, GlusterFS, with more stability, better features and additional prepackaged software. The product is a scale-out, multiprotocol (NFS, RESTful APIs, Server Message Block [SMB]), open-source storage software solution with PB-scale capacity and improved snapshot and replication capabilities. Red Hat Storage Server is a preintegrated software product consisting of Red Hat Enterprise Linux (RHEL), GlusterFS and the XFS file system, and is installed on bare-metal hardware or can be installed in kernel-based virtual machine (KVM) or VMware hypervisors to pool storage resources.

The product benefits from Red Hat’s complementary open-source community projects and technical support capabilities, which include community, standard and premium support options. However, the product lacks some capabilities that enterprise IT buyers expect in a file system product, such as tiering and native data reduction features (compression and deduplication). The IHV ecosystem supporting the product is small, but growing.


Traditionally, the major market for scale-out file system storage has been academic and commercial HPC environments for workloads such as genomic sequencing, financial modeling, 3D animation, weather forecasting and seismic analysis. As such, scale-out file system storage solutions have focused on scalable capacity, raw computing power and aggregated bandwidth, with data protection, security and efficiency only as secondary considerations.

However, ever-increasing data growth — chiefly, unstructured data growth — in the enterprise has led many I&O leaders in these organizations to deploy the technology to support large home directories, backup and archiving. For these use cases, better security and multitenancy, easier manageability, robust data protection and ISV interoperability are growing in importance.

In addition to simply supporting these four use cases (academic HPC not included), I&O leaders are embracing scale-out file system storage for its added benefits. First and foremost, the technology includes embedded functionality for storage management, resiliency and security at the software level, easing the tasks related to those functions in the I&O organization. The technology also offers nearly linear horizontal scaling and delivers highly aggregated performance through parallelism. This means that scale-out file system storage enables pay-as-you-grow storage capacity and performance, making it a cost-effective alternative to scale-up storage, in particular, where I&O leaders are forced to purchase more storage than needed to ensure storage growth does not outpace capacity. Lastly, most scale-out file system storage vendors use standard x86 hardware, thus reducing the hardware acquisition costs.

Big data analytics (a scenario in which these file systems could run map/reduce processing jobs) and cloud storage (for file sync and share and other SaaS workloads) are emerging use cases for scale-out file system storage products.

Product/Service Class Definition

Scale-out file system storage is a category of storage product that uses a global namespace to aggregate file data residing across loosely coupled, distributed storage modules or nodes. In a scale-out file system storage environment, capacity, performance, throughput and connectivity scale with the number of nodes in the system. That said, scalability is often limited by storage hardware and networking architectural constraints.

Critical Capabilities Definition


Capacity

The ability of the product to support growth in storage capacity in a nearly linear manner, with capacity requirements often extending from hundreds of TBs to the PB scale.

Scoring for this capability takes into consideration the scalability limitations of a product’s file system capacity, in theory and in real-world practice. Scalability limitations include maximum storage capacity, the number of files/directories/user connections supported, and the number of nodes and disk drives supported by a file system, volume or namespace.

Storage Efficiency

The ability of the product to support storage efficiency technologies, such as compression, deduplication, thin provisioning and automated tiering to reduce TCO.

Scoring for this capability takes into consideration data reduction ratios, performance impact of data reduction and granularity and application transparency of tiering algorithms.


Interoperability

The ability of the product to support third-party ISV applications, public cloud APIs and multivendor hypervisors.

Scoring for this capability takes into consideration the breadth and depth of ISV/independent hardware vendor (IHV) support, integration with common hypervisor and cloud APIs, flexible deployment models and support for various access protocols.


Manageability

The ability of the product to support automation, management and monitoring, and to provide reporting tools.

Reporting tools and programs can include single-pane management consoles, as well as monitoring and reporting tools designed to help storage team members seamlessly manage systems, monitor system usage and efficiencies, and anticipate and correct system alarms and fault conditions before or soon after they occur.


Performance

The aggregated IOPS, bandwidth and low latency that can be delivered by the cluster functioning at maximum specifications and observed in real-world configurations.

Scoring for this capability takes into consideration real-world implementations, as well as publicly available performance benchmarks, such as SPEC.


Resiliency

The ability of the product to provide a high level of system availability and data protection.

Resiliency features contributing to this capability include high tolerance for simultaneous disk and/or node failures, fault isolation techniques, built-in protection against data corruption and other techniques (such as snapshots and replication) to meet customers’ recovery point objectives (RPOs) and recovery time objectives (RTOs).

Security and Multitenancy

The depth and breadth of a product’s native security and multitenancy features, including granular access control, user-driven encryption, malware protection and data immutability.

Scoring was based on granularity of multitenancy settings, depth of data-at-rest encryption capabilities, integration with Lightweight Directory Access Protocol (LDAP)/Active Directory systems with user mapping, role-based access control and WORM capabilities for governance and compliance.

Use Cases


Overall

This is the general rating for scale-out file system storage.


Archiving

In this use case, an enterprise uses scale-out file system storage to meet the requirements of long-term data retention.

Scale-out file system products have been used as an archiving target for regulatory and cost optimization reasons. Security features that can guarantee data immutability (such as WORM), capacity scalability and resiliency are highly weighted for this use case.


Backup

In this use case, enterprises use scale-out file system storage to meet the requirements of large-scale, disk-based backup with low RTOs and RPOs.

I&O leaders have used scale-out file system storage as a backup target for years. This is because scale-out file system storage provides added scalability for large backup datasets to meet increasing demands for disk-based backup. Resiliency, storage efficiency and interoperability with a variety of backup ISVs are important selection considerations, and are heavily weighted.

Commercial HPC

In this use case, an enterprise uses scale-out file system storage to provide high throughput and parallel read-and-write access to large volumes of data.

Commercial HPC is the most prominent use case for scale-out file system storage, and most scale-out file system storage products are built to address it. Because they are the most important factors in choosing a product for commercial HPC, performance, capacity and resiliency are weighted heavily in this use case.

Large Home Directories

Enterprises use scale-out file system storage to support large home directories, as they would with scale-up NAS, only on a larger scale.

In environments characterized by file server sprawl, scale-out file system storage simplifies storage management by eliminating physical, client-to-server mappings through global namespaces, making it an ideal platform to perform tasks such as automated storage tiering and user-transparent data migration. Scale-out file system storage’s ability to provide operational simplicity and enable linear scalability also makes it particularly useful for consolidating file server or NAS filer sprawl. Resiliency, storage efficiency, and performance are weighted heavily in this use case.

Vendors Added and Dropped


Added

Huawei: In the 2013 release of this Critical Capabilities, Huawei did not yet meet our inclusion criteria, because the company did not have at least 10 customers with 300TB or more in production and/or did not have a fully owned product. However, the company now meets these criteria, along with the other requirements for inclusion.


Dropped

Nexenta has been excluded from this study due to its lack of support for the namespace cluster plug-in since the 4.0 release of NexentaStor.

Inclusion Criteria

The products covered in this research include scalable file system storage offerings with a sizable footprint in the market. In this research, we define scalable file system storage as storage that allows for (at a minimum):

  • 100TB per file system
  • 1PB per namespace, which can span two nodes or more

To be included in this research, scale-out file system storage products need:

  • At least 10 production customers — all with at least 300TB residing on the product
  • Support for horizontal scaling of drive capacity and throughput in a cluster mode or in independent node additions with a global namespace
  • The ability to support all four use cases in this research
  • Three or more vendor-provided customer references for the product
  • A deployment in at least two major global geographies (e.g., North America, EMEA, Latin America or the Asia/Pacific region)

Vendors, such as Intel, Panasas and DataDirect Networks, that focus on the technical computing (academic HPC) market and/or don’t cater to all the use cases outlined in this document are excluded from this study.

Table 1. Weighting for Critical Capabilities in Use Cases
Critical Capabilities Overall Archiving Backup Commercial HPC Large Home Directories
Capacity 15% 20% 12% 18% 10%
Storage Efficiency 13% 8% 20% 3% 20%
Interoperability 9% 10% 15% 7% 6%
Manageability 11% 12% 8% 12% 12%
Performance 20% 10% 15% 40% 15%
Resiliency 21% 18% 25% 15% 25%
Security and Multitenancy 11% 22% 5% 5% 12%
Total 100% 100% 100% 100% 100%
As of January 2015

Source: Gartner (January 2015)

Critical Capabilities Rating

Table 2. Product/Service Rating on Critical Capabilities
Product or Service Ratings Dell Fluid File System EMC Isilon Hitachi NAS Platform HP StoreAll Storage Huawei OceanStor 9000 IBM Elastic Storage NetApp Clustered Data Ontap 8.x Quantum StorNext Red Hat Storage Server
Capacity 3.5 4.3 3.8 3.9 4.2 4.8 4.2 3.7 3.3
Storage Efficiency 3.1 3.5 3.5 2.7 2.9 2.8 4.3 3.1 2.1
Interoperability 3.4 4.2 3.7 3.8 3.2 3.8 4.7 2.6 3.5
Manageability 3.0 4.3 3.6 3.7 3.0 3.7 4.1 3.4 2.7
Performance 3.6 4.1 3.9 2.6 4.1 4.3 3.9 4.1 2.8
Resiliency 3.5 4.3 3.8 3.9 3.4 4.1 4.2 3.2 3.7
Security and Multitenancy 2.9 4.5 3.9 4.2 3.2 3.7 3.8 3.0 3.3
As of January 2015

Source: Gartner (January 2015)

Table 3 shows the product/service scores for each use case. The scores, which are generated by multiplying the use case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.

Table 3. Product Score in Use Cases
Use Cases Dell Fluid File System EMC Isilon Hitachi NAS Platform HP StoreAll Storage Huawei OceanStor 9000 IBM Elastic Storage NetApp Clustered Data Ontap 8.x Quantum StorNext Red Hat Storage Server
Overall 3.34 4.17 3.76 3.49 3.51 3.96 4.14 3.39 3.08
Archiving 3.28 4.25 3.77 3.71 3.48 3.99 4.13 3.30 3.17
Backup 3.35 4.11 3.73 3.45 3.43 3.86 4.22 3.29 3.07
Commercial HPC 3.43 4.20 3.81 3.33 3.74 4.18 4.09 3.62 3.07
Large Home Directories 3.30 4.13 3.74 3.47 3.40 3.83 4.15 3.33 3.03
As of January 2015

Source: Gartner (January 2015)

To determine an overall score for each product/service in the use cases, multiply the ratings in Table 2 by the weightings shown in Table 1.
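The calculation above can be checked against the published tables. A minimal sketch in Python, using the "Overall" weightings from Table 1 and the Dell Fluid File System ratings from Table 2:

```python
# Table 1, "Overall" column (weightings sum to 100%).
weights = {
    "Capacity": 0.15,
    "Storage Efficiency": 0.13,
    "Interoperability": 0.09,
    "Manageability": 0.11,
    "Performance": 0.20,
    "Resiliency": 0.21,
    "Security and Multitenancy": 0.11,
}

# Table 2, Dell Fluid File System column.
ratings = {
    "Capacity": 3.5,
    "Storage Efficiency": 3.1,
    "Interoperability": 3.4,
    "Manageability": 3.0,
    "Performance": 3.6,
    "Resiliency": 3.5,
    "Security and Multitenancy": 2.9,
}

# Use-case score = sum of (weighting x rating) over the critical capabilities.
score = sum(weights[c] * ratings[c] for c in weights)
print(round(score, 2))  # prints 3.34, matching the "Overall" row of Table 3
```

Repeating the same sum with a different column of Table 1 reproduces the corresponding use-case row of Table 3 for each product.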

Gartner Critical Capabilities for General-Purpose, Midrange Storage Arrays – 20 November 2014


Analyst(s): Stanley Zaffos, Valdis Filks, Arun Chandrasekaran


I&O leaders and storage architects can improve their storage infrastructures’ agility and reduce costs by mapping application needs to storage array capabilities. This research quantifies eight critical measures of product attractiveness across six high-impact, midrange storage array use cases.



Key Findings

  • Traditional dual-controller architectures will continue to dominate the midrange storage market during the next three to five years, even as new scale-up, scale-out, flash and hybrid storage arrays compete for market share.
  • Server virtualization, desktop virtualization, big data analytics and cloud storage are reprioritizing the traditional metrics of product attractiveness.
  • The compression of product differentiation among various vendor offerings and the availability of easy-to-use migration tools are diminishing the strength of vendor lock-ins.
  • Security and concerns with migration and conversion costs among competing storage vendors’ arrays are declining in importance relative to vendor reputation, support capabilities, performance, reliability and scalability.


Recommendations

  • Take a top-down approach to infrastructure design that identifies high-impact workloads, conducts workload profiling, sets service-level objectives, quantifies future growth rates, and examines the impact on contracts with storage service and disaster recovery providers.
  • Focus on externally visible measures of product attractiveness, such as input/output operations per second, throughput and response times, rather than configuration differences in cache, solid-state drives or hard-disk drive geometries, when choosing a storage solution.
  • Build a cross-functional team that includes users, developers, operations, finance, legal and senior management to provide greater insight into planned application deployments and changes in business needs, and to unmask any stakeholders’ hidden agendas, such as an unwillingness to give up budget or control over the arrays that support their workloads.
  • Conduct a what-if analysis to determine how changes in organizational data growth rates and in planned service lives affect the attractiveness of various shortlisted solutions.

What You Need to Know

With spending on storage growing faster than IT budgets, overdelivering against application needs with respect to availability, performance and data protection is a luxury that most IT organizations can no longer afford. The ability to build agile, manageable and cost-effective storage infrastructures will depend on the creation of methodologies that stack-rank vendors, storage arrays and bids in their environments.

Few "bad" storage arrays are being sold, and none of the 16 arrays we have selected for inclusion in this research are in that category. The differences among the arrays ranked at the top of the use-case charts and the arrays at the bottom are small, and, to a significant extent, they reflect differences in design points and ecosystem support. Hence, array differentiation is minimal, and the real challenge of performing a successful storage infrastructure upgrade is not designing an infrastructure upgrade that works, but designing an upgrade that optimizes agility and service-level objectives (SLOs), and minimizes total cost of ownership (TCO).

Users that don’t need the scalability and availability of high-end architectures, or the ecosystem support that the lower-ranked arrays evaluated here lack, are encouraged to consider those arrays, because they may offer benefits in a given environment and be more aggressively priced. Although optimization adds a layer of complexity to the design of the storage infrastructure upgrade, users should be aware that choosing a suboptimal solution is likely to have only a moderate impact on deployment and ownership costs for the following reasons:

  • Product advantages are usually temporary in nature — Gartner refers to this phenomenon as the "compression of product differentiation."
  • Most clients report that differences in management and monitoring tools, as well as ecosystem support between various vendors’ offerings, are not enough to change staffing requirements or SLOs.
  • Storage ownership costs, while growing as a percentage of the total IT spending, still account for less than 10% (6.5% in 2013) of most IT budgets.

Nonproduct considerations, such as vendor relationships, presales and postsales support capabilities (e.g., training, past experience and pricing), that are not strictly critical capabilities should be significant considerations in choosing solutions for the high-impact use cases explored in this research. More specifically, this includes consolidation, online transaction processing (OLTP), server virtualization and virtual desktop infrastructure (VDI), business analytics and the cloud. (For more information about the vendors covered in this research, see "Hype Cycle for Customer Analytic Applications, 2014.")



Much of the storage array space has been dividing into two general-purpose markets:

  • Hybrid array
  • Solid-state array (SSA)

Gartner appreciates the entrenched usage and appeal of simple labels and will, therefore, continue to use the terms "midrange" and "high end" until the marketplace renders them obsolete, even though they may no longer be the most accurate descriptions of array capabilities. As a practical matter, Gartner has chosen to publish separate midrange and high-end Critical Capabilities research to enable us to provide analyses of more hybrid arrays in a potentially more client-friendly format.

Critical Capabilities Use-Case Graphics

Figure 1. Vendors’ Product Scores for the Overall Use Case

Source: Gartner (November 2014)

Figure 2. Vendors’ Product Scores for the Consolidation Use Case

Source: Gartner (November 2014)

Figure 3. Vendors’ Product Scores for the OLTP Use Case

Source: Gartner (November 2014)

Figure 4. Vendors’ Product Scores for the Server Virtualization and VDI Use Case

Source: Gartner (November 2014)

Figure 5. Vendors’ Product Scores for the Analytics Use Case

Source: Gartner (November 2014)

Figure 6. Vendors’ Product Scores for the Cloud Use Case

Source: Gartner (November 2014)


Dell Compellent

Dell’s Compellent midrange storage arrays are the vendor’s solution of choice for larger customer deployments. The SC8000, the largest array in the Compellent series, is competitive in both performance and functionality. It can be integrated with the FS8600 network-attached storage (NAS) appliance to create a unified block-and-file storage system. Compellent array highlights include ease of use, excellent reporting and the ability to keep connections active even in the presence of a controller failure, which reduces its exposure to mismatches between path failover and load-balancing software.

With the May 2014 release of Storage Center Array Software 6.5, which is available as a no-charge upgrade for customers under a current support agreement, autotiering (aka data progression) has been enhanced to move logical unit number (LUN) pages in near real time, providing a more consistent performance experience across varying workloads. Compellent arrays can now be configured with separate read- and write-optimized caches, and Dell has extended the autotiering feature’s reach to include Fluid Cache SAN (server-side cache) to further improve performance/throughput and usable scalability.

Dell also offers specialized "Copilot" support services to reduce service calls, while improving storage management and utilization, as well as customer satisfaction. Compellent’s Perpetual Licensing software-pricing model enables customers to "grandfather" software one-time charges (OTCs), thereby lowering acquisition costs when upgrading the arrays. Although Dell can deliver block-and-file storage capabilities, a number of its established competitors are delivering more-seamless unified or multiprotocol (block and file) solutions.

Dot Hill AssuredSAN 4000/Pro 5000

Dot Hill’s AssuredSAN and AssuredSAN Pro series share a common technology base; serve the entry-level to middle segments of the midrange storage array market; and deliver competitive performance with software features such as thin provisioning, autotiering and remote replication. Both arrays’ reliability and microcode quality have benefited from Dot Hill’s OEM agreements with companies such as HP, Teradata and Quantum, which have sold its products under their brand names. The RealStor autotiering feature moves LUN pages in real time, using algorithms that limit overhead, while keeping the array responsive to changes in workload characteristics.

AssuredSAN has extremely competitive pricing and high customer satisfaction levels for products in its range, and its software licensing extends to the entire array — that is, it is priced by model, rather than capacity-based. Management ease of use continues to improve as the systems become more autonomic and better instrumented; however, these improvements are not enough to make it a competitive advantage. Dot Hill’s efforts to build a strong technology partner ecosystem have been hampered by its limited size and R&D resources, which make supporting new APIs under its own logo a challenge.

EMC VNX Series

The latest generation of the VNX storage arrays, launched in September 2013, incorporated a hardware refresh, as well as a firmware update that improved multitasking to exploit the multicore processors within the controllers, improve performance and reduce the overhead of value-added features. This enabled the VNX to scale performance of the front-end controllers, and to fully exploit back-end solid-state drives (SSDs) and hard-disk drives (HDDs). Virtualization is not available within the VNX models; however, it is provided via VPLEX, EMC’s network-based virtualization appliance. The VNX benefits from a large ecosystem and tight integration with VMware and RecoverPoint, which provides network-based local (concurrent local copy) and remote replication (continuous remote replication).

For new users, the Unisphere management graphical user interface (GUI) is still not as modern or as easy to use as those of newer array designs; however, the differences are small once the learning curve has been scaled. Gartner client feedback verifies that the new VNX system performs well and is a significant improvement over the previous generation. However, with the ubiquitous use of SSDs in storage arrays and the ability of many new startups to create 100,000-plus input/output operations per second (IOPS) arrays, performance in the general marketplace is no longer a key differentiator in its own right, but a scalability enabler. Customer satisfaction with EMC sales and support is above average.

Fujitsu Eternus DX500 S3/DX600 S3

The Eternus DX200 S3 through DX600 S3 series are performance- and feature-competitive storage arrays. All members of the DX series use the same software, licensing and administrative GUIs, and can replicate among different members of the series and with earlier DX series arrays. This use of common software across models makes upgrades among models simple and enables flexible disaster recovery deployments. Additional highlights include snapshots; thin provisioning; autotiering; multiprotocol support and quality of service (QoS); tight integration with VMware, Hyper-V and backup/restore solutions, such as Symantec and CommVault; reference designs; high availability; and easy-to-manage infrastructure as a service (IaaS) environments.

Performance numbers are publicly available and independently reviewed, which adds credence to Fujitsu’s performance claims. All the technical aspects, functions and features of this storage array series rate higher than average, manageability is good, and reliability is exceptionally good. In the near term, Fujitsu is developing primary data reduction and capabilities to integrate with OpenStack, but these will require three to six months after general availability before they are market-validated. The company is also developing the ability to provide a cloud gateway or interface with cloud APIs.

HDS HUS 100 Series

The Hitachi Data Systems (HDS) Hitachi Unified Storage (HUS) 100 series is a unified storage array that supports block, file and object capabilities, and it is renowned for its solid hardware engineering. HUS has a symmetric, active/active controller architecture, thus enabling LUN access through either controller, with equal performance for block-access applications. In addition, the array will maintain all active host connections through the operating (surviving) controller in case of a failure. Because block and file services are provided by physically separate components (albeit tied together via a unified management GUI), a consistent snapshot cannot be created across separate block and file storage resources.

HUS also supports reliable, nondisruptive microcode updates that can be done at a microprocessor core level. Among the recently introduced features are the ability to spin down/spin up redundant array of independent disks (RAID) groups based on input/output (I/O) traffic, controller-based encryption and the ability to migrate file-based data to the Amazon Web Services (AWS) cloud (which requires an additional license). Although Hitachi Command Suite offers unified administration of various Hitachi storage arrays, it needs to improve its ease of use and its support for older arrays. More specifically, it needs to provide tighter integration with HUS 100 for block, file and object storage management features. Lack of tighter integration with Microsoft Hyper-V through ODX support and the absence of support for kernel-based virtual machines (KVMs) are limiting broader adoption in the midmarket.

HP 3PAR StoreServ

The HP 3PAR StoreServ series is the centerpiece of HP’s disk storage strategy, providing a common management and software architecture across the entire product line. The 3PAR architecture now extends from the entry-level, two-node 7200 to the four-node 7400 to the StoreServ 10000 series, providing midrange users with simple-to-manage, seamless growth up to 1.2PB. Ongoing hardware and software enhancements are keeping the system competitive with other SAN storage systems in availability, scalability, performance, functionality, ecosystem support and ease of use. New capabilities that have been recently released include priority optimization software, which supports the setting of performance goals at a volume level, and the six nines guarantee for four-node configurations for customers with mission-critical service contracts.

The 3PAR 74×0 systems, configured with four or more nodes, have an inherent advantage in usable availability relative to dual-controller architectures, and this advantage has been aided by recent functional enhancements, such as persistent cache and persistent ports. Performance and throughput scale linearly as nodes are added to the system and the fine-grained thin provisioning (16KB chunks) enables users to take full advantage of SSDs and aggressively overcommit storage resources.

Offsetting these strengths is a lack of an integrated NAS capability, as well as a lack of data compression and deduplication. The 3PAR systems are not yet delivering the same RPOs over asynchronous distances as traditional high-end storage systems, because 3PAR asynchronous remote copy still transmits the difference between snaps.

Huawei OceanStor 5000/6000 Series

The Huawei OceanStor 5000/6000 series are scale-out storage systems that can scale up to eight controllers and natively support block and NAS protocols, without the use of a NAS gateway. Scaling to four controllers improves performance/throughput, as well as usable availability by shrinking the relative impact of a controller failure on system performance/throughput from a nominal 50% to a nominal 25% of normal performance. There are no signs of corner cutting on the printed circuit boards (PCBs), chassis and support equipment. Packaging and cabling layout show attention to detail and serviceability. Microcode updates, repair activities and capacity expansions are nondisruptive. Transparency and openness are provided via Storage Performance Council (SPC) benchmarks, which are used to position the OceanStor 5000/6000 series against its competitors.
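The nominal impact figures quoted above follow directly from spreading load evenly across controllers: losing one of N controllers removes roughly 1/N of aggregate performance. A quick sketch of that arithmetic (an idealized model, ignoring cache and failover effects):

```python
def nominal_failure_impact(controllers: int) -> float:
    """Fraction of aggregate performance lost when one controller fails,
    assuming load is spread evenly across all controllers."""
    if controllers < 1:
        raise ValueError("need at least one controller")
    return 1.0 / controllers

# Two controllers -> 50% impact; four -> 25%; eight -> 12.5%
for n in (2, 4, 8):
    print(f"{n} controllers: {nominal_failure_impact(n):.1%} nominal impact")
```

This is why adding controllers improves usable availability even when it adds no capacity: the blast radius of any single electronics failure shrinks proportionally.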

A checklist of storage efficiency and data protection features includes clones, snapshots, thin provisioning, autotiering, and synchronous and asynchronous remote copy. To improve the usability of asynchronous remote copy, the OceanStor series includes consistency group support. A similar checklist of supported software includes Windows, VMware, Hyper-V, KVM and various Linux implementations, including Red Hat and SUSE. Offsetting these strengths is the relative lack of integration with many backup/restore solutions, and management tools that are improving, but are not yet a competitive advantage, as well as a limited pool of experienced Huawei storage administrators.

IBM Storwize V7000

The IBM Storwize V7000 series is a unified storage array that incorporates technologies from many IBM products, including the System Storage SAN Volume Controller (SVC), General Parallel File System (GPFS) and XIV GUI design. This reuse of technologies provides interoperability with installed SVCs; a reduction in the V7000 learning curve for existing IBM customers; mature thin provisioning, autotiering, snapshot and replication features; storage virtualization capabilities; and a good GUI experience. Physical scalability has been increased to 1,056 disks.

The ability to virtualize third-party storage arrays and replicate to other V7000s or SVCs is a high-value differentiator that is particularly useful in rationalizing existing storage infrastructures and facilitating storage infrastructure refreshes. Customers seeking to improve their physical infrastructure agility can purchase V7000 software and install it in a virtual host, thus creating a storage node with all the features of a V7000 node. Offsetting these strengths is the inability of a logical volume to span node pairs, which adds message and data "forwarding" overhead when a LUN is accessed from a nonowning node pair, as well as limited integration between the NAS gateway built on the GPFS and back-end V7000 block storage, which adds management complexity.

NEC M-Series

Although well-known in its home market of Japan as a storage vendor, NEC actively began to market its midrange storage products overseas only during the past few years. The M-Series comes in four models (M110, M310, M510 and M710) that support SAS, Fibre Channel and Internet Small Computer System Interface (iSCSI) protocols. The product has simple, all-inclusive software pricing, and includes low-power hardware components to reduce power consumption. The product has high reliability and comprehensive data services, such as autotiering, thin provisioning, snapshots and replication. It has integration with VMware vSphere API for Array Integration (VAAI) and provides vSphere API for Storage Awareness (VASA) support. Customers have indicated that the manageability needs to be improved.

The product has QoS features for minimum/maximum I/O to protect critical application performance in multitenant environments. Autotiering can make tiering decisions on a daily basis only, rather than in real time. The M-Series doesn’t support data reduction technologies, such as compression or deduplication, and the array supports only block protocols and doesn’t offer unified storage capabilities. Customers that require NAS capabilities should use the NV Series as a gateway, which is available only in Japan.

NetApp E-Series

The E-Series is an entry-level and midrange block storage system that has been market-validated through OEM relationships and branded sales. Its architecture provides balanced performance (IOPS) and throughput (GB/sec), which makes it suitable for use with workloads that stream large amounts of sequential data, such as high-performance computing (HPC), big data, surveillance and video-streaming applications, as well as IOPS-centric workloads, such as database, email and mixed virtualized workloads. The relatively recent additions of SSDs managed as second-level cache, thin provisioning and data-at-rest encryption, coupled with aggressive pricing relative to other NetApp offerings, have increased the E-Series' appeal in supporting general-purpose workloads.

NetApp FAS8020/8040

The NetApp FAS8020/8040 series is the latest iteration of the Data Ontap OS-based FAS series. The V series is no longer offered as a separate series, but its functionality is now integrated in all FAS models and enabled by the FAS FlexArray software feature, which provides useful, heterogeneous array migration and administration features in one system. The steady pace of Clustered Data Ontap enhancements is eliminating the feature deficiencies that existed between 7-mode and c-mode in earlier versions of Data Ontap. Ongoing improvements are increasing performance, scalability and usable availability by decreasing hybrid pool disk rebuild times, as well as increasing maximum scale-out configurations. The FAS8020 series can scale out to 34PB for NAS and 96TB for SAN, and the FAS8040 can scale out to 51PB for NAS and 192TB for SAN.

With the release of Clustered Data Ontap 8.2, SnapVault, comprehensive support for VMware APIs, and SMB 3.0 and Offloaded Data Transfer (ODX) from Windows Server 2012 are supported. Even as the appeal of c-mode continues to improve relative to 7-mode, difficult conversions remain a significant obstacle for users to overcome. Clustered Data Ontap’s lack of a distributed file system limits the maximum performance of any file system to that of a single high-availability node pair. With the release of Clustered Data Ontap 8.3, MetroCluster is now supported, but SnapLock remains a future capability.

Nimble Storage CS-Series

Nimble customers rate three key differentiators as competitive advantages:

  • Proactive support via InfoSight — a cloud-based support offering
  • A relatively low purchase cost
  • A data layout designed to effectively leverage the different strengths of SSD and HDD

With the release of four-node cluster support, a 3Q14 technology refresh and microcode tweaks, Nimble has extended these value propositions upmarket. The InfoSight offering helps users optimize their configurations by suggesting configuration changes based on the real-time analysis of anonymous installed-base-wide data. Customer experiences have been positive, and Nimble still provides low-cost storage with relatively high performance by placing data on the correct storage media. Another key factor that is becoming more important is the value of community. Nimble has managed to create a successful information-sharing community of customers via NimbleConnect, which can be used to swap hints and tips.

This type of added value via transparency is quite rare, because it opens up positive and negative information sharing among users. Nimble is being adventurous and bold by taking these steps. If this can be successfully continued, and Nimble becomes a company renowned for trust and openness, then this will be a significant soft-product advantage that cannot be created or emulated overnight. Offsetting these strengths is the lack of NAS support and deduplication, which can further improve storage efficiency, as well as a limited, but growing, ecosystem.

Oracle Sun ZFS Storage Appliance

The ZS3-2 and ZS3-4 Storage Appliances provide all the features expected of a modern, unified storage array. High availability, performance/throughput, scale, tight integration with Oracle platforms and aggressive pricing are key ZFS appliance differentiators. Even though the ZS3 systems can provide block storage, Network File System (NFS) and Common Internet File System (CIFS) support, Oracle positions these arrays as NAS-based storage. Oracle Database customers gain extra performance and storage utilization benefits due to Oracle-specific protocol enhancements and the support of hybrid columnar compression, which is supported only when Oracle databases are attached to Oracle storage arrays.

The design of the system is less than 10 years old and is unconstrained by historic HDD-oriented design considerations. Due to its memory management design, which includes processor memory and separate read and write caches, new features can be added quickly. This is apparent in the incorporation of detailed instrumentation, double-bit error checking and correction, multicore and capacity scaling, SSD exploitation, pooling, snapshots, compression, encryption and deduplication. These capabilities were design objectives built in at the system's inception.

Tegile IntelliFlash

Tegile’s IntelliFlash hybrid, unified storage array is a scale-up architecture that supports block and NAS protocols, storage efficiency features, snapshots and remote replication, and the ease-of-use features expected of modern "clean sheet" designs. IntelliFlash arrays implement an active/active controller design to fully exploit controller performance and use hardware resources. This also makes array-wide, in-line compression and deduplication practical, which increases storage utilization and reduces the cost per TB. Application-aware provisioning templates improve staff productivity, while reducing the probability of misconfiguring the array.

SSDs are managed as second-level cache, which enables IntelliFlash arrays to respond quickly to changes in workloads, simplifies management and reduces the likelihood that incorrect policies will adversely affect overall array performance. Thin provisioning and separate read-and-write cache promise a more consistent performance experience by creating wide RAID stripes and matching cache capacity to application needs, respectively. Deep instrumentation and reporting tools simplify performance troubleshooting when system performance issues arise. To date, synchronous replication is unavailable; users must use asynchronous replication with IntelliFlash arrays.

Tintri VMstore

Tintri is a venture-backed startup that started shipping products in 2011. Tintri’s VMstore product is based on a dual-controller, active-passive architecture that is focused on delivering VM-aware storage. VMstore is a hybrid array that consists of SAS SSDs and 7,200-rpm SAS HDDs, where writes are compressed and deduplicated before being written, and virtual machine (VM) I/O traffic is monitored to serve data as much as possible from the SSD tier. Tintri is primarily focused on virtual workloads and allows administrators to provision VMs in a simpler manner, with the added ability to set QoS at a VM level. Cloning, snapshots and asynchronous replication features also function at a VM level. The product has thin provisioning, deduplication and compression, all done in-line. Gartner inquiries reveal that the product is easy to set up and manage.

Tintri supports NFS only, and most deployments have been on the VMware platform, although it made support for KVM available earlier this year. Support for SMB 3.0 and Hyper-V has been announced, but both are in public beta. Storage capacity per array is limited to a modest 78TB at this point, although in-line data reduction features can extend usable capacity. Although Tintri supports management of as many as 32 arrays from a single interface, it lacks the scale-out architecture required for easier capacity upgrades and performance balancing.

X-IO Technologies ISE Storage Systems

ISE 740 hybrid and ISE 240 storage systems are successors to the Hyper ISE and ISE. They are dual-controller arrays with the unique ability to repair most HDD failures in situ (i.e., in place). ISE SSDs are not repairable in situ, but field experience has shown this to be a nonissue, because each ISE is equipped with spare SSD capacity, and each SSD is built using eMLC flash and equipped with enough wear-leveling capacity to outlast any planned service life. ISE 240 arrays are configured with HDDs only, whereas ISE 740 is configured with a mix of SSDs and HDDs. The ability to take one of the disk’s platter surfaces offline, rather than an entire HDD, reduces rebuild times, insulates the user from field engineering mistakes and makes it practical to offer a standard, five-year warranty on both offerings.

Like their predecessors, the ISE 240 and ISE 740 are expected to earn a reputation for delivering consistent high availability and performance with minimal management attention, because they are essentially technical refreshes of their predecessors. Although they retain the core design, the newer controllers have increased processing power and are now capable of multiple protocols (both 8x8Gb/s FC and 4x40Gb/s iSCSI versions are available). This consistency is largely attributable to the building-block approach taken by X-IO, which limits the maximum capacity of any ISE to no more than 40 SSDs and/or HDDs, and its internally developed Continuous Adaptive Data Placement (CADP) algorithm, which responds in near real time to changes in workload profiles by moving data between the SSD and HDD tiers. Both ISE models use the same management tools, have the same rack form factor (3U) and are energy-efficient.

Offsetting these strengths is the ISE’s reliance on higher-level software — OS/hypervisor/database management system (DBMS). In addition, it lacks storage efficiency and data protection features, such as thin provisioning, snapshots and asynchronous replication. The ISE ecosystem is small, and is limited to VMware, Citrix, Hyper-V, HP-UX, AIX, Red Hat and SUSE Linux, Symantec Storage Foundation and Windows Server.


The arrays evaluated in this research include scale-up, scale-out and unified hybrid storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, and forecast growth rates and asset management strategies. Midrange arrays exhibiting scale-out characteristics can also satisfy high-end inclusion criteria when configured with four or more controllers and multiple disk shelves. Whether these differences in availability are enough to affect infrastructure design and operational procedures will vary by user environment. They will also be influenced by other considerations, such as downtime costs, lost opportunity costs and the maturity of the end-user change control procedures (e.g., hardware, software, procedures and scripting) that directly affect availability.

Product/Service Class Definition

Architectural Definitions

The following criteria classify storage array architectures by their externally visible characteristics, rather than by vendor claims or other nonproduct criteria.

Scale-Up Architectures

  • Front-end connectivity, internal bandwidth and back-end capacity scale independently of each other.
  • Logical volumes, files or objects are fragmented and spread across user-defined collections of disks, such as disk pools, disk groups or RAID sets.
  • Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane and/or interconnect constraints.

Scale-Out Architectures

  • Capacity, performance, throughput and connectivity scale with the number of nodes in the system.
  • Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failures and improve performance.
  • Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.

Hybrid Architectures

  • Incorporate SSD, Flash, HDD, compression and/or deduplication into their basic design
  • Can be implemented as scale-up or scale-out arrays
  • Can support one or more protocols, such as block or file, and/or object protocols, including Fibre Channel, iSCSI, NFS, Server Message Block (SMB; aka CIFS), REST, FCoE and InfiniBand

Including compression and deduplication in the initial system design often results in both having a positive impact on system performance and throughput, with simplified management attributable, at least in part, to better instrumentation and more-intelligent cache management algorithms that are compression- and deduplication-aware.

Unified Architectures

  • Can simultaneously support multiple block, file, and/or object protocols, including Fibre Channel, iSCSI, NFS, SMB (aka CIFS), REST, FCoE and InfiniBand
  • May include gateway and integrated data flow implementations
  • Can be implemented as scale-up or scale-out arrays

Gateway-style implementations provision NAS and object storage protocols with storage area network (SAN)-attached block storage. Gateway implementations run separate NAS, object and SAN microcode loads on either virtualized or physical servers and, consequently, have different thin-provisioning, autotiering, snapshot and remote-copy features that are not interoperable among different protocols. By contrast, integrated implementations use the same thin-provisioning, autotiering, snapshot and remote-copy primitives independent of protocol, and can dynamically allocate controller cycles to protocols on an as-needed or prioritized basis.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture’s strengths and weaknesses, as well as an understanding of the workload requirements (see Table 1).

Table 1. Strengths and Weaknesses of the Storage Architectures
Scale-Up

  Strengths:
  • Mature, reliable and cost-competitive architectures
  • Large ecosystems
  • Independently upgrade host connections and back-end capacity
  • May offer shorter recovery point objectives (RPOs) over asynchronous distances

  Weaknesses:
  • Performance and internal bandwidth are fixed, and do not scale with capacity
  • Limited computing power may result in efficiency and data protection feature usage negatively affecting performance
  • Electronics failures and microcode updates may be high-impact events
  • Forklift upgrade

Scale-Out

  Strengths:
  • IOPS and Gbps scale with capacity
  • Greater fault tolerance than scale-up architectures
  • Nondisruptive load balancing

  Weaknesses:
  • High electronics costs relative to back-end storage costs

Hybrid

  Strengths:
  • Efficient use of flash
  • Compression and deduplication are performance-neutral to positive
  • Consistent performance experience with minimal tuning
  • Excellent price/performance
  • Low environmental footprint

  Weaknesses:
  • Relatively immature technology
  • Limited ecosystem and protocol support

Unified

  Strengths:
  • Maximal deployment flexibility
  • Comprehensive storage-efficiency features

  Weaknesses:
  • Performance may vary by protocol (block versus NAS)

Source: Gartner (November 2014)

Critical Capabilities Definition


Manageability

This refers to the automation, management, monitoring, and reporting tools and programs supported by the platform. These tools and programs can include single-pane management consoles, as well as monitoring and reporting tools.

Such tools are designed to support personnel in seamlessly managing systems, monitoring system usage and efficiencies, and anticipating and correcting system alarms and fault conditions before or soon after they occur.


RAS

Reliability, availability and serviceability (RAS) is a design philosophy that consistently delivers high availability by building systems with reliable components, "derating" components to increase their mean time between failures, and designing system/clocking to tolerate marginal components.

RAS also supports hardware and microcode designs that minimize the number of critical failure modes in the system, serviceability features that enable nondisruptive microcode updates, diagnostics that minimize human errors when troubleshooting the system and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault isolation techniques, built-in protection against data corruption and other techniques (such as snapshots and replication) to meet customer RPOs and recovery time objectives (RTOs).


Performance

This collective term is often used to describe IOPS, bandwidth (MB/sec) and response times (milliseconds per I/O) that are visible to attached servers. In well-designed systems, potential performance bottlenecks are encountered at the same time when supporting common workload profiles.

When comparing systems, users are reminded that performance is more of a scalability enabler than a differentiator in its own right.

Snapshot and Replication

These are data protection features that protect against data corruption problems caused by human and software errors, and technology and site failures, respectively. Snapshots can also address backup window issues and minimize the impact of backups on production workloads.


Scalability

This refers to the ability of the storage system to grow capacity, as well as performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs.


Ecosystem

This refers to the ability of the platform to integrate with and support third-party independent software vendor (ISV) applications, such as databases, backup/archiving products and management tools, as well as various hypervisor and desktop virtualization offerings.

Multitenancy and Security

This refers to the ability of a storage system to support a diverse variety of workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.

Storage Efficiency

This refers to raw versus usable capacity; efficiency of data protection algorithms; and a platform’s ability to support storage efficiency technologies, such as compression, deduplication, thin provisioning and autotiering, to improve usage rates, while reducing storage acquisition costs and TCO.

Use Cases


Overall

The overall use case is a generalized usage scenario and does not represent the ways specific users will utilize or deploy technologies or services in their enterprises.


Consolidation

This use case simplifies storage management and disaster recovery, and improves economies of scale by consolidating multiple, potentially dissimilar storage systems into fewer, larger systems.

RAS, performance, scalability, and multitenancy and security are heavily weighted selection criteria, because the system becomes a shared resource, which magnifies the effects of outages and performance bottlenecks.


OLTP

This use case is associated with business-critical applications (e.g., DBMSs) that need 24/7 availability and subsecond transaction responses.

Hence, the greatest emphasis is on RAS and performance features, followed by snapshots and replication, which enable rapid recovery from data corruption problems and technology or site failures. Manageability, scalability and storage efficiency are important, because they enable the storage system to scale with data growth, while staying within budget constraints.

Server Virtualization and VDI

This use case encompasses business-critical applications, back-office and batch workloads, and development.

The need to deliver I/O response times of 2 milliseconds (ms) or less to large numbers of VMs or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, heavily weights performance and storage efficiency, followed closely by multitenancy and security. The heavy reliance on SSDs, autotiering, QoS features that prioritize and throttle I/O, and disaster recovery solutions that are tightly integrated with virtualization software also makes RAS and manageability important criteria.


Analytics

This applies to storage consumed by big data applications using map/reduce technology, and packaged business intelligence (BI) applications for domain or business problems.

Performance (more specifically, bandwidth), RAS and snapshot capabilities are critical to success: RAS features to tolerate disk failures; snapshots to facilitate check-pointing, long-running applications; and bandwidth to reduce time to insight (see definition in "Hype Cycle for Analytic Applications, 2013").


Cloud

This use case applies to storage arrays in private, hybrid and public cloud infrastructures, and how they meet the cost, scale, manageability and performance requirements of those environments.

Hence, scalability, multitenancy and resiliency are important selection considerations, and are highly weighted.

Inclusion Criteria

This research evaluates the midrange, general-purpose storage systems supporting the use cases assessed in Table 2.

Table 2. Weighting for Critical Capabilities in Use Cases
Critical Capabilities Overall Consolidation OLTP Server Virtualization and VDI Analytics Cloud
Manageability 11% 10% 10% 10% 10% 15%
RAS 17% 18% 25% 12% 15% 15%
Performance 18% 15% 25% 20% 20% 10%
Snapshot and Replication 11% 10% 10% 9% 15% 10%
Scalability 13% 15% 10% 9% 15% 20%
Ecosystem 5% 5% 5% 5% 5% 5%
Multitenancy and Security 12% 15% 5% 15% 10% 15%
Storage Efficiency 13% 12% 10% 20% 10% 10%
Total 100% 100% 100% 100% 100% 100%
As of November 2014

Source: Gartner (November 2014)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighed in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

The 16 arrays selected for inclusion in this research are offered by the vendors discussed in Gartner’s "Magic Quadrant for General-Purpose Disk Arrays," which includes arrays that support block and/or file protocols. Here are the criteria that must be met for classification as a midrange storage array:

  • Single electronics failures:
    • Are not single points of failure (SPOFs)
    • Do not result in loss of data integrity or accessibility
    • Can affect more than 25% of the array’s performance/throughput
    • Can be visible to the SAN and connected application servers
  • Microcode updates:
    • Can be disruptive
    • Can affect more than 25% of the array’s performance/throughput
  • Repair activities and capacity upgrades:
    • Can be disruptive
  • Have an average selling price of more than $24,999

The criteria for qualification as a high-end array are more severe than those for midrange arrays. For this reason, arrays that satisfy the high-end criteria also satisfy the midrange criteria, but are included in the high-end Critical Capabilities research, rather than here.

For the reader’s convenience, high-end array criteria are shown below:

  • Single electronics failures:
    • Are invisible to the SAN and connected application servers
    • Affect less than 25% of the array’s performance/throughput
  • Microcode updates:
    • Are nondisruptive and can be nondisruptively backed out
    • Affect less than 25% of the array’s performance/throughput
  • Repair activities and capacity upgrades:
    • Are invisible to the SAN and connected application servers
    • Affect less than 50% of the array’s performance/throughput
  • Support dynamic load balancing
  • Support local replication and remote replication
  • Typical high-end disk array average selling prices (ASPs) exceed $250,000

Table 3. Product/Service Rating on Critical Capabilities
Product or Service Ratings Dell Compellent Dot Hill AssuredSAN 4000/Pro 5000 EMC VNX Series Fujitsu Eternus DX500 S3/DX600 S3 HDS HUS 100 Series HP 3PAR StoreServ Huawei OceanStor 5000/6000 Series IBM Storwize V7000 NEC M-Series NetApp E-Series NetApp FAS8020/8040 Nimble Storage CS-Series Oracle Sun ZFS Storage Appliance Tegile IntelliFlash Tintri VMstore X-IO Technologies ISE Storage Systems
Manageability 3.8 3.3 3.5 3.3 3.3 4.0 3.2 3.3 3.0 3.3 4.0 4.5 3.5 3.8 4.2 3.7
RAS 3.5 3.7 3.5 4.0 4.2 4.2 3.7 3.3 3.8 3.5 3.8 3.7 3.3 3.5 3.7 4.8
Performance 3.7 3.5 3.5 3.7 3.2 3.8 3.7 3.5 3.2 3.5 3.7 3.7 3.7 3.8 3.7 3.8
Snapshot and Replication 3.5 3.0 3.5 3.7 3.7 3.7 3.7 3.8 3.3 3.3 3.8 3.3 3.7 3.5 3.7 1.8
Scalability 3.3 2.2 3.7 3.3 3.3 3.2 4.0 3.3 3.7 3.0 4.0 3.5 4.0 3.0 2.8 3.0
Ecosystem 3.7 3.3 4.2 3.8 4.0 4.0 3.5 3.7 3.2 3.2 4.2 3.3 2.7 3.3 2.7 3.2
Multitenancy and Security 3.2 2.7 3.7 3.2 3.7 4.2 2.8 3.7 2.8 3.3 4.2 3.2 3.3 3.3 3.3 2.7
Storage Efficiency 4.0 3.3 3.8 3.3 3.2 3.7 3.2 4.0 2.8 3.2 4.0 3.7 3.7 4.3 4.0 2.3
As of November 2014

Source: Gartner (November 2014)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.

Table 4. Product Score in Use Cases
Use Cases Dell Compellent Dot Hill AssuredSAN 4000/Pro 5000 EMC VNX Series Fujitsu Eternus DX500 S3/DX600 S3 HDS HUS 100 Series HP 3PAR StoreServ Huawei OceanStor 5000/6000 Series IBM Storwize V7000 NEC M-Series NetApp E-Series NetApp FAS8020/8040 Nimble Storage CS-Series Oracle Sun ZFS Storage Appliance Tegile IntelliFlash Tintri VMstore X-IO Technologies ISE Storage Systems
Overall 3.58 3.16 3.62 3.55 3.55 3.85 3.50 3.55 3.26 3.31 3.92 3.64 3.55 3.59 3.58 3.28
Consolidation 3.56 3.12 3.63 3.54 3.57 3.85 3.49 3.54 3.27 3.30 3.94 3.62 3.54 3.56 3.54 3.28
OLTP 3.61 3.28 3.60 3.64 3.59 3.87 3.58 3.51 3.33 3.36 3.88 3.68 3.54 3.62 3.62 3.53
Server Virtualization and VDI 3.62 3.17 3.64 3.51 3.50 3.86 3.43 3.61 3.17 3.31 3.94 3.63 3.55 3.67 3.62 3.16
Analytics 3.57 3.13 3.62 3.56 3.54 3.82 3.55 3.55 3.28 3.31 3.91 3.62 3.58 3.57 3.56 3.23
Cloud 3.54 3.04 3.64 3.50 3.55 3.82 3.49 3.52 3.27 3.28 3.96 3.65 3.56 3.52 3.52 3.23
As of November 2014

Source: Gartner (November 2014)

To determine an overall score for each product/service in the use cases, multiply the ratings in Table 3 by the weightings shown in Table 2.
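As a concrete illustration, the sketch below recomputes two of the published scores as weighted sums, using the "Overall" and "Cloud" weighting columns from Table 2 and the HP 3PAR StoreServ ratings row from Table 3:

```python
# Use-case scores are the weighted sum of capability ratings (Tables 2 and 3).
# Capability order: Manageability, RAS, Performance, Snapshot and Replication,
# Scalability, Ecosystem, Multitenancy and Security, Storage Efficiency.

# Weightings from Table 2, expressed as fractions of 100%.
WEIGHTS = {
    "Overall": [0.11, 0.17, 0.18, 0.11, 0.13, 0.05, 0.12, 0.13],
    "Cloud":   [0.15, 0.15, 0.10, 0.10, 0.20, 0.05, 0.15, 0.10],
}

# HP 3PAR StoreServ ratings from Table 3, in the same capability order.
HP_3PAR_RATINGS = [4.0, 4.2, 3.8, 3.7, 3.2, 4.0, 4.2, 3.7]

def use_case_score(ratings, weights):
    """Weighted sum of capability ratings; weights are assumed to total 1.0."""
    return sum(r * w for r, w in zip(ratings, weights))

print(round(use_case_score(HP_3PAR_RATINGS, WEIGHTS["Overall"]), 2))  # 3.85, matching Table 4
print(round(use_case_score(HP_3PAR_RATINGS, WEIGHTS["Cloud"]), 2))    # 3.82, matching Table 4
```

The same calculation, applied per product and per use case, reproduces every cell of Table 4 from Tables 2 and 3 (to two decimal places).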

Gartner Critical Capabilities for General-Purpose High-End Storage Arrays – 20 November 2014

Critical Capabilities for General-Purpose High-End Storage Arrays

Comparison of 12 high-end storage arrays
This is a Press Release edited on 2015.02.23

Critical Capabilities for General-Purpose, High-End Storage Arrays (20 November 2014, ID: G00263130) is a report from analysts Valdis Filks, Stanley Zaffos and Roger W. Cox of Gartner, Inc.

Here, we assess 12 high-end storage arrays across high-impact use cases and quantify the products against the critical capabilities of interest to infrastructure and operations (I&O) leaders. When choosing storage products, I&O leaders should look beyond technical attributes, incumbency and vendor/product reputation.


Key Findings

  • With the inclusion of SSDs in arrays, performance is no longer a differentiator in its own right, but a scalability enabler that improves operational and financial efficiency by facilitating storage consolidation.
  • Product differentiation is created primarily by differences in architecture, software functionality, data flow, support and microcode quality, rather than components and packaging.
  • Clustered, scale-out, and federated storage architectures and products can achieve levels of scale, performance, reliability, serviceability and availability comparable to traditional, scale-up high-end arrays.
  • The feature sets of high-end storage arrays adapt slowly, and the older systems are incapable of offering data reduction, virtualization and unified protocol support.

Recommendations
  • Move beyond technical attributes to include vendor service and support capabilities, as well as acquisition and ownership costs, when making your high-end storage array buying decisions.
  • Don’t always use the ingrained, dominant considerations of incumbency, vendor and product reputations when choosing high-end storage solutions.
  • Vary the ratios of SSDs, SAS and SATA hard-disk drives in the storage array, and limit maximum configurations based on system performance to ensure that SLAs are met during the planned service life of the system.
  • Select disk arrays based on the weighting and criteria created by your IT department to meet your organizational or business objectives, rather than choosing those with the most features or highest overall scores.

What You Need to Know
Superior nondisruptive serviceability and data protection characterize high-end arrays. They are the visible metrics that differentiate high-end array models from other arrays, although the gap is closing. The software architectures used in many high-end storage arrays can trace their lineage back 20 years or more.

Although this maturity delivers HA and broad ecosystem support, it is also becoming a hindrance with respect to flexibility, adaptability and delays to the introduction of new features, compared with newer designs. Administrative and management interfaces are often more complicated when using arrays involving older software designs, no matter how much the internal structures are hidden or abstracted. The ability of older systems to provide unified storage protocols, data reduction and detailed performance instrumentation is also limited, because the original software was not designed with these capabilities as design objectives.

Gartner expects that, within the next four years, arrays using legacy software will need major re-engineering to remain competitive against newer systems that achieve high-end status, as well as hybrid storage solutions that use solid-state technologies to improve performance, storage efficiency and availability. In this research, the differences in aggregated scores among the arrays are minimal. Therefore, clients are advised to look at the individual capabilities that are important to them, rather than the overall score.

Because array differentiation has decreased, the real challenge of performing a successful storage infrastructure upgrade is not designing an infrastructure upgrade that works, but designing one that optimizes agility and minimizes TCO.

Another practical consideration is that choosing a suboptimal solution is likely to have only a moderate impact on deployment and TCO for the following reasons:

  • Product advantages are usually short-lived and temporary. Gartner refers to this phenomenon as the ‘compression of product differentiation.’
  • Most clients report that differences in management and monitoring tools, as well as ecosystem support among various vendors’ offerings, are not enough to change staffing requirements.
  • Storage TCO, although growing, still accounts for less than 10% (6.5% in 2013) of most IT budgets.


The arrays evaluated in this research include scale-up, scale-out, hybrid and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, and forecast growth rates and asset management strategies.

Midrange arrays with scale-out characteristics can satisfy the HA criteria when configured with four or more controllers and multiple disk shelves. Whether these differences in availability are enough to affect infrastructure design and operational procedures will vary by user environment, and will also be influenced by other considerations, such as host system/capacity scaling, downtime costs, lost opportunity costs and the maturity of the end-user change control procedures (e.g., hardware, software, procedures and scripting), which directly affect availability.

Critical Capabilities Use-Case Graphics
The weighted capabilities scores for all use cases are displayed as components of the overall score (see Figures 1 through 6).

Figure 1. Vendors’ Product Scores for the Overall Use Case

Figure 2. Vendors’ Product Scores for the Consolidation Use Case

Figure 3. Vendors’ Product Scores for the OLTP Use Case

Figure 4. Vendors’ Product Scores for the Server Virtualization and VDI Use Case

Figure 5. Vendors’ Product Scores for the Analytics Use Case

Figure 6. Vendors’ Product Scores for the Cloud Use Case

(Source: Gartner, November 2014)


DataDirect Networks SFA12K
The SFA12KX, the newest member of the SFA12K family, increases SFA12K performance/throughput via a hardware refresh and through software improvements. Like other members of the SFA12K family, it remains a dual-controller array that, with the exception of an in-storage processing capability, prioritizes scalability, performance/throughput and availability over value-added functionality, such as local and remote replication, thin provisioning and autotiering. These priorities align better with the needs of the high-end, HPC market than with general-purpose IT environments. Further enhancing the appeal of the SFA12KX in large environments is dense packaging: 84 HDDs/4U or 5PB/rack, and GridScaler and ExaScaler gateways that support parallel file systems, based on IBM’s GPFS or the open-source Lustre parallel file system.

The combination of high bandwidth and high areal densities has made the SFA12K a popular array in the HPC, cloud, surveillance and media markets that prioritize automatic block alignment and bandwidth over IO/s. The SFA12K’s high areal density also makes it an attractive repository for big data and inactive data, particularly as a backup target for backup solutions doing their own compression and/or deduplication. Offsetting these strengths are limited ecosystem support beyond parallel file systems and backup/restore products; a lack of vSphere API for Array Integration (VAAI) support, which limits its appeal as VMware storage; a lack of zero-bit detection, which limits its appeal with applications such as Exchange and Oracle Database; and limited QoS and security features, which could limit its appeal in multitenancy environments.
EMC VMAX
The maturity of the VMAX 10K, 20K and 40K hardware, combined with the Enginuity software and wide ecosystem support, provides proven reliability and stability. However, the need for backward compatibility has complicated the development of new functions, such as data reduction. The VMAX3 has not yet had time to be market-validated, because it only became available on 26 September 2014. Even with new controllers, promised Hypermax software updates and a new InfiniBand (IB) internal interconnect, mainframe support is not available, nor is the little-used FCoE protocol. Nevertheless, with new functions, such as built-in VPLEX, RecoverPoint replication, virtual thin provisioning and more processing power, customers should move quickly to the VMAX3, because it has the potential to develop further.

The new VMAX 100K, 200K and 400K arrays still lack independent benchmark results, which, in some cases, leads users to delay deploying a new feature into production environments until the feature’s performance has been fully profiled, and its impact on native performance is fully understood. The lack of independent benchmark results has also led to misunderstandings regarding the configuration of back-end SSDs and HDDs into RAID groups, which have required users to add capacity to enable the use of more-expensive 3D+1P RAID groups to achieve needed performance levels, rather than larger, more-economical 7D+1P RAID groups.
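The RAID-group economics mentioned above come down to parity overhead: in a D+P group, only D of every D+P drives hold data, so a 3D+1P group yields 75% of raw capacity as usable space versus 87.5% for 7D+1P. A quick sketch of that arithmetic (the 100TB pool size is an illustrative assumption):

```python
def usable_fraction(data_drives, parity_drives):
    """Fraction of raw capacity left after parity in a D+P RAID group."""
    return data_drives / (data_drives + parity_drives)

raw_tb = 100.0  # hypothetical raw capacity placed in each RAID-group layout
for d, p in [(3, 1), (7, 1)]:
    print(f"{d}D+{p}P: {usable_fraction(d, p) * raw_tb:.1f} TB usable")
# 3D+1P: 75.0 TB usable
# 7D+1P: 87.5 TB usable
```

This is why narrower groups chosen for performance force users to buy extra capacity to reach the same usable total.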

EMC’s expansion into software-defined storage (SDS; aka ViPR), network-based replication (aka RecoverPoint) and network-based virtualization (aka VPLEX) suggests that new VMAX users should evaluate the use of these products, in addition to VMAX-based features, when creating their storage infrastructure and operational visions.

Fujitsu Eternus DX8700 S2
The DX8700 S2 series is a mature, high-end array with a reputation for robust engineering and reliability, with redundant RAID groups spanning enclosures and redundant controller failover features. Within the high-end segment, Fujitsu offers simple, unlimited software licensing on a per-controller basis; therefore, customers do not need to spend more as they increase the capacity of the arrays. The DX8700 S2 series was updated with a new software level that improves performance and QoS; the QoS feature not only manages latency and bandwidth, but also integrates with the DX8700 Automated Storage Tiering to move data to the required storage tier to meet QoS targets. It is a scale-out array, providing up to eight controllers.

The DX8700 S2 has offered massive array of idle disks (MAID), or disk spin-down, for years. Even though this feature has been implemented successfully, without any reported problems, it has not gained broad market acceptance. The same Eternus SF management software is used across the entire DX product line, from the entry level to the high end. This simplifies manageability, migration and replication among Fujitsu storage arrays. Customer feedback is positive concerning the performance, reliability, support and serviceability of the DX8700 S2, and Gartner clients report that the DX8700 S2 RAID rebuild times are faster than those of comparable systems. The management interface is geared toward storage experts, but is simplified in Eternus SF V16, thereby reducing training costs and improving storage administrator productivity. To enable workflow integration with SDS platforms, Fujitsu is working closely with the OpenStack project.
HDS HUS VM
The HDS Hitachi Unified Storage (HUS) VM is an entry-level version of the Virtual Storage Platform (VSP) series. Similar to its larger VSP siblings, it is built around Hitachi’s cross-bar switches, has the same functionality as the VSP, can replicate to HUS VM or VSP systems using TrueCopy or Hitachi Universal Replicator (HUR), and uses the same management tools as the VSP. Because it shares APIs with the VSP, it has the same ecosystem support; however, it does not scale to the same storage capacity levels as the HDS VSP G1000. Similarly, it does not provide data reduction features. Hardware reliability and microcode quality are good; this increases the appeal of its Universal Volume Manager (UVM), which enables the HUS VM to virtualize third-party storage systems.

HDS offers performance transparency with its arrays, with SPC-1 performance and throughput benchmark results available. Client feedback indicates that the use of thin provisioning generally improves performance and that autotiering has little to no impact on array performance. Snapshots have a measurably negative, but entirely acceptable, impact on performance and throughput. Offsetting these strengths are the lack of native iSCSI and 10GbE support, which is particularly useful for remote replication, as well as relatively slow integration with server virtualization, database, shareware and backup offerings. Integration with the Hitachi NAS platform adds iSCSI, CIFS and NFS protocol support for users that need more than just FC support.
HDS VSP G1000
The VSP has built its market appeal on reliability, quality microcode and solid performance, as well as its ability to virtualize third-party storage systems using UVM. The latest VSP G1000 was launched in April 2014, with more capacity and performance/throughput achieved via faster controllers and improved data flows. Configuration flexibility has been improved by a repackaging of the hardware that enables controllers to be housed in a separate rack. VSP packaging also supports the addition of capacity-only nodes that can be separated from the controllers. It provides a large variety of features, such as unified storage, heterogeneous storage virtualization and content management via integration with HCAP. Data reduction features, such as compression, are not supported. Performance needs dictate how each redundant node’s front- and back-end ports, cache and back-end capacity are configured, and each can be sized independently. However, Hitachi Accelerated Flash can be used to accelerate performance in hybrid configurations. Additional feature highlights include thin provisioning, autotiering, volume cloning and space-efficient snapshots, synchronous and asynchronous replication, and three-site replication topologies.

The VSP asynchronous replication (aka HUR) is built around the concept of journal files stored on disk, which makes HUR tolerant of communication line failures, allows users to trade off bandwidth availability against RPOs and reduces the demands placed on cache. It also offers a data flow that enables the remote VSP to pull writes to protected volumes on the DR site, rather than having the production-side VSP push these writes to the DR site. Pulling writes, rather than pushing them, reduces the impact of HUR on the VSP systems and reduces bandwidth requirements, which lowers costs. Offsetting these strengths are the lack of native iSCSI and 10GbE support, as well as relatively slow integration with server virtualization, database, shareware and backup offerings.

HP 3PAR StoreServ 10000
The 3PAR StoreServ 10000 is HP’s preferred, go-to, high-end storage system for open-system infrastructures that require the highest levels of performance and resiliency. Scalable from two to eight controller nodes, the 3PAR StoreServ 10000 requires a minimum of four controller nodes to satisfy Gartner’s high-end, general-purpose storage system definition. It is competitive with small and midsize, traditional, frame-based, high-end storage arrays, particularly with regard to storage efficiency features and ease of use, and HP continues to make material R&D investments to enhance 3PAR StoreServ 10000 availability, performance, capacity scalability and security capabilities. Configuring 3PAR StoreServ storage arrays with four or more nodes limits the effects of high-impact electronics failures to no more than 25% of the system’s performance and throughput. The impact of electronics failures is further reduced by 3PAR’s Persistent Cache and Persistent Port failover features, which enable the caches in surviving nodes to stay in write-back mode and active host connections to remain online.

Resiliency features include three-site replication topologies, as well as Peer Persistence, which enables transparent failover and failback between two StoreServ 10000 systems located within metropolitan distances. However, offsetting the benefit of these functions are the relatively long RPOs that result from 3PAR’s asynchronous remote copy actually sending the difference between two snaps to faraway DR sites; microcode updates that can be time-consuming, because the time required is proportional to the number of nodes in the system; and a relatively large footprint caused by the use of four-disk magazines, instead of more-dense packaging schemes.
HP XP7
Sourced from Hitachi Ltd. under joint technology and OEM agreements, the HP XP7 is the next incremental evolution of the high-end, frame-based XP-Series that HP has been selling since 1999. Engineered to be deployed in support of applications that require the highest levels of resiliency and performance, the HP XP7 features increased capacity scalability and performance over its predecessor, the HP XP P9500, while leveraging the broad array of proven HP XP-Series data management software. Beyond the expected capacity and performance improvements, two notable enhancements are the new Active-Active HA and Active-Active data mobility functions, which elevate storage system and data center availability to higher levels and provide nondisruptive, transparent application mobility among host servers at the same or different sites. The HP XP7 shares a common technology base with the Hitachi/HDS VSP G1000, and HP differentiates the XP7 in the areas of broader integration and testing with the full HP portfolio ecosystem and the availability of Metro Cluster for HP-UX, as well as by restricting the ability to replicate between XP7 and HDS VSPs.

Positioned in HP’s traditional storage portfolio, the primary mission of the XP7 is to serve as an upgrade platform for the XP-Series installed base, as well as to address opportunities involving IBM mainframe and storage for HP NonStop infrastructures. Since HP acquired 3PAR, XP-Series revenue has continued to decline annually, as HP places more go-to-market weight behind the 3PAR StoreServ 10000 offering.

Huawei OceanStor 18000
The OceanStor 18000 storage array supports both scale-up and scale-out capabilities. Data flows are built around Huawei’s Smart Matrix switch, which interconnects as many as 16 controllers, each configured with its own host connections and cache, with back-end storage directly connected to each engine. Hardware build quality is good, and shows attention to detail in packaging and cabling. The feature set includes storage-efficiency features, such as thin provisioning and autotiering, snapshots, synchronous and asynchronous replication, QoS that nondisruptively rebalances workloads to optimize resource utilization, and the ability to virtualize a limited number of external storage arrays.

Software is grouped into four bundles and is priced on capacity, except for path failover and load-balancing software, which is priced by the number of attached hosts to encourage widespread usage. The compatibility support matrix includes Windows, various Unix and Linux implementations, VMware (including VAAI and vCenter Site Recovery Manager support) and Hyper-V. Offsetting these strengths are relatively limited integration with various backup/restore products, configuration and management tools that are more technology- than ease-of-use-oriented, a lack of documentation and storage administrators familiar with Huawei, and a support organization that is largely untested outside mainland China.

IBM DS8870
The DS8870 is a scale-up, two-node controller architecture that is based on, and dependent on, IBM’s Power server business. Because IBM owns the z/OS architecture, IBM has inherent cross-selling, product integration and time-to-market advantages in supporting new z/OS features, relative to its competitors. Snapshot and replication capabilities are robust, extensive and relatively efficient, as shown by features such as FlashCopy; synchronous, asynchronous and three-site replication; and consistency groups that can span arrays. The latest significant DS8870 updates include Easy Tier improvements, as well as a High Performance Flash Enclosure, which eliminates earlier, SSD-related architectural inefficiencies and boosts array performance. Even with the addition of the Flash Enclosure, the DS8870 is no longer IBM’s highest-performance system, and data reduction features are not available unless extra SVC devices are purchased in addition to the DS8870.

Overall, the DS8870 is a competitive offering. Ease-of-use improvements have been achieved by taking the XIV management GUI and implementing it on the DS8870. However, customers report that the new GUI still requires a more detailed administrative approach, and is not yet as suited to high-level management as the XIV icon-based GUI. Due to the dual-controller design, major software updates can disable one of the controllers for as long as an hour. These updates need to be planned, because they can reduce the availability and performance of the system by as much as 50% during the upgrade process. With muted traction in VMware and Microsoft infrastructures, IBM positions the DS8870 as its primary enterprise storage platform to support z/OS and AIX infrastructures.
IBM XIV
The current XIV is in its third generation. The freedom from legacy dependencies is apparent from its modern, easy-to-use, icon-based operational interface, and a scale-out distributed processing and RAID protection scheme. Good performance and the XIV management interface are winning deals for IBM. This generation enhances performance with the introduction of SSDs and a faster IB interconnect among the XIV nodes. The advantages of the XIV are simple administration and inclusive software licenses, which make buying and upgrading the XIV simple, without hidden or additional storage software license charges. The mirror RAID implementation yields a raw-versus-usable capacity ratio that is not as efficient as traditional RAID-5/6 designs; therefore, usable scalability reaches only 325TB. However, together with inclusive software licensing, the XIV usable capacity is priced accordingly, so that the price per TB is competitive in the market.
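The capacity trade-off behind the mirrored design is simple arithmetic: mirroring keeps two copies of every block, so usable capacity is roughly half of raw, whereas a parity scheme such as RAID-6 loses only the parity drives' share of each group. A sketch (the 650TB raw figure is an assumption chosen so the mirrored result lines up with the 325TB usable ceiling mentioned above):

```python
def mirrored_usable(raw_tb):
    """Mirroring (RAID-1 style) keeps two copies of every block."""
    return raw_tb / 2.0

def raid6_usable(raw_tb, data_drives=6, parity_drives=2):
    """RAID-6 overhead is the parity drives' share of each group."""
    return raw_tb * data_drives / (data_drives + parity_drives)

raw = 650.0  # hypothetical raw capacity, TB
print(mirrored_usable(raw))  # 325.0
print(raid6_usable(raw))     # 487.5
```

The gap between the two results is the efficiency penalty the text describes, which inclusive licensing and per-usable-TB pricing are meant to offset.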

A new Hyper-Scale feature enables IBM to federate a number of XIV platforms to create a petabyte+ scale infrastructure under the Hyper-Scale Manager to enable the administration of several XIV systems as one. Positioned as IBM’s primary high-end storage platform for VMware, Hyper-V and cloud infrastructure deployments, IBM has released several new and incremental XIV enhancements, foremost of which are three-site mirroring, multitenancy and VMware vCloud Suite integration.

NetApp FAS8000
The high-end FAS series model numbers were changed from FAS6000 to FAS8000. The upgrade included faster controllers and storage virtualization built into the system and enabled via a software license. Because each FAS8000 HA node pair is a scale-up, dual-controller array, the FAS8000 series must be configured with at least four nodes managed by Clustered Data Ontap to qualify for inclusion in this Critical Capabilities research. Clustered Data Ontap supports a maximum of eight nodes for deployment with SAN protocols and up to 24 nodes with NAS protocols. Depending on drive capacity, Clustered Data Ontap can support a maximum raw capacity of 2.6PB to 23.0PB in a SAN infrastructure, and 7.8PB to 69.1PB in a NAS infrastructure.

The FAS system is no longer the flagship high-performance, low-latency storage array for NetApp customers that value performance over all other criteria. They can now choose NetApp products such as the FlashRay. Seamless scalability, nondisruptive upgrades, robust data service software, storage-efficiency capabilities, flash-enhanced performance, unified block-and-file multiprotocol support, multitenant support, ease of use and validated integration with leading ISVs are key attributes of an FAS8000 configured with Clustered Data Ontap.

Oracle FS1-2
The hybrid FS1-2 series replaces the Oracle Pillar Axiom storage arrays and is the newest array family in this research. Even though the new system has fewer SSD and HDD slots, capacity scalability is increased by approximately 30%, to a total of 2.9PB, which includes up to 912TB of SSD. The design remains a scale-out architecture with the ability to cluster eight FS1-2 pairs together. The FS1 has an inclusive software licensing model, which makes upgrades simpler from a licensing perspective. The software features included within this model are QoS Plus, automated tiered storage, thin provisioning, support for up to 64 physical domains (multitenancy) and multiple block-and-file protocol support. However, if replication is required, the Oracle MaxRep engine is a chargeable optional extra.

The MaxRep product provides synchronous and asynchronous replication, consistency groups and multihop replication topologies. It can be used to replicate and, therefore, migrate older Axiom arrays to newer FS1-2 arrays. Positioned to provide best-of-breed performance in an Oracle infrastructure, the FS1-2 enables Hybrid Columnar Compression (HCC) to optimize Oracle Database performance, as well as engineered integration with Oracle’s VM and its broad library of data management software. However, the FS1 has yet to fully embrace integration with competing hypervisors from VMware and Microsoft.

Critical Capabilities Rating
Each product or service that meets our inclusion criteria has been evaluated on several critical capabilities on a scale from 1.0 (lowest ranking) to 5.0 (highest ranking). Rankings (see Table 3) are not adjusted to account for differences in various target market segments. For example, a system targeting the SMB market is less costly and less scalable than a system targeting the enterprise market, and would rank lower on scalability than the larger array, despite the SMB prospect not needing the extra scalability.

Table 3. Product/Service Rating on Critical Capabilities

(Source: Gartner, November 2014)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.

Table 4. Product Scores in Use Cases

(Source: Gartner, November 2014)

To determine an overall score for each product/service in the use cases, multiply the ratings in Table 3 by the weightings shown in Table 2.


Magic Quadrant for Solid-State Arrays – 28 August 2014

Figure 1. Magic Quadrant for Solid-State Arrays

28 August 2014 ID:G00260420

Analyst(s): Valdis Filks, Joseph Unsworth, Arun Chandrasekaran


Solid-state arrays provide performance levels an order of magnitude faster than disk-based storage arrays at competitive prices per GB, enabled by in-line data reduction and lower-cost NAND SSD. This Magic Quadrant will help IT leaders better understand SSA vendors’ positioning.


Market Definition/Description

This Magic Quadrant covers SSA vendors that offer dedicated SSA product lines positioned and marketed with specific model numbers, which cannot be used as, upgraded to or converted into general-purpose or hybrid storage arrays. SSA is a new subcategory of the broader external controller-based (ECB) storage market.

Considering the potential disruptive nature of SSAs on the general-purpose ECB disk storage market, Gartner has elected to report only on vendors that qualify as an SSA. We do not consider solid-state drive (SSD)-only general-purpose disk array configurations in this research. To meet these inclusion criteria, SSA vendors must have a dedicated model and name, and the product cannot be configured with hard-disk drives (HDDs) at any time. These systems typically (but not always) include an OS and data management software optimized for solid-state technology.

Magic Quadrant

Source: Gartner (August 2014)

A vendor’s position on the Magic Quadrant should not be equated with its product’s attractiveness or suitability for every client’s requirements. If the solutions better fit your needs, have the appropriate support capabilities and are attractively priced, then it is perfectly acceptable to acquire solutions from vendors that are not in the Leaders quadrant.

Vendor Strengths and Cautions

Cisco
Cisco entered the SSA market through the acquisition of Whiptail in 2013. Whiptail had launched its product family in 2012. Cisco has incorporated the product family and re-engineered it into the Cisco UCS Invicta Series. The portfolio consists of the UCS Invicta appliance and UCS Invicta Scaling System products. The UCS Invicta is a 2U array, while the Scaling System can scale up to six nodes. Through the acquisition of Whiptail, Cisco is aiming to deliver a tightly coupled, high-performance, flash-memory-based technology to complement its UCS fabric-based infrastructure. Whiptail customers will continue to be supported by Cisco. However, the new product is undergoing a significant refresh that standardizes on Cisco hardware designs and administration software to better integrate with UCS compute and management tools.

Strengths
  • The Invicta product line has a modular and extensible scale-out architecture, which provides implementation flexibility to customers in consolidating and converging workloads.
  • The Cisco UCS Director integration for the Invicta product family will enable Cisco customers to gain better operational simplicity.
  • Whiptail customers will benefit from Cisco’s deep technology partnerships with key independent software vendors (ISVs) that will result in more validated designs and reference guides.

Cautions
  • Product delays and changing position statements are expected with the Invicta product, because it is going through a transition and conflicts with Cisco alliances, such as the EMC VCE and NetApp FlexPod.
  • Cisco currently has a relatively small professional services and support team dedicated to SSAs, with a limited presence outside the U.S.
  • Cisco has been slow and reticent in providing guidance on how these products will integrate and be managed within the UCS fabric postacquisition.

EMC
EMC has two SSD-based products in the SSA market: (1) the XtremIO scale-out technology, which EMC acquired in May 2012; and (2) the VNX-F array, which is based on the traditional general-purpose VNX unified storage array and exploits the proven VNX HDD-based hardware controllers and software. Both offerings are positioned and sold as dedicated SSAs. EMC has a large and relatively loyal installed base for the XtremIO products. EMC has a significant and broad, but overlapping, SSD product portfolio. The portfolio will be enhanced by EMC’s acquisition of DSSD and its technology, which will initially be positioned as an extreme performance networked appliance. EMC has been a vocal visionary concerning SSD for more than a decade, but its market-leading messaging has outpaced some of its product introductions. Compared with competitor SSAs, the XtremIO product was late to market and became generally available only in November 2013. With a concrete offering, XtremIO, together with VNX-F, has enabled EMC to grab the No. 4 market share position in the SSA segment for 2013. EMC has gained traction for the XtremIO product, and has continued its momentum through 1H14 via concerted sales efforts and competitive pricing.

Strengths
  • EMC has a highly successful global sales force, exceptional marketing, and highly rated support and maintenance capability.
  • Large and loyal EMC customers have been provided with early products and attractive competitive introductory pricing. These customers can expect beneficial purchase terms.
  • XtremIO offers inclusive software pricing, and customers do not have to budget, track or purchase extra licenses when capacity is upgraded.

Cautions
  • EMC is offering XtremIO at competitive prices to its installed base, but transparency of information (such as list prices, discount levels and independent performance benchmarks) is unavailable. To avoid hidden future costs, customers should fix all XtremIO purchases and upgrades at these competitive introductory prices.
  • VNX-F includes data reduction in the base system price. Unlike XtremIO and most competitors’ offerings, VNX-F still uses a traditional licensing structure, which requires customers to pay additional support and license charges for other upgrades and extra features (such as data protection suite).
  • While XtremIO’s product integration with ViPR has been announced, it is not currently available. Given the product overlap between the XtremIO and VNX-F products, operational and administration complexity is an issue.

HP

HP is one of the late entrants into the SSA market, with availability of its HP 3PAR StoreServ 7450 model in June 2013. While HP is relatively new to the SSA market with its own product, it previously had an OEM partnership with Violin Memory, which ended in late 2011 in favor of HP's organic approach. The 3PAR storage architecture is sufficiently flexible to exploit SSD media, complete with purpose-built SSA features. Compared with EMC and IBM, HP has not aggressively marketed and sold the product, nor broadly mined its installed base. HP has almost entirely leveraged its 3PAR hardware architecture and management platform, but has made some important enhancements centered on efficiently maximizing the resident SSD technology. This affords HP a cost-effective approach, as well as robust reliability that can be backed with solid warranty terms, including a five-year SSD warranty and a six 9s (99.9999%) availability guarantee for four-node deployments.
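
The six 9s figure above converts directly into an annual downtime budget. The sketch below applies the standard availability-to-downtime arithmetic; only the 99.9999% guarantee comes from the text, and the comparison tiers are included for context:

```python
# Convert an availability percentage into a yearly downtime budget.
# The six 9s (99.9999%) figure is HP's guarantee for four-node
# deployments; the other tiers are shown for comparison only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # using the Julian year

def downtime_seconds_per_year(availability_pct: float) -> float:
    """Seconds of permitted downtime per year at a given availability."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three 9s", 99.9), ("five 9s", 99.999), ("six 9s", 99.9999)]:
    print(f"{label} ({pct}%): {downtime_seconds_per_year(pct):,.1f} s/year")
```

At six 9s, the budget is roughly 31.6 seconds of downtime per year, versus almost nine hours at three 9s, which is why a guarantee at this level is a meaningful warranty term.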

Strengths

  • HP has leveraged its hardware and storage software design, which are sufficiently modern and flexible enough to accommodate the nuances of solid-state technology and to implement new data reduction services.
  • HP 3PAR StoreServ 7450 offers a proven compatibility matrix for a broad variety of application workloads, cost-effective thin provisioning, and a familiar interface for customers, as well as a scale-out architecture.
  • HP has an extensive channel presence, global sales ability and a substantial customer base that is complemented with worldwide support and service capabilities.

Cautions

  • Customers need to request more evidence of ROI to distinguish HP's product functionality and capability from those of other SSAs and general-purpose arrays.
  • Despite the familiarity gained by HP’s leveraging its storage architecture, its media reporting abilities need further refinement.
  • Some client references have had limited visibility into HP’s SSA product strategy, and HP and its partners have limited mind share in the market.

Huawei

Huawei was an early entrant in the SSA market, with the launch of OceanStor Dorado in mid-2011, when it was a joint venture with Symantec. Since then, Huawei has acquired Symantec’s stake, announced successive generations, maintained the investment and expanded the product line. Huawei has an aggressive sales approach, offering steep discounts off the list price for qualified enterprise customers. Its maintenance and support pricing (as a percentage of capital expenditure [capex]) tends to be lower than many competitors’ pricing and is backed by a large postsales support team concentrated in Asia/Pacific. To further improve the transparency and competitiveness of its SSA products, Huawei has been aggressive in submitting performance details to public performance benchmarks (such as the Storage Performance Council SPC-1).

Strengths

  • Huawei is a large, profitable enterprise storage vendor, offering customers a well-rounded storage portfolio in emerging markets.
  • Huawei has committed significant R&D dedicated to SSAs, which has resulted in the design and development of its application-specific integrated circuit (ASIC)-based SSD controllers, SSDs and software capabilities.
  • The Dorado product family delivers competitive pricing and performance, and supports a large ecosystem of ISVs, including commonplace hypervisors and VMware APIs.

Cautions

  • Huawei’s reseller network, professional services and support capabilities in the U.S. tend to be weak, due to brand perception and execution challenges.
  • Pricing is still on an a la carte basis, charging for individual data service modules, while most other vendors are gravitating toward unified, all-inclusive base pricing.
  • Huawei’s channel partner ecosystem continues to be weak, which presents challenges for enterprise customers looking for detailed workload profiling, multisite implementations and best-practice guidance.

IBM

IBM acquired Texas Memory Systems (TMS) in September 2012, and subsequently announced in April 2013 that it would invest $1 billion in all aspects of flash (SSD) storage technology. IBM has leveraged its storage technology, specifically Storwize compression software and the IBM SAN Volume Controller (SVC) layer, which has been placed on top of the FlashSystem array to provide high-level data services. TMS had a successful track record of producing low-latency storage, using DRAM for over 30 years and flash-based storage for nearly 10 years. The IBM-engineered FlashSystem products are available as a stand-alone storage enclosure — the FlashSystem 840 — which has limited software features. In March 2014, IBM made available the FlashSystem V840, which combines the storage enclosure with the FlashSystem control enclosure to provide data services such as compression, mirroring, thin provisioning and replication.

This usage of the SVC for the FlashSystem control enclosure follows a pattern within IBM's storage division, where the SVC is placed on top of many IBM products (such as the DS8000, Storwize V7000 and XIV storage arrays) to provide a common and interoperable platform abstracting the diverse products beneath it, an approach that has internal cost and reuse advantages. However, across so many diverse devices, managing compatibility, fixes, and software and hardware regression testing among a rapidly growing number of software and hardware platforms increases dependencies among product lines. Basic storage controller features — such as redundant array of independent disks (RAID), hot code load, controller failover, port failover, caching and administration software — are duplicated in the storage enclosure (FlashSystem 840) and the control enclosure (SVC). Compared with competitors, IBM charges separately for higher-level features such as compression.

Strengths

  • Within the SSA market, the TMS platform has one of the longest proven track records with respect to array performance.
  • There is a quick and short learning curve for IBM Storwize V7000 and SVC customers, because the same SVC-based management interface is used on many other key IBM storage product lines.
  • IBM has successfully exploited its system company advantage and has cross-sold the FlashSystem into its customer base through direct and indirect channel incentives and bundling discounts with SVC.

Cautions

  • Compared with the FlashSystem V840, the FlashSystem 840 has limited data services and will require IBM or non-IBM virtualization products for data services.
  • The FlashSystem 840 is dependent on the SVC product line to provide data services, such as compression, thin provisioning, snapshots and mirroring, among other features, for additional costs.
  • Clients starting with the FlashSystem 840 that later decide they require extra storage features will need to purchase extra SVC-based hardware. This increases the operating expenditure (opex) considerations (such as wiring, power, cooling and physical rack space requirements) compared with the FlashSystem 840 by itself.

Kaminario

Kaminario was founded in 2008 and is headquartered in Newton, Massachusetts, but product development is concentrated in Israel. Kaminario is one of the more resilient SSA vendors, having shipped product for more than three years, and is now on its fifth-generation product. The Kaminario K2 has undergone several iterations of its system features in hardware and, most recently, data management software, as it has migrated from its initial DRAM appliance approach of 2011. Kaminario performs well across many public benchmarks, which is appealing given its ability to scale out and scale up. However, because its marketing efforts have succeeded only recently, many companies remain unaware of Kaminario, which lacks the market awareness and mind share of the established storage vendors and some of the newer startups.

Strengths

  • Kaminario has an advantageous scale-up/scale-out architecture that utilizes flexible storage efficiency and resiliency technologies to maximize cost structure and SSD longevity.
  • The vendor has been providing customers with a guarantee program for an average of $2 per GB effective capacity and a seven-year unconditional SSD endurance warranty, which has helped promote customer confidence in Kaminario.
  • Kaminario offers strong R&D and engineering support, with key technologies protected by 34 patents as of June 2014.

Cautions

  • The vendor’s presence is concentrated in the U.S. and Europe for sales and support coverage.
  • As a relatively small organization, Kaminario has limited marketing ability to gain mind share, which is important in order to expand its sales channel bandwidth and long-term viability.
  • Like most startups, Kaminario is not currently profitable, and will require another round of funding to sustain itself.

NetApp

NetApp announced the first EF array model in February 2013, and updated it with the EF550 in November 2013, helping continue its product momentum. Compared with smaller SSA startups, NetApp was a late entrant to the SSA market. However, NetApp was able to reuse existing products and technology, as the EF Series is based on the mature E Series hardware and the SANtricity platform obtained through the acquisition of LSI's Engenio business. This has created a positioning and sales challenge between the EF and FAS products that must be carefully managed. The EF Series is targeted at workloads that need high performance. Unlike the FAS Series, the EF Series is primarily sold through a direct sales force. NetApp's customers and prospects can elect to deploy the EF Series, choose the recently productized All-Flash FAS offerings, or wait for the launch of FlashRay in late 2014. Although FlashRay has been delayed thus far, NetApp claims it will be a dedicated SSA product built from the ground up and optimized for SSD technology.

Strengths

  • NetApp has a deep understanding of SSDs. Its diverse portfolio of SSD offerings features good workload analysis tools that can profile applications and match them to the right products, helping customers rightsize their environments from several perspectives: reliability, availability, serviceability, manageability and performance.
  • With the EF Series, NetApp has changed its pricing structure to an all-inclusive one, which simplifies license management during upgrades and long-term budgeting.
  • The EF Series provides support for a wide variety of high-speed interconnect protocols, including FC, Internet SCSI (iSCSI), SAS and InfiniBand.

Cautions

  • With the scheduled launch of FlashRay, which has been in development for more than two years, the EF Series needs to compete for product development, marketing and sales dollars within NetApp, which raises questions about the long-term viability of the EF Series product line.
  • The EF Series uses more reliable, but more expensive, enterprise-grade SSD (single-level cell [SLC] and enterprise multilevel cell [eMLC]) and, given the lack of any data reduction capabilities, it may not be cost-competitive for diverse workloads.
  • The EF Series has a complex graphical user interface (GUI) compared with newer designs from competitors, and ONTAP/FAS customers will require new skills to operate and administer the EF Series.

Nimbus Data

Nimbus Data was founded in 2006, and is headquartered in San Francisco, California. The vendor has taken a vertically integrated approach in terms of hardware and software to deliver dense, cost-effective arrays that appeal to a variety of customers and application workloads. Many of the vendor’s initial deployments came from a concentrated customer base that included several hyperscale customers. Nimbus Data doubled its revenue in 2013 year over year, but has suffered from high employee churn and skepticism among some companies in the market. It continues to deliver public case studies and references to improve customer perception. Ultimately, it will need to be more transparent about its business operations, and to scale its business to capably meet future customer needs for sales and support across key geographies.

Strengths

  • Nimbus Data has an aggressive pricing strategy predicated on advanced SSD memory and density enabled by a vertically integrated hardware approach.
  • Its offering has broad workload applicability, with multiprotocol support, all-inclusive software pricing and a comprehensive data service feature set appealing to a diverse customer set, ranging from Web scale to conventional data center environments.
  • Nimbus Data claims to have a profitable business since 2013, and, with no external funding, has been able to navigate its direction without influence from investors.

Cautions

  • The vendor has a thin (streamlined) management team, with limited scalability, succession resources and responsibility-sharing ability; executive decision making is driven by a highly centralized, top-down approach, which is problematic for long-term viability.
  • Sales and product support and services are limited, and provided from a relatively small organization with selective geographic penetration.
  • Nimbus Data’s business model is based on large, performance-oriented accounts with a limited ability to grow into a diversified customer base in terms of revenue share, creating viability concerns due to customer concentration.

Pure Storage

Pure Storage was founded in 2009 with a business plan to create a new, dedicated SSA and to grow organically, rather than to achieve quick wins or the largest market share. The vendor's business model was not to be first to market, but to be a more financially stable and sustainable long-term business. This business model has been successful to date, and Pure Storage has managed to gain significant investments, thereby achieving financial stability. It signed a cross-licensing deal with IBM to protect itself with key storage system intellectual property (IP), and has a go-to-market strategy stimulated by an aggressive channel partner program. Pure Storage has a relatively mature platform — the FA-400 Series — and a proven data reduction implementation. A transparent attitude toward pricing and guaranteed efficiency has achieved significant mind share and attention in the SSA market, promoted via creative, but pointed, marketing campaigns. Similarly, innovative and competitive inclusive software licensing and inclusive controller upgrade programs (offered when customers pay full support and maintenance costs) have proven to be a fresh and welcome approach that challenges and disrupts the established incumbent SSA and general-purpose disk array vendors' license schemes and forklift product replacement cycles.

Strengths

  • Pure Storage has a solid financial base supported by funding totaling more than $470 million to date. Its success and growth, combined with a unique culture, help attract world-class talent, with head count exceeding 680 as of August 2014 — all of which helps negate near-term viability concerns.
  • The vendor has an efficient product cost structure based on low-cost consumer MLC (cMLC) PC SSDs.
  • Innovative marketing, purchasing, trial and product renewal programs create a product that is simple to buy, install and manage.

Cautions

  • Data reduction is not selectable, and usable capacity is relatively low if the workload and data are not suitable for data reduction.
  • The vendor can be outperformed in the highest input/output (I/O) and low-latency environments.
  • The vendor takes a traditional scale-up approach, with limited raw capacity scalability and a large physical footprint.
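
The first caution above is largely arithmetic: with data reduction always on, the effective cost per GB depends entirely on how reducible the workload is. A minimal sketch with hypothetical figures (the price, capacity and ratios below are illustrative assumptions, not vendor-published numbers):

```python
# Effective capacity and cost per GB under data reduction.
# All figures here are illustrative assumptions, not vendor pricing.

def effective_capacity_tb(usable_tb: float, reduction_ratio: float) -> float:
    """Usable capacity multiplied by the achieved data reduction ratio."""
    return usable_tb * reduction_ratio

def effective_cost_per_gb(system_price: float, usable_tb: float,
                          reduction_ratio: float) -> float:
    """System price divided by effective (post-reduction) capacity in GB."""
    return system_price / (effective_capacity_tb(usable_tb, reduction_ratio) * 1000)

# A hypothetical 10TB-usable array priced at $100,000:
for ratio in (1.0, 3.0, 6.0):  # 1.0:1 = incompressible, non-dedupable data
    print(f"{ratio}:1 reduction -> "
          f"${effective_cost_per_gb(100_000, 10, ratio):.2f}/GB effective")
```

At a 6:1 ratio the hypothetical array lands well under $2/GB effective, but at 1:1 (already-compressed or encrypted data) the buyer pays the full $10/GB, which is the scenario the caution describes.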

Skyera

The single-controller Skyera skyHawk platform became available in April 2014. Because Skyera is not using existing enterprise SSDs or components, and has had challenges delivering products to market on time, it still does not have a standard high-availability dual-controller array. However, the vendor has been a thought leader, challenging the established incumbent disk array hegemony, and is an innovative visionary in the industry, creating a purpose-built system designed from the SSD chip level upward by exploiting the most cost-effective, advanced SSD memory technology. This unique hardware approach enables Skyera to drive down SSA costs to levels that compete with general-purpose disk arrays. Data reduction takes the form of compression, which further improves storage utilization and the usable cost per GB. The next-generation skyEagle system will have a more highly available data center hardware architecture, with features such as dual power supplies and dual controllers. Skyera is a probable acquisition target, even though it has considerable strategic investment, including public investors Dell and Western Digital (WD), among others.

Strengths

  • Skyera has a low-cost-oriented value proposition that debunks the premise of expensive SSAs.
  • The vendor provides a solid value proposition, with good remote support and high precompression density per form factor: 57TB of raw capacity (44TB formatted, before data reduction) per 1 rack unit at half depth.
  • Skyera offers an unconditional warranty for system replacement.

Cautions

  • The skyHawk can only be used in high-availability environments if a storage virtualization layer is used to provide high-level abstraction features to mirror data and to provide failover between two separate skyHawk arrays.
  • Data management software is limited in the skyHawk. Most software features are included in the base price, except for compression, which is separately licensed.
  • Companies looking for long-term viability should realize that Skyera has a limited customer base and limited product revenue, and has been actively pursuing another round of funding since its last round in February 2013.

SolidFire

SolidFire is a privately held, venture-capital-funded company that makes scale-out SSAs. SolidFire is an emerging company that is not yet profitable, and its product has been generally available for less than two years. Its SF Series product line became available in November 2012. SolidFire's initial focus was on service providers offering high-performance infrastructure as a service (IaaS) and, while this segment continues to be a key focus, recent product launches and go-to-market initiatives have widened the focus toward enterprise buyers. SolidFire is highly differentiated from its competitors through its scale-out capabilities, rich software features and ability to guarantee storage performance. Management of the platform is built around the Web-scale principles of automation, quality of service (QoS) and API-based access. The product has close integration with cloud management platforms, such as OpenStack, CloudStack and the VMware vCloud suite. Pricing is simple and all-inclusive, and appeals to traditional enterprise users.

Strengths

  • SolidFire’s ability to deliver high scalability in capacity and performance makes it an attractive platform for running next-generation cloud and big data workloads.
  • SolidFire puts a high degree of emphasis on keeping costs low through its usage of cMLC-based PC SSDs and no-charge data reduction features, such as compression and deduplication, which are always turned on and operate in-line.
  • The QoS and multitenancy allow customers to run multiple workloads in isolation with guaranteed performance, eliminating disruption or degradation from unwieldy workloads.

Cautions

  • The initial acquisition costs are high, even for SolidFire's low-end platforms, given that a cluster requires at least four nodes.
  • SolidFire has limited field services and support personnel outside the U.S. and the U.K.
  • Given that a high portion of its revenue is generated from a direct sales force, enterprise customers need to be cautious regarding the availability of reseller partners for implementation and support.

Violin Memory

Violin Memory is a pioneer in the SSA industry. Founded in 2005, it has earned revenue since 2010. The vendor's foundation is its hardware approach, predicated on SSD-chip-level system expertise and built on aggregating removable Peripheral Component Interconnect Express (PCIe) dual in-line memory modules (DIMMs). This approach enables Violin Memory to offer a high-performance, resilient system with one of the most competitive pricing structures on the market, due to its strong relationship with SSD memory manufacturer Toshiba. However, Violin Memory has had financial challenges since its initial public offering (IPO) in September 2013, when disappointing sales and a weak financial outlook forced the company to take drastic action. A fresh management team has been in place since early 2014. It has refocused on the vendor's core customers by paring back the sales force and pursuing a channel approach targeted at key geographies. Violin Memory has been trying to exploit software from its 2013 acquisition of GridIron Systems, in an effort to complement its hardware with a portfolio of native data management software features that debuted in late June 2013. Violin Memory divested its PCIe SSD business for $23 million in June 2014.

Strengths

  • Violin Memory sources directly from SSD memory suppliers and its lead investor Toshiba to exploit hardware in terms of performance, density and price that translates to aggressive, final system prices for customers.
  • The 6000 series is available via several partnerships, such as reseller relationships with Dell, Fujitsu, Toshiba and NEC, and at the application level with Microsoft to deliver an optimized Windows Flash Array with software features tuned to Microsoft database, Server Message Block (SMB) and Network File System (NFS) environments.
  • The new management team is executing on a clear vision that eliminates distractions and proactively addresses customer, partner and investor needs.

Cautions

  • Violin Memory recently debuted a more cohesive data management software service strategy with its Concerto release, which appears promising but remains relatively untested.
  • Violin Memory has capable U.S. sales, support and services, but limited direct international sales. It will have to realign with channel partners to expand, which will take time and could complicate efforts for small or midsize businesses (SMBs).
  • Violin Memory’s financial stability, primarily the rate that it is burning cash, is a reason for caution. The vendor is likely to be an acquisition target if its profitability does not improve in 2015.

Vendors Added and Dropped

We review and adjust our inclusion criteria for Magic Quadrants and MarketScopes as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant or MarketScope may change over time. A vendor’s appearance in a Magic Quadrant or MarketScope one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. It may be a reflection of a change in the market and, therefore, changed evaluation criteria, or of a change of focus by that vendor.

Added

This is a new Magic Quadrant

Dropped

This is a new Magic Quadrant

Inclusion and Exclusion Criteria

To be included in the Magic Quadrant for SSA, a vendor must:

  • Offer a self-contained, SSD-only system that has a dedicated model name and model number (see Note 1).
  • Have an SSD-only system that is initially sold with 100% SSDs and cannot, at any point, be reconfigured, expanded or upgraded with any form of HDD — whether within expansion trays, via a vendor's special upgrade, via customer-specific customization or via a vendor product exclusion process — into a hybrid or general-purpose SSD-and-HDD storage array.
  • Sell its product as a stand-alone product, without the requirement to bundle it with other vendors’ storage products in order to be implemented in production.
  • Provide at least five references that Gartner can interview. There must be at least one client reference from Asia/Pacific, EMEA and North America, or the two geographies within which the vendor has a presence.
  • Provide an enterprise-class support and maintenance service, offering 24/7 customer support (including phone support). This can be provided via other service organizations or channel partners.
  • Have established a notable market presence, as demonstrated by the number of petabytes (PB) sold, the number of clients or significant revenue.

The product and a service capability must be available in at least two of the following markets — Asia/Pacific, EMEA and North America — via direct or channel sales. Availability does not include hybrid (SSD, HDD) storage arrays.

The SSAs evaluated in this research include scale-up, scale-out and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, and forecast growth rates and asset management strategies.

While the SSA Magic Quadrant represents vendors whose dedicated systems meet our inclusion criteria, ultimately, it is the application workload that governs which solutions you should consider, regardless of any criteria.

Other vendors and products were considered for the Magic Quadrant but did not meet the inclusion criteria, despite offering SSD-only configuration options to existing products. These vendors and/or specific products may warrant investigation based on your application workload needs for their SSD-only offerings:

  • American Megatrends (AMI)
  • Dell Compellent Storage Solutions
  • Fujitsu Eternus DX200F
  • Fusion-io ION (acquired by SanDisk)
  • Hitachi Unified Storage (HUS) VM
  • IBM DS8000
  • NetApp FAS
  • Oracle ZFS
  • Tegile T-Series

Evaluation Criteria

Ability to Execute

We analyze the vendor’s capabilities across broad business functions. Vendors that have expanded their products across a wider range of use cases and applications, improved their service and support capabilities, and focused on improving mission-critical applications will be more highly rated in the Magic Quadrant analysis. Ability to Execute reflects the market conditions and, to a large degree, it is our analysis and interpretation of what we hear from the market. Our focus is assessing how a vendor participates in the day-to-day activities of the market.

Product or Service evaluates the capabilities of the products or solutions offered to the market. Key items to be considered for the SSA market are how well the products and/or services address enterprise use case needs, the critical capabilities of the product (see “Critical Capabilities for Solid State Arrays”) and breadth of product and/or solutions.

Overall Viability includes an assessment of the organization’s financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue to invest in the product, offer the product and advance the state of the art in the organization’s product portfolio.

Sales Execution/Pricing looks at the vendor’s capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support and the overall effectiveness of the sales channel.

Market Responsiveness/Record focuses on the vendor’s capability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the provider’s history of responsiveness.

Marketing Execution directly leads to unaided awareness (i.e., Gartner end users mentioned the vendor without being prompted) and a vendor’s ability to be considered by the marketplace. Vendor references, Gartner’s inquiries and end-user client search analytics results are factored in as a demonstration of vendor awareness and interest.

Customer Experience looks at a vendor’s capability to deal with postsales issues. Because of the specialized nature of the SSA market and the mission-critical nature of many of the storage environments, vendors are expected to escalate and respond to issues in a timely fashion with dedicated and specialized resources, and to have relevant detailed expertise. Another consideration is a vendor’s ability to deal with increasing global demands. Additional support tools and programs are indications of a maturing approach to the market.

Operations considers the ability of the organization to meet its goals and commitments. Factors include the quality of the organizational structure, including skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.

Table 1. Ability to Execute Evaluation Criteria
Evaluation Criteria Weighting
Product or Service High
Overall Viability High
Sales Execution/Pricing Medium
Market Responsiveness/Record High
Marketing Execution Medium
Customer Experience Medium
Operations Medium

Source: Gartner (August 2014)

Completeness of Vision

Completeness of Vision distills a vendor’s view of the future, the direction of the market and the vendor’s role in shaping that market. We expect the vendor’s vision to be compatible with our view of the market’s evolution. A vendor’s vision of the evolution of the data center and the expanding role of SSAs are important criteria. In contrast with how we measure Ability to Execute criteria, the rating for Completeness of Vision is based on direct vendor interactions, and on our analysis of the vendor’s view of the future.

Market Understanding looks at the technology provider’s capability to understand buyers’ needs, and to translate those needs into an evolving road map of products and services. Vendors showing the highest degree of vision listen to and understand buyers’ wants and needs, and can shape or enhance those wants and needs with their added vision.

Marketing Strategy considers the vendor’s solution message: how it is described and communicated, which vehicles are used to deliver it effectively, and how well the buying public resonates with and remembers it. In a market where many vendors and/or products can sound the same, or sometimes are not even known, message differentiation and overall awareness are vital.

Sales Strategy considers the strategy for selling products that uses the appropriate network of direct and indirect sales, marketing, service and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.

Offering (Product) Strategy looks at a vendor’s product road map and architecture, which we map against our view of enterprise requirements. We expect product direction to focus on catering to emerging enterprise use cases for solid state arrays.

Business Model assesses a vendor’s approach to the market. Does the vendor have an approach that enables it to scale the elements of its business (for example, development, sales/distribution and manufacturing) cost-effectively, from startup to maturity? Does the vendor understand how to leverage key assets to grow profitably? Can it gain additional revenue by charging separately for optional, high-value features? Other key attributes in this market are reflected in how the vendor uses partnerships to increase sales. The ability to build strong partnerships with a broad range of technology partners and associated system integrators demonstrates leadership.

Vertical/Industry Strategy measures the vendor’s strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including vertical markets.

Innovation measures a vendor’s ability to move the market into new solution areas, and to define and deliver new technologies. In the SSA market, innovation is key to meeting rapidly expanding requirements and to keeping ahead of new (and often more-agile) competitors.

Geographic Strategy measures the vendor’s ability to direct resources, skills and offerings to meet the specific needs of geographies outside the “home” or native geography, either directly or through partners, channels and subsidiaries as appropriate for that geography and market.

Table 2. Completeness of Vision Evaluation Criteria
Evaluation Criteria Weighting
Market Understanding High
Marketing Strategy Medium
Sales Strategy Medium
Offering (Product) Strategy High
Business Model High
Vertical/Industry Strategy Low
Innovation High
Geographic Strategy Medium

Source: Gartner (August 2014)

Quadrant Descriptions


Leaders

Vendors in the Leaders quadrant have the highest scores for their Ability to Execute and Completeness of Vision. A vendor in the Leaders quadrant has the market share, credibility, and marketing and sales capabilities needed to drive the acceptance of new technologies. These vendors demonstrate a clear understanding of market needs; they are innovators and thought leaders; and they have well-articulated plans that customers and prospects can use when designing their storage infrastructures and strategies. In addition, they have a presence in the five major geographical regions, consistent financial performance and broad platform support.


Challengers

Vendors in the Challengers quadrant participate in the SSA market and execute well enough to be a serious threat to vendors in the Leaders quadrant. They have strong products, as well as sufficiently credible market positions and resources to sustain continued growth. Financial viability is not an issue for vendors in the Challengers quadrant, but they lack the size and influence of vendors in the Leaders quadrant.


Visionaries

A vendor in the Visionaries quadrant delivers innovative products that address operationally or financially important end-user problems at a broad scale, but has not demonstrated the ability to capture market share or sustainable profitability. Visionary vendors are frequently privately held companies and acquisition targets for larger, established companies. The likelihood of acquisition often reduces the risks associated with installing their systems.

Niche Players

Vendors in the Niche Players quadrant often excel by focusing on specific market or vertical segments that are generally underpenetrated by the larger SSA vendors. This quadrant may also include vendors that are ramping up their SSA efforts, or larger vendors having difficulty in developing and executing upon their vision.


This Magic Quadrant represents vendors that sell into the enterprise end-user market with specific branded SSAs. Insatiable demand for storage also requires a more capable high-performance tier that can reliably deliver low-latency storage with tangible benefits. As demand for high-performance storage explodes, it will require even more storage administration, underscoring the perpetual need for storage efficiency, resiliency and manageability to counter this trend.

Market Overview

There has been growing demand for SSAs to meet the low-latency performance requirements of enterprise- and Web-scale applications. Over the last decade, CPU performance has improved by an order of magnitude, while the performance of HDDs within general-purpose storage arrays has stagnated, an increasingly pronounced divergence. SSAs have corrected this imbalance by temporarily satiating the demand for storage performance. This has led to the quick and successful adoption of SSAs: total SSA revenue in 2013 was $667 million, representing 182% year-over-year growth.

The SSA market witnessed a considerable uptake in adoption in 2013, fueled by significant and continued investments in startups and by established vendors opting to acquire emerging vendors, although some are still pursuing an organic approach to growth. Large incumbent system vendors, such as EMC, HP, IBM and NetApp, have focused on cross-selling their new SSA products to their established customers, thereby quickly obtaining large market shares. However, once this captive segment has been mined, a vendor’s ability to grow market share in the long term will be predicated on overall product ability, sales bandwidth and execution as it competes outside its installed base. Nearly half of the vendors in this Magic Quadrant have pursued a vertically integrated approach based on direct procurement of SSD memory, with the remaining vendors choosing to procure SSDs from external suppliers so they can focus on an SSD-optimized data management software strategy.

Between 2010 and 2012, most customers were interested primarily in high-performance, low-latency SSAs. Given the lack of available data management features, customers tolerated the feature shortcomings in favor of raw performance. As the initial storage performance issues were capably addressed, customers wanted to address multiple application workloads, which required a rich data management software portfolio consisting not only of storage efficiency and resiliency technologies purpose-built for SSAs, but also of the underlying SSD memory technology. During 2013, we witnessed the advent of comprehensive data management software features, such as deduplication, compression, thin provisioning, snapshots and replication technologies that, when specifically tailored to SSDs, can provide compelling benefits, particularly in application workloads that see favorable data reduction ratios. This trend of innovative and comprehensive data management software on the more mature SSA platforms has continued into 2014, and has started to permeate the application level, which will drive the industry in 2015 and beyond. It is through the synergy of cost-effective hardware and purpose-built software that the industry will consolidate further and reach maturity.

As this market matures and SSAs gain feature equivalency with general-purpose arrays, we expect decreasing differentiation between general-purpose storage arrays and SSAs. Vendors of general-purpose array product lines and server SSD cards have created specific array models filled entirely with SSD media. These models are tactical implementations that enable the vendors to market directly into the SSA segment while they create longer-term strategies or purpose-built SSAs. If these vendors maintain their investments in these general-purpose array SSD variations over a longer period and the variations prove not to be a viable tactical stopgap, they may need to create specific SSAs. The SSA market is distinct. It has matured from the early solid-state appliance offerings, because the data services provided are equivalent and, in certain cases (such as data reduction and administration), offer richer and improved features compared with general-purpose storage arrays. SSAs have matured to levels competitive with general-purpose storage arrays in all but scale. The average usable capacity of the SSAs purchased is approximately 38TB. The preferred connection protocol is Fibre Channel: 63% of all SSAs attached to servers use Fibre Channel, and 33% use the iSCSI protocol; NFS and Common Internet File System (CIFS) attachment is, therefore, rarely used. Online transaction processing (OLTP), analytics and server virtualization are the top three workloads that customers consider for SSAs, with virtual desktop infrastructure (VDI) being the fourth most popular workload. While the majority of SSA deployments are for a single workload, Gartner is seeing interest in converging multiple workloads on the same product, which, in many cases, is being enabled by features such as QoS.

Critical Capabilities for Solid-State Arrays

29 August 2014 ID:G00260421

Analyst(s): Valdis Filks, Joseph Unsworth, Arun Chandrasekaran


Solid-state arrays are capable of delivering significant improvements in performance, although high cost perceptions persist. This report analyzes 13 SSAs across six high-impact use cases and quantifies product attractiveness against seven critical capabilities that are important to IT leaders.



Key Findings

  • The most common use cases for solid-state arrays (SSAs) are online transaction processing (OLTP), analytics and virtual desktop infrastructure (VDI), with performance being an inordinately important factor in the selection.
  • SSAs are replacing high-end enterprise arrays configured for performance and are increasingly being used in business and mission-critical environments.
  • Although most organizations today have deployed SSAs in a silo for specific workloads, Gartner inquiries reveal a keen interest to harness them for multiple workloads, given the maturing data services.
  • The price gap between general-purpose storage arrays and SSAs is narrowing, particularly with products that exploit consumer-grade NAND flash/solid-state drives (SSDs) with in-line data reduction features.


Recommendations

  • Mitigate product immaturity concerns by choosing vendors that offer guarantees and unconditional warranties around availability, durability, performance and usable capacity.
  • Choose products that can deliver consistent performance across varying workloads, which are important in your current and future environment.
  • Use data reduction simulation tools to verify data reduction suitability for your data and workload.
  • Implement established SSAs in business and mission-critical environments because reliability has exceeded expectations.

What You Need to Know

Solid-state arrays are rapidly gaining adoption due to the significant performance advantages that customers can gain. The products from late entrants are rapidly catching up, with features on par with general-purpose arrays and established SSAs. The SSA market is divided between several pure-play emerging vendors that have built up hardware and software capabilities optimized for SSDs, and larger incumbent vendors that are moving aggressively to stay relevant in this important market segment through acquisitions and/or organic product development. Many vendors have chosen to take existing, proven general-purpose disk array software operating systems and array hardware designs, and adapt them to fully dedicated SSAs, which are then marketed and sold as dedicated SSA products. While this is a quick and economical method of getting an SSA to market, many existing general-purpose storage arrays were not designed for, and do not lend themselves to, use as SSAs because they were tuned for HDDs. In some cases, a bifurcated product line, in which some array models are tuned, maintained and patched for SSDs and others for HDDs, can become a software development and patch-consistency nightmare, complicating product problem determination and development for customers.

SSAs are used to consolidate performance, with most customers preferring to use block protocols with these storage systems. The total cost of ownership (TCO) and storage utilization of an SSA are becoming cost-competitive with general-purpose storage arrays, especially when the workloads are suitable for data reduction and the data reduction ratio is approximately 5-to-1. While performance benchmarks are important, many customers are moving beyond them to place a high degree of emphasis on features that can enhance SSD endurance and manageability, deliver high availability on par with general-purpose systems, and reduce TCO through data reduction. The performance gap between the leading SSA products is narrowing, which means customers can more closely consider data services, ecosystem, services and support as important factors during evaluation. Though the cost of SSDs is falling, only through in-line data reduction features can customers maximize the value of their SSD tier. In addition, data reduction features can extend the longevity of the SSD tier by reducing the volume of writes and erasures. Media reliability has not been an issue, due to features such as wear leveling and better error correction methods, which are also making it possible to use consumer-grade NAND flash and PC SSDs in solid-state arrays to lower acquisition costs.
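The cost-competitiveness argument above is simple arithmetic. The sketch below, using purely illustrative prices (not figures from this research), shows how a 5-to-1 data reduction ratio changes the effective cost per usable gigabyte:

```python
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Effective $/GB once in-line deduplication/compression shrinks the data.

    reduction_ratio is logical data written divided by physical capacity
    consumed (e.g., 5.0 for the 5-to-1 ratio cited above).
    """
    return raw_cost_per_gb / reduction_ratio

# Hypothetical prices for illustration only: $5/GB raw flash vs. $1/GB raw HDD.
ssa_effective = effective_cost_per_gb(5.00, 5.0)  # flash with 5:1 reduction
hdd_effective = effective_cost_per_gb(1.00, 1.0)  # HDD arrays rarely reduce in-line
```

At a 5-to-1 ratio, the effective flash cost lands in the same range as raw HDD cost, which is why workloads with favorable reduction ratios are the first candidates for consolidation onto SSAs.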

Customers should recognize that this is a highly dynamic market with a great number of product features and upgrades announced in 2014. You should choose solutions that do not require extensive storage infrastructure changes and redesign, and that are backed by strong services and support with an ability to deliver product enhancements and new features.

Within five years, consistent sub-500-μs storage I/O response times will become commonplace and will cease to be a performance differentiator. Today, however, any SSA vendor that can improve general-purpose HDD disk array performance to submillisecond levels, or by an order of magnitude, has a valuable product differentiator. When customer and service-level expectations of 150 μs become the norm, a 50 μs to 100 μs performance difference will be a significant criterion in purchasing decisions; today, this level of difference is not important. Software, price, support and data reduction matter more than 0.1 millisecond (ms) performance differences. Nevertheless, most SSA vendors still emphasize and sell “speeds and feeds,” whereas features such as data reduction are more important.

Product rating evaluation criteria considerations include:

  • Product features must have been in general availability by 30 July 2014 to be considered in the vendors’ product scores.
  • Ratings in this Critical Capabilities report should not be compared with other research documents, because the ratings are relative to the products analyzed in this report, not ratings in other documents.
  • Scoring for the seven critical capabilities and six use cases was derived from analyst research throughout the year and recent independent Gartner research on the SSA market. Each vendor responded in detail to a comprehensive, primary research questionnaire administered by the authors. Extensive follow-up interviews were conducted with all participating vendors, and reference checks were conducted with end users. This provided the objective process for considering the vendors’ suitability for the use cases.


Critical Capabilities Use-Case Graphics

Figure 1. Vendors’ Product Scores for Overall Use Case

Source: Gartner (August 2014)

Figure 2. Vendors’ Product Scores for Online Transaction Processing Use Case

Source: Gartner (August 2014)

Figure 3. Vendors’ Product Scores for Server Virtualization Use Case

Source: Gartner (August 2014)

Figure 4. Vendor Product Scores for Virtual Desktop Infrastructure Use Case

Source: Gartner (August 2014)

Figure 5. Vendors’ Product Scores for High-Performance Computing Use Case

Source: Gartner (August 2014)

Figure 6. Vendors’ Product Scores for Analytics Use Case

Source: Gartner (August 2014)


Cisco UCS Invicta Series

Cisco entered the solid-state array market through the acquisition of Whiptail in 2013. Cisco has completed the process of porting the Whiptail OS onto the Unified Computing System (UCS) hardware, with plans to integrate the administration of the array with UCS Manager. The product is also being rebranded as the Cisco UCS Invicta Series, replacing the previous Accela and Invicta product names. The product uses enterprise multilevel cell (eMLC) NAND SSD with in-line deduplication and thin provisioning. The product has support for FC and iSCSI with asynchronous replication capabilities, and recently announced snapshot support.

Hypervisor support is limited to VMware, and integration with other enterprise independent software vendors (ISVs) remains limited at this point. Public performance benchmarks are not widely available, leaving customers to use reference checks to verify claims of consistent performance. Microcode updates are disruptive, and the product currently lacks native encryption support. Customers purchasing the UCS and VCE integrated systems have the option of an EMC or Cisco SSA and, therefore, will be able to leverage competing options to obtain the best purchase price.


EMC XtremIO and VNX-F

The XtremIO product was designed from inception to efficiently use external SSDs and currently uses robust, but more costly, enterprise SAS eMLC SSDs to deliver sustained and consistent performance. It has a purpose-built, performance-optimized scale-out architecture that leverages content-based addressing to achieve inherent balance, always-on, in-line data reduction, optimal resource utilization in its storage layout, a flash-optimized data protection scheme called XDP, and a very modern, simple-to-use graphical user interface (GUI). XtremIO arrays presently scale out to six X-Bricks, with each X-Brick having dual controllers, providing a total of 120TB of physical flash, measured before the space-saving benefits of thin provisioning, data reduction and space-efficient writable snapshots. The addition of nodes currently requires a system outage, and upgrades to some version 3 features, such as compression, will also require a disruptive upgrade, which EMC will mitigate with professional services to avoid interruptions to hosts and applications. Unlike similar EMC scale-out architectures, such as the Isilon scale-out array, which stores data across nodes and can therefore sustain a node outage, an XtremIO cluster stores each block of data on only one X-Brick; consequently, blocks of data become inaccessible if a single X-Brick suffers a complete outage, such as a simultaneous loss of both controllers.
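Content-based addressing, as described above, derives a block’s storage location from a fingerprint of its contents, so identical blocks are stored only once and placement is inherently balanced. A toy illustration of the idea (a simplified sketch, not EMC’s implementation):

```python
import hashlib

class ContentStore:
    """Toy content-addressed block store: identical blocks are written once.

    Illustrative only; a real array also handles hash-bucket placement
    across controllers, garbage collection and metadata persistence.
    """
    def __init__(self):
        self.blocks = {}  # fingerprint -> physical block data
        self.refs = {}    # fingerprint -> logical reference count

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()  # address derived from content
        if fp not in self.blocks:              # new content: store it once
            self.blocks[fp] = data
        self.refs[fp] = self.refs.get(fp, 0) + 1
        return fp

    def dedup_ratio(self) -> float:
        logical = sum(self.refs.values())      # blocks the hosts wrote
        return logical / max(len(self.blocks), 1)  # vs. blocks actually stored
```

Because the address is the fingerprint, duplicate detection is a dictionary lookup rather than a separate scan, which is what makes always-on, in-line deduplication feasible.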


The lower-capacity 46TB VNX-F is based on the existing VNX unified general-purpose disk array. It has postprocess deduplication and a relatively more complex management interface due to the requirement to support the legacy (or its inherited) VNX architecture. We do not expect the VNX-F SSA and general-purpose VNX software and hardware architectures to diverge. As a result of the requirement for the software and hardware to support two different models/forks, new software features may take longer to become available due to the increased complexity of supporting two separate product lines and storage formats that use the same software stack and may require different fixes and firmware upgrades.

Different operational and administration GUIs for XtremIO, VNX-F and other products may require customers to purchase extra products to obtain a single management interface. Satisfaction guarantees and data service pricing for XtremIO are inclusive and simple, but customers need to buy the data protection suite as a separate package with the VNX-F.

HP 3PAR StoreServ 7450

The 7450 is based on the HP StoreServ general-purpose array architecture, which leverages HP’s proprietary application-specific integrated circuit (ASIC) and additional DRAM capacity. The design uses a memory-mapping look-up implementation similar to an operating system’s virtual-to-physical RAM translation, which is media-independent and lends itself well to virtually memory-mapped media such as SSDs. This attribute is particularly compelling because it uses the external SSDs efficiently, reducing the amount of overprovisioning required, and enables a lean cost structure by leveraging consumer-grade MLC SSDs. Another benefit is maximized SSD endurance, as granular, systemwide wear leveling extends the durability of the less reliable consumer MLC (cMLC) SSD media. Because the media-independent, memory-mapping 3PAR storage software architecture is implemented on both SSD and general-purpose array models, we do not expect a software bifurcation. However, with more model variations, there will be longer testing and qualification periods.
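The interplay between a logical-to-physical mapping layer and systemwide wear leveling can be sketched in a few lines. The class below is a deliberately simplified, hypothetical model (not HP’s ASIC implementation): every write is redirected to the least-worn free block, so erasures spread evenly across the media:

```python
class FlashTranslationLayer:
    """Toy logical-to-physical block map with greedy wear leveling.

    Illustrative only; real array firmware adds garbage collection,
    overprovisioning pools and per-die parallelism.
    """
    def __init__(self, physical_blocks: int):
        self.erase_counts = [0] * physical_blocks  # wear per physical block
        self.free = set(range(physical_blocks))    # currently unmapped blocks
        self.map = {}                              # logical -> physical

    def write(self, logical_block: int) -> int:
        # Redirect the write to the least-worn free block, spreading
        # erasures evenly across the media (wear leveling).
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        if logical_block in self.map:              # old copy is invalidated:
            old = self.map[logical_block]
            self.erase_counts[old] += 1            # erase it before reuse
            self.free.add(old)
        self.map[logical_block] = target
        return target
```

Repeatedly overwriting a single hot logical block still cycles through all physical blocks, which is exactly the property that lets less durable cMLC media survive enterprise write workloads.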

The system scales to larger capacities than most competitors, with a maximum raw capacity of 460TB when configured with 1.9TB SSDs. The array does not currently have full in-line deduplication and compression, but does exploit existing 3PAR zero block bit pattern matching and thin provisioning to improve storage efficiency. The array performs well in shared environments due to its mature multitenancy and quality of service (QoS) features. However, no file protocols are supported. Pricing of all data services is tied to the general-purpose array 3PAR model and is based on host and capacity, making it complex compared to new entrants. The 3PAR 7450 platform has an extensive and proven compatibility matrix and reliability track record that is supported with a six 9s (99.9999%) high-availability guarantee during the first 12 months.

Huawei OceanStor Series

Huawei entered the solid-state array market in 2011 with the launch of an entry-level array, the OceanStor Dorado2100. Since then, Huawei has launched the OceanStor Dorado5100, the second generation of the 2100 and, most recently, the OceanStor 18800F. Huawei currently uses self-developed SSDs built from single-level cell (SLC) and eMLC modules. The product supports thin provisioning, copy-on-write snapshots, and asynchronous and synchronous replication services. Customers can obtain competitively priced SSAs from Huawei because it has an aggressive sales approach, offering steep discounts off the list price for qualified enterprise customers. Its maintenance and support pricing (as a percentage of capital expenditure [capex]) also tends to be lower than many others’, backed by a large postsales support team that is highly concentrated in Asia.

Huawei’s R&D efforts in the past have been focused more on the hardware layer and only recently have software features started getting the attention that they deserve. Huawei’s products currently lack data reduction features such as deduplication and compression. Firmware upgrades are disruptive, and native encryption support is lacking in the array. However, Huawei provides performance transparency via SPC benchmarks and it is one of the few unified block and file SSAs.

IBM FlashSystem V840

The FlashSystem family consists of the older 700 series and the newer 800 series SSA, and all models only support block protocols. The FlashSystem 840 has more connection options with QDR InfiniBand in addition to FC, FCoE and iSCSI connections. Alternatively, the FlashSystem V840, which adds in IBM’s SVC, can scale to 320TB due to the internal FlashSystem Control Enclosure, and it inherits the broad SVC compatibility matrix but only supports FC protocols. Similarly, the V840 has richer data services in terms of QoS, compression, thin provisioning, snapshots and replication features, whereas the 840 lacks these. The 840 is designed to be a simple performance-oriented point product, whereas the V840 is for more general-purpose deployments. Both, however, lack deduplication. Additional features are provided by IBM’s SVC product, FlashSystem Control Enclosure, which has a simple-to-learn-and-operate administrative GUI.

The addition of control enclosures with the V840 increases the number of separate products and components that come with the SVC layer, which reduces performance when using real-time compression compared with the 840. The SVC control enclosure layer increases product complexity, as it has separate software levels that need to be maintained and tested across IBM storage product families. In V840-based configurations, customers need to administer and operate two devices, the control enclosure and the storage enclosure, which also increases system complexity and complicates product upgrades and problem determination.

Kaminario K2

The K2 SSA is in its fifth generation, a testament to the product’s resilient and flexible architectural approach, belying its original heritage as a DRAM appliance. Considering its genesis, Kaminario prides itself on the K2’s strong performance, which has been publicly scrutinized and verified via the Storage Performance Council’s SPC-1 and SPC-2 benchmarks. In its latest generation, Kaminario has also added scale-up to its scale-out architecture, along with a more comprehensive suite of data management services. The array supports only FC and iSCSI block protocols and cMLC SATA SSDs, along with storage efficiency features that enable 30TB per rack unit and allow the system to scale up and then out.

Its latest generation now features in-line compression, selectable in-line deduplication and thin provisioning. Pricing follows a customer-oriented approach in which all options are included in the base array price, and Kaminario guarantees an effective capacity price averaging $2 per GB, lending credence to its storage efficiency claims. The product is reinforced with a seven-year unconditional warranty on SSD endurance, which allays customers’ concerns about SSD reliability. However, replication is not yet available, and QoS performance features are limited, although the system does have sufficient reporting capabilities. Resiliency features and a scale-out design with very good nondisruptive software and system firmware upgrades underscore flexibility and scalability as fundamental requirements of the design.

NetApp EF Series

NetApp’s EF Series is an all-SSD version of the E-Series, a product line that NetApp inherited as part of the Engenio acquisition. There are two models in the EF product line: the EF540, which was launched in early 2013, and the EF550, which was launched in late 2013 with an SSD hardware refresh. The EF Series runs the SANtricity operating system and has its own management GUI. The product supports FC, iSCSI and InfiniBand. NetApp has made changes to the software to monitor SSD wear life and recently expanded the scalable raw capacity to 192TB. The EF Series does not support any data reduction features. Existing NetApp OnCommand suite customers cite the need for improvement in the SANtricity management console. Given the EF Series’ focus on high-bandwidth workloads, InfiniBand was initially a prominent interface, but FC implementations have become predominant as end-user acceptance of the product has broadened. The long-term viability of the EF Series as a product line will remain in question, with NetApp’s all-new FlashRay set for launch toward the end of the year with potentially better data services and manageability.

Nimbus Data Gemini

Nimbus has developed a purpose-built unified array from the ground up, and it features the broadest protocol support in the industry. The Gemini array is versatile due to its Halo OS, which is the epicenter of its data services and multiprotocol support, most notably including InfiniBand. The Halo operating system also offers a wide suite of data services. However, client feedback on the full depth of these capabilities has been mixed and requires further diligence, especially for the flexible and selectable data reduction features, which we recommend verifying in a proof of concept.

These arrays are cost-effective, given the use of advanced cMLC NAND designed directly into hot-swappable modules for the array. The use of cMLC NAND and a parallel memory architecture delivers resiliency across a wide spectrum of capacities, ranging from 3TB to 48TB raw. This approach not only provides considerable density in the 2U enclosures, but also consumes power efficiently, making it attractive from an opex perspective. The recently available scale-out Gemini X-series has redundant directors that enable 10 nodes, reaching 960TB raw capacity. While the user interface and manageability are adequate, QoS features will need to evolve for this scale-out architecture.

Pure Storage FA Series

Pure Storage has focused on creating a purpose-built SSD-optimized storage array and controller software, which uses low-cost/low-capacity PC SSD cMLC media. Pure Storage is on its third-generation product, built on a foundation of granular block in-line deduplication and compression at a 512-byte level that allows compelling data reduction in workloads with various block sizes. Pure Storage has a good reputation for reliability, ease of use and extensive storage data services that now feature asynchronous replication. Overall, the arrays have relatively low raw capacities. The second-generation arrays, such as the 35TB FA-420, and the current, third-generation, more competitive 70TB FA-450 are based on Dell hardware. Only FC and iSCSI block protocols are supported with 16 Gbps FC on the FA-450 and the slower 8 Gbps FC on the FA-420. Traditional QoS features are not available, but consistent performance is provided by internal timing that skews I/Os toward the better-performing SSD. All data services are included in the base price of the array, plus product satisfaction guarantees are provided and controller investment protection is offered via the Forever Flash program if support and maintenance contracts are maintained.

Skyera skyHawk

Skyera designs its own controllers and software, and has developed its own wear-leveling algorithms to enhance and improve cMLC NAND reliability. The skyHawk is a relatively new entry-level iSCSI and NFS SSA with extensive data services, but only a single power supply and controller. The dense packaging and exploitation of the most advanced consumer MLC NAND technology enable the product to be the most aggressively priced on a usable-capacity basis, with prices starting at under $3 per GB of formatted capacity before data reduction. In-line compression, which is performed in hardware rather than at the system level, further increases usable capacity. Array reporting is oriented toward internal metrics, such as logical unit number (LUN) input/output operations per second (IOPS), bandwidth and latency. The skyHawk also offers sophisticated array-partitioning QoS features.

Firmware upgrades require a reboot and, therefore, an outage. Overall, from a hardware and single-point-of-failure perspective, this is not an enterprise data center array unless two or more Skyera arrays are mirrored or striped using a higher abstraction layer. Skyera is working with partners such as DataCore Software to provide integration certification to mitigate some of the platform compatibility and high-availability challenges. A dual-controller skyEagle array is in development, featuring 326TB of raw capacity in 1U, but given the product delays with skyHawk, the already-announced skyEagle will also be subject to delays.

SolidFire SF Series

SolidFire sells scale-out solid-state arrays with a primary focus on service providers and large-enterprise customers. By leveraging external cMLC-based PC SSDs with in-line data reduction features, SolidFire is able to deliver competitive price/performance in a scale-out architecture. SolidFire’s product is differentiated in the marketplace through QoS, where applications are delivered with guaranteed IOPS. SolidFire’s QoS feature provides the ability to set minimum, maximum and burst performance settings, which enables enterprises and service providers to offer differentiated services. The product has close integration with common hypervisors, and the REST-based API support is commendable for a young company. The vendor offers broad support for cloud management platforms such as OpenStack, CloudStack and VMware, as well as support for public cloud APIs such as S3 and Swift. The product relies on a distributed replication algorithm rather than redundant array of independent disks (RAID) for data protection, which reduces rebuild times and creates a self-healing infrastructure.
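Minimum/maximum/burst QoS semantics of the kind described above can be modeled in a few lines. The sketch below is a hypothetical single-volume toy model, not SolidFire’s API; the minimum guarantee only matters under multi-volume contention, which this sketch omits. Headroom left under the maximum accrues credits that later pay for bursting above it:

```python
def qos_grant(demand_iops: int, max_iops: int, burst_iops: int, credits: int):
    """One scheduling interval of a max/burst IOPS policy (toy model).

    Returns (granted IOPS, remaining burst credits). Unused headroom
    below max_iops accrues credits (capped at burst_iops); bursting
    above max_iops spends them.
    """
    ceiling = burst_iops if credits > 0 else max_iops
    granted = min(demand_iops, ceiling)
    if granted > max_iops:
        credits = max(credits - (granted - max_iops), 0)  # spend while bursting
    else:
        credits = min(credits + (max_iops - granted), burst_iops)  # accrue
    return granted, credits

# A quiet interval banks credits; a later spike can then burst above the cap.
granted, credits = qos_grant(200, max_iops=1000, burst_iops=3000, credits=0)
granted, credits = qos_grant(2000, max_iops=1000, burst_iops=3000, credits=credits)
```

Settings like these are what allow a service provider to sell differentiated tiers from the same array: each volume gets a floor, a sustained ceiling and a bounded ability to absorb spikes.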

Focus on enterprise private clouds and integration with traditional applications is fairly nascent and needs further development. Most current deployments are iSCSI-based, with FC support being only recently introduced.

Violin Memory 6000 Series All Flash Array

Violin has a unique architecture based on its NAND chip-level expertise, used in its own Peripheral Component Interconnect Express (PCIe)-based memory module configurations, which are organized and aggregated to enable a relatively dense array with strong performance and guaranteed sub-500-μs latency. Violin is one of the most cost-effective vendors on a raw-$-per-GB basis due to its use of cMLC technology at advanced process geometries, allowing raw capacity of up to 70TB in 3U and scaling up to 280TB. Violin has strong block and file support and good ecosystem interoperability.

Violin has only recently introduced (in its June Concerto 7000 announcement) a more cohesive suite of data management features that can be upgraded from an existing 6000 series with a service disruption. The Concerto enhancements provide greater business continuity via remote asynchronous and synchronous replication along with mirroring and clones. Although Violin did introduce thin provisioning and snapshots, data reduction was only recently introduced on August 19 (and is not considered in the ratings). In June 2014, Violin also announced its Windows Flash Array, featuring tight integration with Microsoft Windows protocols and services that include data reduction, but this was not included in the rating. Pricing for data services is not fully inclusive and will require additional charges for certain features such as mirroring and replication.


HDD-based general-purpose storage arrays have stagnated in performance compared with the order-of-magnitude performance improvements of CPUs within servers. SSAs, which use SSDs instead of HDDs, have addressed this imbalance by improving storage IOPS and latency by one, and sometimes two, orders of magnitude. While SSDs themselves are not new and have been available for decades, SSAs are new external storage offerings that have been specifically designed or marketed to exploit the reduced cost and improved performance of NAND SSDs. Latency, or response time, is customers' main concern, but SSAs also improve bandwidth and throughput. The reduced latency has also enabled new technologies such as in-line primary data reduction (deduplication, compression or both), features that were previously constrained by the mechanical limitations of HDDs. The reduced environmental requirements of SSAs, such as power and cooling, are incidental but important advantages over general-purpose arrays and other HDD-based storage systems.

Product/Service Class Definition

The following description and criteria classify solid-state array architectures by their externally visible characteristics rather than vendor claims or other nonproduct criteria that may be influenced by fads in the solid-state array storage market.

Solid-State Array

The SSA category is a new subcategory of the broader external controller-based (ECB) storage market. SSAs are scalable, dedicated solutions based solely on solid-state semiconductor technology for data storage, and cannot be configured with HDD technology at any time. The SSA category is distinct from SSD-only racks within ECB storage arrays. An SSA must be a stand-alone product denoted with a specific name and model number, which typically (but not always) includes an operating system and data management software optimized for solid-state technology. To be considered a solid-state array, the storage software management layer should enable most, if not all, of the following benefits: high availability, enhanced capacity efficiency (perhaps through thin provisioning, compression or data deduplication), data management, automated tiering within SSD technologies and, perhaps, other advanced software capabilities, such as application- and OS-specific acceleration based on the unique workload requirements of the data type being processed.

Scale-Up Architectures

  • Front-end connectivity, internal bandwidth and back-end capacity scale independently of each other.
  • Logical volumes, files or objects are fragmented and spread across user-defined collections such as solid-state pools, groups or RAID sets.
  • Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane and/or interconnect constraints.

Scale-Out Architectures

  • Capacity, performance, throughput and connectivity scale with the number of nodes in the system.
  • Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failures and improve performance.
  • Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.

Unified Architectures

  • These can simultaneously support one or more block, file and/or object protocols, such as FC, iSCSI, NFS, SMB (aka CIFS), FCoE and InfiniBand.
  • Both gateway and integrated data flow implementations are included.
  • These can be implemented as scale-up or scale-out arrays.

Gateway implementations provision block storage to gateways that implement NAS and object storage protocols. Gateway-style implementations run separate NAS and SAN microcode loads on either virtualized or physical servers and, consequently, have different thin provisioning, auto-tiering, snapshot and remote-copy features that are not interoperable. By contrast, integrated or unified storage implementations use the same primitives independent of protocol, which enables them to create snapshots that span both SAN and NAS storage and to dynamically allocate server cycles, bandwidth and cache based on QoS algorithms and/or policies.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture’s strengths and weaknesses and an understanding of workload requirements (see Table 1).

Table 1. Solid-State Array Architecture

Scale-Up

Strengths:
  • Mature architectures: reliable and cost-competitive
  • Large ecosystems
  • Host connections and back-end capacity can be upgraded independently
  • May offer shorter recovery point objectives (RPOs) over asynchronous distances

Weaknesses:
  • Performance and bandwidth do not scale with capacity
  • Limited compute power can have a high impact
  • Electronics failures and microcode updates may be high-impact events

Scale-Out

Strengths:
  • IOPS and GB/sec scale with capacity
  • Nondisruptive load balancing
  • Greater fault tolerance than scale-up architectures
  • Use of commodity components

Weaknesses:
  • High electronics costs relative to back-end storage costs

Unified

Strengths:
  • Maximal deployment flexibility
  • Comprehensive storage efficiency features

Weaknesses:
  • Performance may vary by protocol (block versus file)

Source: Gartner (August 2014)

Critical Capabilities Definition


Ecosystem

This refers to the ability of the platform to support multiple protocols, operating systems, third-party ISV applications, APIs and multivendor hypervisors.


Manageability

This refers to the automation, management, monitoring and reporting tools and programs supported by the platform.

These tools and programs can include single-pane management consoles, monitoring and reporting tools designed to help support personnel to seamlessly manage systems, and monitor system usage and efficiencies. They can also be used to anticipate and correct system alarms and fault conditions before or soon after they occur.

Multitenancy and Security

This refers to the ability of a storage system to support a diverse variety of workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.


Performance

This is the collective term often used to describe IOPS, bandwidth (MB/second) and response times (milliseconds per I/O) that are visible to attached servers.


RAS

Reliability, availability and serviceability (RAS) refers to a design philosophy that consistently delivers high availability by building systems with reliable components and “de-rating” components to increase their mean times between failures (MTBFs).

Systems are designed to tolerate marginal components, hardware and microcode designs that minimize the number of critical failure modes in the system, serviceability features that enable nondisruptive microcode updates, diagnostics that minimize human errors when troubleshooting the system, and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault isolation techniques, built-in protection against data corruption, and other techniques (such as snapshots and replication; see Note 1) to meet customers’ recovery point objectives (RPO) and recovery time objectives (RTO).
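The MTBF arithmetic behind this design philosophy is straightforward: a component's steady-state availability is MTBF / (MTBF + MTTR), and a chain with no redundancy is up only when every component is up. A minimal sketch, using made-up component figures (the MTBF/MTTR values below are illustrative, not measurements of any product):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of one component: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def serial_availability(components):
    """A non-redundant chain is up only when every component is up,
    so availabilities multiply."""
    a = 1.0
    for mtbf, mttr in components:
        a *= availability(mtbf, mttr)
    return a

# Illustrative (made-up) figures: controller, backplane, SSD shelf,
# as (MTBF hours, MTTR hours) pairs.
parts = [(200_000, 4), (500_000, 8), (150_000, 4)]
print(round(serial_availability(parts), 6))  # about 0.999937
```

This is why de-rating (raising MTBF) and serviceability features (shrinking MTTR) both matter: each factor in the product moves the system closer to its availability target, and redundancy changes the multiplication into a far more forgiving formula.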


Scalability

This refers to the ability of the storage system to grow not just capacity, but performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs (see Note 2).

Storage Efficiency

This refers to the ability of the platform to support storage efficiency technologies, such as compression, deduplication and thin provisioning, to improve utilization rates while reducing storage acquisition and ownership costs.
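The capacity arithmetic these features enable can be sketched as follows; the function and the ratios are illustrative assumptions, since real data reduction is workload-dependent rather than a vendor guarantee:

```python
def effective_capacity(raw_tb, dedup_ratio=1.0, compression_ratio=1.0,
                       thin_overcommit=1.0):
    """Effective capacity = raw capacity scaled by each efficiency ratio.

    The ratios are workload-dependent assumptions (e.g., 3.0 means 3:1),
    not guarantees; thin_overcommit models provisioning beyond physical
    capacity on the expectation that volumes stay partially filled.
    """
    return raw_tb * dedup_ratio * compression_ratio * thin_overcommit

# 20 TB raw with an assumed 3:1 dedup and 1.5:1 compression:
print(effective_capacity(20, dedup_ratio=3.0, compression_ratio=1.5))  # 90.0
```

This multiplication is why in-line data reduction dominates SSA price/performance comparisons: a 4.5:1 combined ratio makes raw flash cost per usable gigabyte compete with far cheaper media.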

Use Cases


Overall

This is an average of the following use cases. Please refer to Table 2 for the weightings of the use cases.

Online Transaction Processing

This use case is closely affiliated with business-critical applications, such as database management systems (DBMSs).

DBMSs require 24/7 availability and subsecond transaction response times — hence, the greatest emphasis is on performance and RAS features. Manageability and storage efficiency are important because they enable the storage system to scale with data growth while staying within budget constraints.

Server Virtualization

This use case encompasses business-critical applications, back-office and batch workloads, and development.

The need to deliver low I/O response times to large numbers of virtual machines or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, heavily weights performance and storage efficiency, followed closely by RAS.

High-Performance Computing

High-performance computing (HPC) clusters can be made of large numbers of servers and storage arrays, which together deliver high compute densities and aggregated throughput.

Commercial HPC environments are characterized by the need for high throughput and parallel read-and-write access to large volumes of data. Performance, scalability and RAS are important considerations for this use case.


Analytics

This use case applies to all analytic applications that are packaged or provide business intelligence (BI) capabilities for a particular domain or business problem.

It does not apply only to storage consumed by big data applications using map/reduce technologies (see the definition in “Hype Cycle for Advanced Analytics and Data Science, 2014”).

Virtual Desktop Infrastructure

Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server.

VDI is a variation on the client/server computing model, sometimes referred to as server-based computing. Performance and storage efficiency (in-line data reduction) features are heavily weighted for this use case, for which solid-state arrays are emerging as a popular alternative.

Inclusion Criteria

  • It must be a self-contained, SSD-only system that has a dedicated model name and model number.
  • The SSD-only system must be exactly that: it must be sold with 100% SSDs from the outset and cannot be reconfigured, expanded or upgraded at any future point, whether through expansion trays, a vendor-specific upgrade, customer customization or a product exclusion process, into a hybrid or general-purpose SSD and HDD storage array.
  • The vendor must sell its product as stand-alone product, without the requirement to bundle it with other vendors’ storage products in order for the product to be implemented in production.
  • Vendors must provide at least five references that Gartner can successfully interview. At least one reference must come from each geographic market (Asia/Pacific, EMEA and North America), or from the two regions in which the vendor has a presence.
  • The vendor must provide an enterprise-class support and maintenance service, offering 24/7 customer support (including phone support). This can be provided via other service organizations or channel partners.
  • The company must have established notable market presence, as demonstrated by the amount of terabytes sold, the number of clients or significant revenue.
  • The product and a service capability must be available in at least two of the following three markets (Asia/Pacific, EMEA and North America) by either direct or channel sales.

The solid-state arrays evaluated in this research include scale-up, scale-out and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, forecast growth rates and asset management strategies.

Although this SSA critical capabilities research represents vendors whose dedicated systems meet our inclusion criteria, ultimately, it is the application workload that governs which solutions should be considered, regardless of any criteria. The following vendors and products were considered for this research but did not meet the inclusion criteria, despite offering SSD-only configuration options to existing products. The following vendors may still warrant investigation based on application workload needs for their SSD-only offerings: American Megatrends, Dell Compellent, EMC VMAX, Fusion-io ION (recently acquired by SanDisk), Fujitsu Eternus DX200F, Hitachi Unified Storage VM, IBM DS8000, NetApp FAS, Oracle ZFS and Tegile T-series.

Table 2. Weighting for Critical Capabilities in Use Cases
Critical Capabilities Overall Online Transaction Processing Server Virtualization High-Performance Computing Analytics Virtual Desktop Infrastructure
Performance 29.0% 30.0% 20.0% 42.0% 25.0% 30.0%
Storage Efficiency 16.0% 15.0% 20.0% 5.0% 15.0% 25.0%
RAS 17.0% 20.0% 15.0% 15.0% 20.0% 15.0%
Scalability 11.0% 8.0% 10.0% 15.0% 18.0% 4.0%
Ecosystem 7.0% 7.0% 10.0% 3.0% 5.0% 8.0%
Multitenancy and Security 6.0% 5.0% 5.0% 10.0% 6.0% 5.0%
Manageability 14.0% 15.0% 20.0% 10.0% 11.0% 13.0%
Total 100.0% 100.0% 100.0% 100.0% 100.0% 100.0%
As of August 2014

Source: Gartner (August 2014)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighed in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

Each product or service that meets our inclusion criteria has been evaluated on several critical capabilities on a scale from 1.0 (lowest ranking) to 5.0 (highest ranking). Ratings are listed in Table 3, below.

Table 3. Product/Service Rating on Critical Capabilities
Product or Service Ratings HP 3PAR StoreServ 7450 Violin Memory 6000 Series All Flash Array Huawei OceanStor Series NetApp EF Series EMC VNX-F Pure Storage FA Series Nimbus Data Gemini IBM FlashSystem V840 Cisco UCS Invicta Series Kaminario K2 SolidFire SF Series Skyera skyHawk EMC XtremIO
Performance 3.1 3.7 3.1 3.2 3.2 3.3 3.4 3.5 3.1 3.7 3.3 3.3 3.6
Storage Efficiency 3.2 2.8 1.9 2.1 2.5 4.2 3.3 3.2 2.8 3.4 3.7 2.9 3.4
RAS 3.5 3.2 3.1 3.1 3.2 3.4 3.4 3.4 2.9 3.4 3.4 2.4 3.1
Scalability 3.5 3.4 2.8 3.1 3.2 2.8 3.2 3.1 3.0 3.4 4.0 3.2 3.6
Ecosystem 3.8 3.1 2.7 2.6 3.9 3.2 3.0 3.3 2.6 3.0 2.9 2.5 3.0
Multitenancy and Security 3.6 2.9 2.6 3.0 3.2 3.3 3.2 3.2 2.6 2.9 3.5 2.6 3.0
Manageability 3.2 2.8 2.7 3.1 3.0 3.4 2.9 2.9 2.4 2.9 3.2 2.4 3.0
As of August 2014

Source: Gartner (August 2014)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.
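This weighting arithmetic can be reproduced directly from Tables 2 and 3: a use-case score is the sum of each capability rating multiplied by its use-case weight. For example, using the Overall weights and the HP 3PAR StoreServ 7450 ratings:

```python
# Overall weights from Table 2 and HP 3PAR StoreServ 7450 ratings from Table 3.
overall_weights = {
    "Performance": 0.29, "Storage Efficiency": 0.16, "RAS": 0.17,
    "Scalability": 0.11, "Ecosystem": 0.07,
    "Multitenancy and Security": 0.06, "Manageability": 0.14,
}
hp_3par_ratings = {
    "Performance": 3.1, "Storage Efficiency": 3.2, "RAS": 3.5,
    "Scalability": 3.5, "Ecosystem": 3.8,
    "Multitenancy and Security": 3.6, "Manageability": 3.2,
}

def use_case_score(weights, ratings):
    """Weighted sum of capability ratings, as used to build Table 4."""
    return sum(weight * ratings[cap] for cap, weight in weights.items())

print(round(use_case_score(overall_weights, hp_3par_ratings), 2))  # 3.32
```

The result matches the HP 3PAR Overall score in Table 4; the same dot product with any other weight column reproduces the remaining use-case scores.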

Table 4. Product Score in Use Cases
Use Cases HP 3PAR StoreServ 7450 Violin Memory 6000 Series All Flash Array Huawei OceanStor Series NetApp EF Series EMC VNX-F Pure Storage FA Series Nimbus Data Gemini IBM FlashSystem V840 Cisco UCS Invicta Series Kaminario K2 SolidFire SF Series Skyera skyHawk EMC XtremIO
Overall 3.32 3.22 2.76 2.93 3.11 3.41 3.25 3.28 2.84 3.36 3.43 2.85 3.32
Online Transaction Processing 3.32 3.22 2.78 2.94 3.11 3.42 3.26 3.28 2.84 3.36 3.40 2.83 3.31
Server Virtualization 3.34 3.14 2.69 2.87 3.09 3.46 3.21 3.23 2.79 3.30 3.42 2.78 3.28
High-Performance Computing 3.31 3.35 2.89 3.07 3.17 3.29 3.28 3.31 2.91 3.41 3.44 2.95 3.38
Analytics 3.34 3.23 2.77 2.94 3.11 3.37 3.26 3.27 2.87 3.37 3.49 2.86 3.34
Virtual Desktop Infrastructure 3.30 3.18 2.68 2.84 3.06 3.53 3.26 3.29 2.84 3.37 3.41 2.85 3.32
As of August 2014

Source: Gartner (August 2014)

Gartner Magic Quadrant for Integrated Systems – 16 June 2014

Magic Quadrant for Integrated Systems

Figure 1. Magic Quadrant for Integrated Systems

16 June 2014 ID:G00252466

Analyst(s): Andrew Butler, George J. Weiss, Philip Dawson


The integrated system market is growing at 50% or more per year, creating an unusual mix of major vendors and startups to consider. This new Magic Quadrant will aid vendor selection in this dynamic sector.


Market Definition/Description

This document was revised on 27 June 2014. The document you are viewing is the corrected version. For more information, see the Corrections page on

Integrated systems are combinations of server, storage and network infrastructure, sold with management software that facilitates the provisioning and management of the combined unit. The market for integrated systems can be subdivided into broad categories, some of which overlap. Gartner categorizes integrated systems into these classes (among others):

  • Integrated stack systems (ISS) — Server, storage and network hardware integrated with application software to provide appliance or appliancelike functionality. Examples include Oracle Exadata Database Machine, IBM PureApplication System and Teradata.
  • Integrated infrastructure systems (IIS) — Server, storage and network hardware integrated to provide shared compute infrastructure. Examples include VCE Vblock, HP ConvergedSystem and IBM PureFlex System.
  • Integrated reference architectures — Products in which a predefined, presized set of components are designated as options for an integrated system whereby the user and/or channel can make configuration choices between the predefined options. These may be based on an IIS or ISS (with additional software, or services to facilitate easier deployment). Other forms of reference architecture, such as EMC VSPEX, allow vendors to group separate server, storage and network elements from a menu of eligible options to create an integrated system experience. Most reference architectures are, therefore, based on a partnership between hardware and software vendors, or between multiple hardware vendors. However, reference architectures that support a variety of hardware ingredients are more difficult to assess versus packaged integrated systems, which is why they are not evaluated by this research.
  • Fabric-based computing (FBC) — A form of integrated system in which the overall platform is aggregated from separate (or disaggregated) building-block modules connected over a fabric or switched backplane. Unlike the majority of IIS and ISS solutions, which group and package existing technology elements in a fabric-enabled environment, the technology ingredients of an FBC solution will be designed solely around the fabric implementation model. So all FBCs are an example of either an IIS or an ISS; but most IIS and ISS solutions available today would not yet be eligible to be counted as an FBC. Examples include SimpliVity, Nutanix and HP Moonshot System.

Added market complexity is created because integrated systems of different categories are frequently evaluated against each other in deal situations. For instance, because IIS solutions are generic multipurpose systems that can run a variety of workloads, it is common for one IIS to be compared with another. But users who want to deploy a specific workload might compare an ISS solution, like Oracle Exadata Database Machine or IBM PureApplication System (both of which have the workload embedded), with a generic IIS system that is also capable of running the workload, or with an IIS platform that has an applicable reference architecture. However, it would be rare to see one ISS competing with another ISS, because the choice of stacks and workload takes priority over the choice of platform. So if Oracle Database Management System (DBMS) serving is the required workload, the only viable ISS solution would be an Oracle Engineered System.

It is because these different types of systems are evaluated against each other that this Magic Quadrant assesses integrated systems as integrated infrastructure systems or the infrastructure aspects of integrated stack systems. It assesses the hardware (server, network, storage), operating system and virtualization software alongside any associated management tools and high-availability (HA) solutions. It considers hardware depth and scale, software stack management breadth and depth, and support of the infrastructure, as well as flexibility in the use of reference architectures. It does not assess any software stack, application or platform components individually, such as middleware, DBMS software and cluster software in the application or DBMS tiers.

Most integrated systems are based on blade server technology, with closely coupled storage area network (SAN) and network-attached storage (NAS), which enable boot-from-disk capability for all physical and virtual nodes; thus, the system becomes stateless. Blades are not a prerequisite, however, and some vendors will promote rack-based solutions as well. The majority of integrated systems are the effective packaging of server, storage and networking components that are sold as separate products in their own right. But we are seeing the emergence of true "fabric-based computers" that merge the three elements more seamlessly.

The great majority of integrated systems are based on Intel or AMD x86 technology, but there is some support for reduced instruction set computer (RISC) variants like Power and SPARC, and the emerging market for ARM and Intel Atom processors will have applicability for some integrated system use cases.

Magic Quadrant

Source: Gartner (June 2014)

Vendor Strengths and Cautions


While Cisco’s Unified Computing System (UCS) blade technology integrates compute and switching capability, it is not considered to be a full integrated system, as it currently includes no integrated storage. The FlexPod solution — jointly promoted by Cisco and NetApp — has evolved from its reference architecture beginnings — when FlexPod was really only a certified design — to its current status as a valid integrated system that can be deployed through a variety of Cisco and NetApp partners. FlexPod is the result of many years of joint development between Cisco and NetApp of networking and secure multitenancy. For FlexPod, the two vendors have developed a new support and go-to-market model, resulting in a cooperative support program and a channel-centric go-to-market approach.

Cisco’s UCS is common to both VCE and FlexPod, but the underpinning of each FlexPod is always Cisco UCS, NetApp Fabric-Attached Storage (FAS) and Cisco Nexus switches. Introduced in November 2010, FlexPod now has more than 3,700 customers. FlexPod is delivered almost exclusively through channel partners: more than 1,000 partners are currently certified to sell FlexPod, and over 90 FlexPod Premium Partners have achieved a higher level of FlexPod certification.

Market acceptance for FlexPod has been strong, and service providers make up a significant proportion of the base. However, Cisco and NetApp also collaborate with vendors such as VMware, Microsoft, Oracle, SAP, Citrix and Red Hat to create more-focused FlexPod solutions. Microsoft and Citrix have also agreed to participate in the FlexPod Cooperative Support Program, along with Cisco, NetApp and VMware. The FlexPod portfolio has also expanded, with FlexPod Express for small enterprises and FlexPod Select for specialized workloads like Hadoop. The management tooling is not as consistent or holistic as that of many other integrated systems, and many users work with vendors like CA Technologies to create a more complete management experience. Recognizing the need for better management symmetry, Cisco has recently started supporting every FlexPod solution (by default) through its UCS Director (integrated manager) product that covers compute, network and storage setup and management, and a growing number of FlexPod users are now deploying with Cisco UCS Director.


  • FlexPod has a single architecture (the integration of unified compute, network switching and storage) that scales from a small enterprise configuration to a large, secure, multitenant cloud infrastructure without changing architecture or technology platforms.
  • FlexPod has benefited from the fast ramp-up and track record of Cisco UCS, aided by easy access to NetApp’s large customer base.
  • NetApp has good presence in smaller data centers, and works with Cisco to price and position FlexPod for a wide range of use cases, from small or midsize businesses (SMBs) to large enterprises.
  • FlexPod is validated with multiple hypervisors (from Citrix, Microsoft and VMware) and bare metal.
  • Cisco and NetApp are developing use cases for FlexPod that cover a wide range of private cloud and application-specific scenarios, including desktop virtualization (Citrix XenDesktop and VMware), enterprise applications (including Oracle, SAP and Microsoft) and big data (Hadoop).
  • The two vendors have collaborated to create a Cooperative Support Program (which is one factor that qualifies FlexPod as a valid entry in this Magic Quadrant).


  • Because either vendor can take the lead in sales situations, users must validate the rules for account management and responsibility.
  • Users who favor the more holistic single-vendor solution from vendors such as Oracle, IBM and HP will be more likely to question a partner-dependent business model.
  • There is a lack of a uniform management software experience (although certification of Cisco UCS Director is beginning to fill this void).
  • Potential future tensions may be created by Cisco’s entry into the storage market with the acquisition of Whiptail.


Dell made an early foray into integrated systems with a conceptual "brick based" system that pioneered modular systems but did not make it to full production. The vendor also forged high-end OEM partnerships with Egenera and Unisys to help fill the need for solution integration and infrastructure convergence. Dell has since developed a strong system pedigree through the storage acquisitions of Compellent Technologies and EqualLogic, as well as the acquisition of Force10 Networks for networking. Dell’s main integrated infrastructure focus is its PowerEdge VRTX and Active System offerings. Active System is targeted at virtualization and cloud infrastructure, packaged for various virtual machine (VM) configurations. Dell supports VMware and Microsoft Hyper-V hypervisors and the associated software for recovery and migration. Multiple management software, infrastructure automation and storage vendor acquisitions, such as Quest Software and Gale Technologies, are enabling Dell to build a management portfolio that addresses a wide range of needs, including common management across applications for both physical and virtual infrastructure. This remains a work in progress, with many tools and interfaces yet to be harmonized.

Dell PowerEdge VRTX offers integrated server, storage and networking in a compact chassis optimized for office environments. This shared infrastructure platform offers performance and capacity with office-level acoustics in a single, compact tower chassis. It is aimed at SMBs, as well as remote and branch offices of large enterprises. Full-featured unified system management with Chassis Management Controller (CMC) and GeoView takes much of the time and effort out of system administration and control, making it feasible to deploy, monitor, update and maintain the system through a unified console that covers servers, storage and networking. VRTX system management is integrated with major third-party management tools, protecting installed investments and allowing administrators to use familiar tools. For example, Dell developed OpenManage Cluster Configurator for Windows Server 2012 and Microsoft Hyper-V clusters based on VRTX. Likewise, clients using third-party consoles (such as Microsoft, VMware or Oracle) can link into Dell’s embedded management for deployment and automation of infrastructure resources.

Although Dell is now a private company, we do not believe that this change in ownership has a measurable impact on its enterprise server business or its inclusion in this Magic Quadrant.


  • The vendor has strong individual product sets, especially PowerEdge VRTX and Active System.
  • There is a good cloud and virtualization platform around Active System, the result of Dell’s acquisition and integration of Gale Technologies.
  • Strong SMB and remote location play is provided by PowerEdge VRTX.
  • There is good integration of EqualLogic blade array storage for workloads such as virtual desktop infrastructure (VDI) and analytics.


  • A fragmented uber-strategy makes the integrated system portfolio appear disjointed, with no common management.
  • There is limited enterprise credibility for supporting software stacks on top of integrated infrastructure.
  • Despite numerous acquisitions, Dell still lacks awareness in management software, compared with many of its peers.
  • Despite a strong range of server and storage options, Dell’s breadth of options for networking and switching is more limited.


Fujitsu has a long history in integrated systems in its core Central European and Japanese markets, and some measured success in the U.K., Spain, Italy and Finland. It is less established in the Americas and across the rest of Europe and Asia. Even with this geographic challenge, Fujitsu is able to develop and differentiate, as well as to partner with other technology, software and solution vendors. This broad partnering capability, while a core strength for solutions, also creates the impression of a fragmented portfolio that hedges all bets rather than delivering infrastructure focus. This is highlighted by the fragmented management tooling across Fujitsu’s offerings. Fujitsu’s efforts and delivery models fall into three main categories:

Historical: These offerings center on FlexFrame and PAN Manager. The FlexFrame Orchestrator for SAP couples integrated infrastructure (server, storage and network) with an SAP workload and an Oracle DBMS. FlexFrame was a visionary integrated stack built on integrated infrastructure and a proven, complete ERP solution supporting Oracle and SAP components. FlexFrame has since been overtaken in the market by more competitive mainstream IIS and ISS offerings, and indeed by Fujitsu’s own broadening of reference architectures. The Fujitsu PAN Manager for Primergy is a niche offering built on Fujitsu’s long-established relationship with Egenera. It offers a technically interesting capability to manage and control blades and their interconnects to storage.

Appliances: Fujitsu sells multiple appliances that address specific workloads, including Cluster-in-a-box, SQL Server Data Warehouse Appliance and Check Point Integrated Appliance. Further announcements are planned later in 2014. While these are strong offerings, the volume of these systems is low, in keeping with the entire Fujitsu Integrated System portfolio. However, the value and fit for each system suit the local core market needs for Fujitsu.

Reference Architectures: Fujitsu has several reference architectures, including DynamicFabric, RapidStructure SharePoint and vShape. These are aligned to the individual offerings integrated by Fujitsu, but beyond the scope of this integrated system assessment.


Strengths

  • Fujitsu sells a broad combination of integrated systems, including appliances and reference architectures.
  • The vendor demonstrated a good commitment to local core markets — predominantly German-speaking countries, Western Europe and Japan.
  • Fujitsu integrated systems are well-engineered and proven — some over many years.


Cautions

  • To date, Fujitsu has achieved limited success penetrating global markets outside its core markets, especially in North America.
  • Lack of a consistent brand image makes the Fujitsu portfolio appear complex and harder to differentiate.
  • As a company, Fujitsu exhibits a strong product engineering ethos, with limited product marketing flair.

Hitachi Data Systems

Hitachi Data Systems is the unit within the greater Hitachi organization that is taking a new and more aggressive approach to the system market. The parent company is best known for systems, software and services, particularly those customized for transportation, energy, construction, manufacturing and medical systems, and mostly in Japan. The major new thrust is toward fully verticalized offerings across all industries, including the management infrastructure for an integrated system solution, rather than discrete products. Hitachi has not been known as a major software vendor globally, but it has developed infrastructure software as part of its integrated system effort under the Unified Compute Platform (UCP) branding since 2010. Hitachi is now in the midst of tying together its hardware, software and services into a unified strategy under UCP. The initiative includes a new marketing campaign, with more consistent branding, aimed at higher-end core, mission-critical applications within enterprise accounts.

Hitachi delivers preconfigured and integrated systems with UCP Director automation and orchestration software tightly integrated to VMware (and other hypervisors) as a single support model. Hitachi also offers preconfigured models that include its own servers and storage, with the addition of reference architectures using alternate servers, such as Cisco (with Hitachi storage). Hitachi is able to deliver the software stacks of VMware, Microsoft, Citrix, SAP and Oracle under one common toolset; this now extends to SAP Hana as well. UCP Director’s scope has been broadened to integrate with VMware and Microsoft, as well as to introduce support for the Cisco UCS blade technology, with unified support and validated configurations. SAP Hana integrated platforms have been validated by SAP and delivered in production deployments. Finally, on the hardware front, Hitachi’s mainframe experience has informed its server hardware design. It delivers hardware logical partitions to complement hypervisors and achieve secure multitenancy, symmetric multiprocessing (SMP) blade scaling and failover similar to the isolation, security and transaction failover capabilities of Unix systems, but on x86 infrastructure. Early client inquiries have been positive about the resilience and failover of the UCPs.


Strengths

  • Hitachi is a highly regarded vendor with a strong technology base.
  • The vendor has a reputation for quality.
  • Hitachi is a globally renowned storage vendor, with a broad installed base that can be mined.
  • Hitachi has the financial resources and willingness to expand its geographic presence.
  • The vendor has relationships with Cisco and SAP.
  • Hitachi’s blade technology has strong partitioning capabilities that are similar to those of high-end Unix systems.


Cautions

  • Hitachi has enjoyed limited sales success and market awareness in the U.S. as an integrated system vendor.
  • Cisco’s growing investment in storage strategies could create rivalries that weaken the partnership between Hitachi and Cisco.
  • Hitachi has limited direct sales and channel reach, especially outside Japan and other core geographic markets.
  • Partnering with Cisco raises the risk of channel conflict with other Cisco storage vendor alliances (such as Cisco-NetApp and Cisco-EMC).


HP

HP sells a very broad portfolio of integrated systems. While most leverage HP’s established blade server technology and 3PAR storage, there are other systems based on rack-optimized technology and new-generation, highly modular technology in the form of Moonshot. HP has been selling integrated systems since 2008, and has no dependencies on third-party vendors to create a complete integrated system experience (although HP’s CloudSystem can support third-party switches and storage on demand). Since launch, the portfolio has steadily widened, and this has created the potential for branding and messaging confusion over which systems are most appropriate for which purposes. Consistent articulation of its message across a broad channel remains HP’s greatest challenge. This creates frequent positioning problems both in the emerging market for integrated systems and in the highly established market for HP’s x86 servers.

HP is in the process of simplifying the branding around one primary classification — ConvergedSystem — with use cases varying from basic workload virtualization to specific embedded workloads, such as Microsoft Exchange, SAP Hana and Citrix Virtual Desktop Infrastructure (VDI). To date, HP’s most recognized integrated system brand (with over 1,100 customers) is HP CloudSystem, and that will be retained, but will be more focused on pure private/hybrid cloud use cases. Templates that optimize deployment for specific workloads have been universally named Cloud Maps for the HP CloudSystem; HP is extending this concept with the introduction of App Maps for other integrated system designs, to optimize deployment to noncloud environments. HP will use the "Cloud Maps" term only for cloud-related workloads in the future, preferring to use the "App Map" term for all other workload templates.

To address the desire for a single "software-defined infrastructure," HP launched OneView late in 2013. OneView builds on HP’s already proven server management tools, with the addition of 3PAR storage. Later in 2014, HP will start to add support for HP networking technology, although this effort will extend into 2015 and beyond.

HP is investing in innovations that are likely to cannibalize some of its products over time. In 2013, HP launched a new class of integrated system called Moonshot, focused on scalable application environments where current IT infrastructure is not sustainable in terms of space, energy and cost. This is a true fabric-based computer that currently supports up to 180 servers in a 4-rack-unit (4U) chassis. Initial designs were based on Intel Atom, but HP has recently launched the ConvergedSystem 100. This Moonshot-based design uses a new accelerated processing unit (APU) from AMD that combines CPU and graphics processing unit (GPU) capabilities, and is targeted at Citrix VDI or hosted desktop infrastructure (HDI) deployments that require the power of a full desktop, including multiple applications and business graphics. Further Moonshot support for ARM processors and various specialty engines for graphics, security, etc., is in development. While Moonshot competes with multinode servers that are usually aimed at extreme scale-out workloads, it is a valid integrated system due to its internal switched fabric and integrated storage.

At the other end of the workload scale, HP’s ConvergedSystem 900 for SAP Hana is a brand-new system that was launched at SAP’s Sapphire conference this year. HP already has over 800 SAP Hana installations across a range of AppSystem-branded configurations, but this new system will address the most challenging instances of SAP Hana deployment, with up to 16 processors and 12TB of memory in each node. More generic instances of the same system will be launched to support broader consolidation and Unix migration use cases.


Strengths

  • The vendor’s x86 blade and rack market leadership, plus a large Enterprise Virtual Array (EVA) storage installed base, provide a strong foundation for upselling.
  • HP has respected and widely deployed management tools, with the potential for OneView to expand that reputation still further as the product matures.
  • The vendor has a leading market presence in the burgeoning SAP Hana market.
  • HP offers a very broad portfolio of integrated systems that address multiple use cases.
  • HP has the ability to leverage strong and proven relationships with SAP, VMware, Microsoft, Citrix and other key independent software vendors (ISVs).


Cautions

  • HP’s breadth of portfolio and inconsistent branding create frequent messaging confusion.
  • Conversations with Gartner clients indicate periodic dissatisfaction with support quality and the degree of vendor commitment/consistency.
  • Gartner observes periodic field execution weaknesses, which tend to be localized to certain geographies.
  • The conversion rate of regular blade-based servers to integrated systems has been slow, given the opportunity presented by the size of the installed base.
  • An occasional lack of assertiveness still provides competitors with opportunities to penetrate established HP accounts.


Huawei

Huawei started its enterprise business in 2011, creating a challenge for the vendor to transform from its service provider focus into a balanced provider catering to global, enterprise and consumer markets. During the past two years, Huawei has shown great ambition and technology vision in breaking into mature markets. However, because of national security concerns on the part of some governments, it has encountered major obstacles, predominantly in North America’s national infrastructure projects. In the near future, emerging markets, the Asia/Pacific region and Western Europe will continue to be the strategic focus for Huawei’s data center business.

Huawei’s FusionCube is based on the E9000 blade platform, integrated with distributed (scale-out) storage. When augmented by FusionSphere and FusionAccess, this creates a good mix of physical, virtual and private cloud hardware and software combinations, with strength in its integrated capabilities, especially for DBMS and similar workloads. An up-and-coming product in emerging markets, Huawei FusionCube boasts references across the Asia/Pacific region (predominantly China), EMEA and other emerging markets. However, it requires more global customer references.

FusionCube targets telecom customers and enterprise customers that need a DBMS and/or cloud infrastructure with high input/output (I/O) performance: a one-stop cloud infrastructure and database/data warehouse platform. For telecom customers, Huawei’s strategy is to leverage existing network infrastructure customer relationships and focus on the telecoms’ internal cloud buildout projects and public cloud buildout projects. For enterprise customers, Huawei focuses on named accounts in six vertical industries and does not mass-market to all potential customers.

FusionCube’s channels are mainly high-impact ISVs in vertical industries. Huawei positions FusionCube as the foundation on which ISVs in vertical industries integrate their products with Huawei’s own solutions. Huawei and the ISVs then co-market and promote the solution in those industries. There is now ISV support from SUSE, Red Hat and SAP for FusionCube, and more recently from Microsoft, Oracle and VMware. Huawei plans to put FusionCube into existing distribution channels for the mass market when converged infrastructure becomes more broadly adopted. FusionCube is appropriate for Huawei software users. Users of other x86 servers and solutions should validate the level of third-party software certification and local support.


Strengths

  • Huawei markets the FusionCube integrated solution with scale-out storage — alongside FusionSphere and FusionAccess.
  • The vendor invests in tight integration for specific verticals, channel and ISVs — especially for DBMS and private cloud infrastructure.
  • Emerging markets like Brazil, Russia, India and China provide Huawei with opportunities for international expansion.
  • Huawei’s huge addressable market and IT product portfolio create opportunities for cross-pollination from other disruptive technology markets, such as network infrastructure.


Cautions

  • Huawei’s global presence tends to be highly polarized due to the delicate situation regarding IT security between many Western nations and China; this reduces willingness to invest in a number of countries.
  • There are Microsoft, VMware, Oracle, ISV and third-party software issues around support and certification.
  • Huawei has been challenged to integrate its infrastructure and solutions with many popular third-party technologies commonly deployed in this market.


IBM

When the PureSystems brand of integrated systems was launched in April 2012, IBM was perceived to be slow in responding to market demand after HP’s original launch of its integrated system strategy in 2008, and Cisco’s and VCE’s entries in 2010. With its large installed base of hardware and software solutions and a large global reseller presence, IBM took the next two years to organize and hone its sales effort, to increase demand generation from the benefits of its PureSystems preintegrated, managed and supported compute-network-storage solutions, and to persuade IT leaders of its superior breadth and depth. The ramp-up of adoption, as indicated by Gartner client inquiry interest, has been slow, but gradually improved toward greater acceptance in 2013. One reason, as expressed by users, has been the complexity of product vision across brands.

To market hardware and software solutions of its own and third-party ISV solutions, IBM created several sub-brands: (1) PureFlex (integrated x86 and Power system hardware); (2) PureApplication (application and middleware software deployment agility and performance); (3) PureData (database management and big data analytics); and (4) Flex Systems (channel and IT customized system integration of x86 and Power). As part of PureApplication, solutions were offered for cloud (SmartCloud Entry), big data and Hadoop, application and process optimization (Expert Integrated Systems) and mobile applications (Mobile Application Platform Pattern for PureApplication). In addition, IBM sought to integrate, consolidate and manage its platform diversity via Flex System Manager for Power/AIX, and IBM System i and System x server (BladeCenter) technologies under the PureSystems brand. We found users initially impressed with the technical integration and performance, but they were also confused by IBM’s positioning.
For example, IT organizations were dealing with a PureSystems sales team in addition to their traditional x86 and Power server sales teams, which were not always synchronized to optimize the users’ benefits. Concurrently, the portfolio of both nonintegrated and integrated systems expanded with differing Power and x86 configuration options.

PureSystems momentum built steadily throughout 2013 as the vendor added reference architecture validations for SAP Hana, Microsoft Hyper-V and KVM solutions to its VMware deliverables. However, Gartner research and client inquiries indicate that most sales have been hardware-focused solutions of PureFlex and Flex Systems as a convenient account entrée. IBM has been building channel interest and expertise for PureApplication’s value proposition. ISV solution providers are especially strong in the financial, healthcare, energy and retail industries. The vendor has also been training and certifying integration partners and ISVs on a broad and global landscape. Nevertheless, Gartner receives fewer inquiries about these systems, and they still represent a relatively small proportion of total PureSystems sales.

In January 2014, IBM announced that Lenovo will acquire IBM’s x86 server business. While the announcement occurred after the research for this Magic Quadrant was underway, the event nevertheless bears on the Magic Quadrant vendor evaluation. This sale, which is expected to close later in 2014, will result in Lenovo supplying the underlying x86 server technologies for PureFlex and Flex Systems. IBM resources will be leveraged by Lenovo to maintain and provide services (with 7,500 IBM employees going to Lenovo), while IBM will retain exclusive ownership, development and sales of PureApplication and PureData. All x86 server components in the Pure brands will be supplied by Lenovo, which will also compete as an integrated systems supplier under the PureFlex and Flex Systems labels. The relevance to users will be in the coordination between the two companies at several levels: maintenance, technical support, consulting services and, most importantly, road map and product directions. Their tight alignment on life cycle management and upgrades will be essential in continuing the forward momentum established by IBM. IBM must reduce the perceived IT concerns that Gartner is detecting among clients, while accelerating its momentum with PureSystems.

In addition to its software enablement strategy under PureApplication and PureData, IBM must accelerate adoption of Flex System Manager as a fabric resource pool manager to compete with other fabric resource pool managers (FRPMs). Flex System Manager is part of PureFlex, PureApplication and PureData, but is optional on Flex Systems, with a build-it-yourself strategy. IBM’s PureSystems revenue without the x86 compute components will be more dependent on software, service and cloud-generated revenue, with a shrinking contribution from hardware.


Strengths

  • IBM has a broad portfolio of chassis and blade options with strong blade market share.
  • The vendor has a strong software and service portfolio.
  • IBM has a leading market presence in the rapidly growing SAP Hana market.
  • PureApplication is a good platform for IBM to leverage its proven strengths in middleware, application and cloud performance optimization.
  • IBM’s integrated systems offer a broad portfolio across applications and infrastructures.
  • There is consolidation and integration among some IBM technology silos.
  • There is broad ISV/reseller support of PureApplication and PureData.


Cautions

  • Buyers need to assess the potential impact of the Lenovo acquisition on the PureFlex and Flex Systems road maps.
  • Gartner inquiries reflect client concerns regarding future hardware portfolio rationalization.
  • Inquiries also show unease regarding IBM’s long-term stability and continuity.
  • There is a perceived shifting focus by IBM marketing to Power Systems and System z as the homegrown innovation engines of growth.


Nutanix

Nutanix is a privately funded vendor of new-generation technology that started shipping products in 2011. The vendor is venture-funded, and has raised $173 million in funding so far. The Nutanix NX family offers four different models of highly integrated systems, which address enterprise and branch general-purpose needs, plus systems that are optimized for big data or graphics-intensive workloads. So far, over 5,200 units have been delivered in more than 30 countries, of which 95% are deployed in production environments. As is to be expected, about three-quarters of shipments go to North America, with about 15% going to EMEA and 10% elsewhere.

Nutanix works through channel partners to implement new systems, particularly in countries outside North America. But regardless of final destination, systems are always factory-built and integrated. Nutanix has recruited 750 partners so far, including several major international distributors. The Nutanix technology differs architecturally from most other vendors’, in that the storage and compute elements are natively converged to create a much tighter level of integration. This node-based approach enables theoretically limitless additions of new compute or storage bandwidth in very small increments. Nutanix has patented this distributed software architecture, which is used to add resources at a very granular level, with rapid provisioning of new hardware and orchestration with required workloads. Although based on a switched topology, Nutanix is vendor-agnostic and supports Ethernet switches from multiple vendors.

Nutanix has close working relationships with multiple top software vendors, and workloads like VDI, Hadoop and DBMS servers are well-represented among the installed base. Maximum neutrality is a major focus for Nutanix, as it works to build trust across a wide variety of vendors. The vendor frequently targets specific workload needs to penetrate new accounts, and then expands the workload reach to compete with incumbent vendors as client confidence is built. Nutanix claims that 50% of first-time clients expand their configurations within six months (and 70% do so within 12 months).


Strengths

  • Nutanix has a highly innovative and scalable architecture that is generationally advanced compared with most rivals.
  • The vendor’s highly modular designs allow the easy addition of new server and storage resources.
  • The Nutanix technology is certified to support a wide range of virtualization, operating system and software stack options.
  • Nutanix has an impressive reference client list across many vertical industries and geographies.
  • The vendor gets very positive client feedback.


Cautions

  • Nutanix is a venture-funded startup with a relatively short presence in the market.
  • International clients should validate the presales and postsales capabilities of local channel partners, and insist on talking with reference clients in their region.
  • The tight integration of compute and storage makes it particularly important for users to create collaboration between different administration teams.


Oracle

Oracle has taken a different approach to the integrated system market, by focusing on Oracle software workloads as the dominant use case. Most vendor strategies concentrate on hardware-level integration, creating generic systems that can run many workloads. ISS vendors, however, take this integration to the next level by integrating one or more software layers — aiming to provide more value to customers who choose to deploy them. With the exception of the recently launched Oracle Virtual Compute Appliance, Oracle’s Engineered System strategy is targeted at a range of Oracle workloads. Oracle Exadata Database Machine has been — by far — the most successful product to date, and is aimed at both online transaction processing (OLTP) and data warehousing DBMS workloads. But other products, such as Oracle Exalogic (aimed at the market for Fusion-based Web and application serving workloads), Oracle Big Data Appliance and Oracle Database Appliance, are gradually penetrating their respective stack-specific markets.

Oracle also differs from its peers by focusing on rack-optimized server nodes, plus numerous technology innovations, such as integrated flash memory, a strong InfiniBand switch topology and hybrid columnar compression, that help to optimize application performance. In Gartner client inquiries, the great majority of users are very satisfied with the performance and functionality of their Oracle software workloads running on Oracle Engineered Systems. While the great majority of integrated systems that the vendor ships are based on Intel x86 and Oracle’s own Linux distribution, the SPARC-based SuperCluster is also branded as an Engineered System, and ships with the Oracle Exadata Database Machine storage engine that enables strong DBMS performance in a RISC-based design.

By tightly integrating the software stack, Oracle’s integrated systems create additional challenges for some data centers. All integrated systems are capable of creating tensions between different administrators — server, storage, networking, virtualization, etc. Pricing becomes similarly challenging, as it can be more difficult to make an "apples to apples" price comparison with other integrated systems that run an Oracle software stack. So, by embedding the software stack, Oracle’s products demand the close participation of the lines of business, as well as procurement specialists, in the buying decision.

With only a small minority of the total Oracle software community addressed to date by Oracle’s integrated systems, there is plenty of scope for Oracle to sell more systems. Ongoing surveys among Gartner clients indicate that most users still favor the generic hardware approach, so Oracle (and other vendors of integrated stack systems) must overcome fears of placing too much trust in and dependency on any one vendor. But the vendor has achieved clear market leadership for the use cases where users are willing to invest in an appliancelike solution that delivers very good workload performance, while greatly simplifying the task of workload integration and management — even when that comes with the potential for increased vendor lock-in.


Strengths

  • Oracle offers a strong (and growing) portfolio of integrated systems.
  • There are proven performance benefits for Oracle software workloads.
  • The vendor has close alignment of its software and hardware strategies.
  • Oracle has an opportunity to cross-sell and upsell into its installed base.
  • The vendor has aggressive marketing and product strategies.
  • Oracle enables application owners to become an effective buying center and point of administration for its Engineered Systems.


Cautions

  • Gartner client inquiries demonstrate a fear of a greater degree of vendor lock-in when integrated systems extend to include the software stack.
  • Oracle’s Engineered Systems are perceived as relatively expensive.
  • There is a greater risk of conflict if line-of-business administrator roles are not synchronized with the aims of the data center administrators.
  • Potential customers need to validate references and use cases for Engineered Systems other than Oracle Exadata Database Machine, as shipments to date are heavily skewed toward the latter.


SimpliVity

SimpliVity is a privately funded vendor of new-generation technology that started shipping products in 2013. The vendor is venture-funded, and has raised $101 million in funding so far. SimpliVity’s value proposition consists of a highly innovative data virtualization platform that abstracts data from its underlying hardware to provide greater data mobility, operational efficiency and total cost of ownership reduction, while eliminating the risk and expense associated with the traditional technology refresh life cycle. The vendor’s software platform is designed to run on any x86-based server; however, for the purposes of this Magic Quadrant, we have assessed SimpliVity based on its OmniCube hardware solution, a modular server/storage complex based on x86 hardware and VMware hypervisor technology. SimpliVity’s aim is to prove that OmniCube is the best platform for running vSphere, plus other virtualized workloads upon which core data center applications run, including Hyper-V and KVM in the future. But in the short term, users of other virtualization technology must validate this potential. SimpliVity claims that over 350 systems have shipped, and further claims that 80% are already in full production.

As well as integrating server, storage and network switch technology — a common trait of all integrated systems — OmniCube goes further by incorporating capabilities such as global namespace for centralized management of geographically distributed storage, built-in VM backup, and global in-line data deduplication, compression and optimization at the source. By reducing input/output operations per second (IOPS) — given that only unique writes generate IOPS — SimpliVity aims to reduce required storage capacity (primary, backup and archive data) and WAN traffic. It is also possible to run an instance of the software on Amazon Web Services (AWS) as a low-cost target for backup/restore.

OmniCube modules come in three different capacities, ranging from an 8-core, 5TB module to a 24-core, 30TB module at the high end. These 2U modules can, in theory, be stacked without limit to create what SimpliVity calls a federation. This allows the user to add resources very incrementally, with the added benefit that modules can be deployed across multiple locations to aid business continuity. SimpliVity also enables customers to connect third-party servers running VMs to the OmniCube systems for added flexibility and generational interoperability. SimpliVity’s partner network now exceeds 150 partners in over 15 countries, with over 60% of sales in North America and about 30% in EMEA, plus emerging sales via key partners in other regions.


Strengths

  • OmniCube has a highly innovative design that incorporates in-line deduplication and data compression at origin, as well as global namespace and native VM backup.
  • The modularity of OmniCube theoretically enables very high scaling in small server/storage increments.
  • Wide-area support facilitates business continuity strategies, as well as unified management of remote sites.
  • SimpliVity’s management tools connect to existing management frameworks via vCenter/OpenStack standard APIs.
  • Gartner client inquiries demonstrate very positive client feedback.


Cautions

  • SimpliVity is a nascent vendor that is dependent on venture funding.
  • International clients should validate the presales and postsales capabilities of local channel partners, and insist on talking with reference clients in the region (especially clients outside EMEA and North America).
  • The tight integration of compute and storage makes it particularly important for users to create collaboration between different administration teams.
  • As shipments of OmniCube only started in early 2013, the installed base is still very small, with relatively few reference accounts.


Teradata

Teradata is a leader in the enterprise data warehouse (EDW) market. However, for this Magic Quadrant, we are assessing only the integrated infrastructure capabilities of the Teradata platforms, not the integrated stack components, such as the Teradata Database and other business intelligence/data warehouse analytics software. The Teradata solution is built on x86 technology primarily running SUSE Linux, although its installed base includes a legacy Unix MP-RAS version that is gradually declining as customers convert to Linux. Teradata systems use an MPP topology (using virtual AMPs) with small two-way nodes connected via a high-performance, proprietary fiber switch called BYNET. Although Teradata’s solution was traditionally more expensive than its competitors’ solutions, prices have been reduced, and it does offer the ability to mix new generations of hardware with older generations, thus protecting the customer’s investment.

Teradata has always sold workload-specific platforms, and the range has expanded gradually to fit many data warehousing, data discovery and data staging needs. The common Teradata DBMS is deployed across the Active Enterprise Data Warehouse, the Data Warehouse Appliance, the Data Mart Appliance and the Extreme Data Appliance. For multistructured data discovery and data staging, Teradata has launched the Teradata Aster Big Analytics Appliance and Teradata Appliance for Hadoop. Because Teradata controls the stack, each platform is shipped ready to run a distinct workload.


Strengths

  • Teradata offers a portfolio of proven integrated systems, which started as an EDW solution, but is now joined by a growing range of stacks for analytics.
  • Teradata’s solutions are built on value-added software and switch capabilities layered on otherwise commodity components.
  • The vendor has a very loyal installed base and community.
  • Strong sales and service are associated with Teradata’s stack and infrastructure.


Cautions

  • Teradata is still perceived as primarily a high-end solution and regarded as relatively expensive.
  • Teradata addresses only the markets for EDW/analytics, and has limited focus on opportunities for transactional workloads.
  • Teradata users are being offered more choices and alternate architectures or solutions through the introduction of competitive in-memory databases and disk caching technologies.
  • Open-source software (OSS) alternatives are raising the bar and diluting the market for specialist workload vendors.


Unisys has a long heritage in the mainframe market with its ClearPath systems and services. While some would consider the mainframe the ultimate integrated system, its proprietary software, hardware, application-specific integrated circuits (ASICs) and fabric put it at a market disadvantage relative to the latest generation of integrated and converged fabric-based systems built on lower-cost compute, virtualization, storage, management software and applications. On the other hand, these mainframes were originally built to run the most mission-critical applications and are security-hardened to near invulnerability. Unisys is building on its mainframe credentials to participate in the broader integrated system blade market.

The vendor is leveraging foundational architectural attributes from its strong mainframe heritage, and reconstituting the design into a modular, open approach while retaining mainframe attributes. The strategy is called Forward, and its launch and go-to-market activity are in the earliest stages. Unisys brings its experience in airline reservations, banking, emergency service and government systems to a broader audience, one that may not previously have been amenable to x86 servers running Linux and Windows for mission-critical applications. Forward is architected to enable multitenant partitions or standard virtualization, with mainframe- and Unix-level robustness, high-speed fabric interconnect, advanced security and scale-up symmetric multiprocessing, as well as scale-out Hadoop processing with automation and orchestration management software.

With barely a calendar quarter of experience in shipping its re-engineered systems, Unisys has little track record to prove its credibility in this nascent market. Early interest will come from existing mainframe users, those seeking rugged and reliable systems, and Unix shops exploring migration opportunities. Unisys’s top-down appeal to core applications will likely complement bottom-up interest from IT shops with large stakes in x86 servers for e-commerce, mobility and modernized business applications. It is possible that Unisys could approach the same IT shop from a different perspective, even if the shop has integrated systems from other vendors. Unisys intends to broaden its integrated systems this calendar year to include integrated workload systems and tighter integration with existing storage partners (such as EMC and NetApp).

Strengths
  • Unisys is able to leverage attributes more commonly associated with mainframes: high security, isolation and reliability.
  • Unisys has experience in application and industry sectors with stringent SLA requirements.
  • The Unisys isolated partition capability segregates users, data and applications to ensure compliance, but still enables users to benefit from consolidation.
  • Combined scale-up and scale-out can be achieved in a single consolidated system.

Cautions
  • The vendor lacks an experienced and verifiable installed base.
  • The Unisys ISV support program is at an early stage of certification.
  • With a young product, Unisys still has unproven automation and orchestration.
  • There is a lack of user data regarding time to build, deliver and productize.


VCE, a private company, was formed in 2009 by Cisco, EMC and VMware, and began shipping Vblock systems in 3Q10. The vendor has dispelled many of the earlier market doubts, and its 2013 revenue is now estimated to exceed $1 billion. The vendor’s momentum is due, in large part, to growing adoption among large global organizations and large service providers, reinforced by strong partners in Cisco and EMC that helped VCE compete against formidable competition. Its integration of best-of-breed components (i.e., Cisco UCS server blades and networking, and EMC storage, combined with VMware virtualization) has been a major contributing factor to VCE’s success. VCE has also applied its own engineering resources to create an integrated compatibility and test matrix for the patch and upgrade process, under a single-source cooperative support agreement that includes validated certification with its partners. The vendor continues to broaden its market opportunities by expanding ISV relationships, specialized systems (specifically optimized and tuned for workloads such as SAP Hana, VDI and Oracle DBMS), low-entry-priced configurations for remote and distributed sites, and its VCE Vision Intelligent Operations life cycle management software.

We do not believe that the alliance of VMware, EMC and Cisco is at imminent risk; however, as in any alliance, fragmentation can always occur. For example: VSPEX is an EMC channel partner program that promotes integrated system reference architectures combining EMC storage with server blades from Cisco and other vendors, enabling third-party solutions that can compete with Vblock; Cisco is implementing storage-attached configurations from its recent acquisition of Whiptail; Cisco offers the FlexPod solution with NetApp storage, which competes against EMC and is not part of the VCE reference build; and Cisco has an early relationship with Hitachi that is likely to grow into further competition. The cloud strategies of VMware and Cisco (such as VMware vCloud and Cisco Intelligent Automation for Cloud) potentially create confusion for VCE adopters. VCE is positioning itself to offer APIs for alternative higher-level stacks, but does not subsume these capabilities in VCE Vision Intelligent Operations.

VCE will integrate with and support Cisco’s Application Centric Infrastructure (ACI) software-defined networking. VCE will not integrate VMware’s software-defined networking solution, NSX, but NSX can run on Vblock with support from VMware. In addition, IT preferences for component flexibility will pressure VCE to weigh greater component choice and inclusiveness against the trade-off of greater complexity and engineering costs.

Strengths
  • VCE has had proven success across major global enterprises in banking, retail, healthcare and manufacturing.
  • It has a single-source cooperative support model with Cisco, EMC and VMware.
  • VCE Vision Intelligent Operations management complements Cisco UCS and VMware management.
  • Close alignment to EMC enables new business opportunities from EMC-led storage clients.
  • VCE benefits from Cisco’s and VMware’s broadly deployed incumbent footprints.

Cautions
  • There are potential conflicts among the partners’ competitive interests.
  • Competition exists from channel programs and reference architectures (such as Cisco-NetApp FlexPod and EMC VSPEX).
  • Users who favor the more holistic single-vendor solution from vendors like Oracle, IBM and HP will be more likely to question a partner-dependent business model.
  • VCE is in the early phases of channel and geographic expansion.

Vendors Added and Dropped

We review and adjust our inclusion criteria for Magic Quadrants and MarketScopes as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant or MarketScope may change over time. A vendor’s appearance in a Magic Quadrant or MarketScope one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. It may be a reflection of a change in the market and, therefore, changed evaluation criteria, or of a change of focus by that vendor.

Added
None; this Magic Quadrant is in its first release.

Dropped
None; this Magic Quadrant is in its first release.

Inclusion and Exclusion Criteria

There are many variations of integrated systems available, many of which are impossible to assess fairly and equally using the methodology of a Magic Quadrant. Many vendors are able to deliver an integrated system "experience," but through loose collaboration that creates a form of reference architecture or certified design. Bull, for instance, works with EMC, NetApp and IBM to create various integrated system designs; however, those solutions did not meet the criteria for inclusion in this research.

We have defined the following eligibility criteria for inclusion in this Magic Quadrant:

  • Integrated systems must have servers, storage, network and a management software layer associated with them. Software-only integrated systems do not qualify at this time, as the customer or the integrator would have to layer the software on top of third-party hardware, and integrate and support the offering.
  • Integrated systems that fall into both the integrated infrastructure/stack and reference architecture categories are eligible, if they meet other required inclusion criteria. Each integrated system that leverages a reference architecture is assessed for inclusion based on its individual merits. Only reference architectures that are mutually inclusive between the partners would be eligible.
  • Reference architectures are a good way for two vendors to partner so that each other’s weaknesses can be addressed to create an integrated system experience. However, if those reference architectures are expanded to multiple vendors, then those weaknesses are not consistent between all the partnerships, and the technology is not eligible for this Magic Quadrant.
  • An eligible integrated system must be based on an agreed-on short menu of server, storage and network elements; this could be as low as one per element (in fact, one is preferred), and would never be greater than a handful of options. End-user clients must be able to select with a strong degree of predictability.
  • While an integrated system must have compute, storage, network and management functionality on board, we recognize that many — even most — offerings will involve a degree of vendor collaboration. Some vendors will also not have an integrated networking switch in hardware, but will deliver some or all of the functionality in the virtualization software layer.
  • If the end user has to do the integration, the technology is not an integrated system that is valid for inclusion; however, it may be an eligible reference architecture that delivers an integrated system experience. This, again, eliminates software-only solutions, because the customers have to configure their own hardware. The value proposition of an integrated system should remove the need for racking and stacking from the customers’ hands.
  • A system that ships with just-a-bunch-of-disks (JBOD) storage will not be an eligible integrated system, unless the vendor delivers integrated management capabilities for the storage and related processes (such as backup and recovery of workloads).
  • The support aspect is considered crucial. We believe that support Level 1 (call center/service desk) and Level 2 (escalation) must be integrated to facilitate quick and easy problem resolution. However, Level 3 (engineering) support can still be delivered separately for the individual components of integrated systems based on vendor partnerships.
  • Finally, we stipulate proven vendor collaboration regarding engineering, laboratory coordination, certifications, qualifications, testing, etc.

Evaluation Criteria

Ability to Execute

The market for integrated systems is complex, with greater dependency on very specific topics. We have, therefore, added several subcriteria to the standard list of criteria, to enable more accurate vendor assessment. For our assessment of Product/Service, we examine the degree of software integration available from the vendor or implementation partners, plus the vendor’s ability to deliver on road map promises. Sales Execution/Pricing examines both direct and indirect execution, as most integration system strategies are highly dependent on the role of local channel partners.

Table 1. Ability to Execute Evaluation Criteria

Evaluation Criteria            Weighting
----------------------------   ---------
Product/Service                High
Overall Viability              High
Sales Execution/Pricing        Medium
Market Responsiveness/Record   Medium
Marketing Execution            Medium
Customer Experience            Medium
Operations                     Low

Source: Gartner (June 2014)

Completeness of Vision

The market for integrated systems is complex, with greater dependency on very specific topics. We have, therefore, added several subcriteria to the standard list of criteria, to enable more accurate vendor assessment. As with Sales Execution/Pricing, the Sales Strategy criterion for Completeness of Vision assesses both the direct strategy and the channel partner strategy. Offering (Product) Strategy focuses on the breadth of the total solution (including software integration), the investment in management tools and the technology portfolio breadth. Business Model examines the implementation services that are available through the vendor or channel partners, and the variety of solutions and use cases that can be addressed.

Table 2. Completeness of Vision Evaluation Criteria

Evaluation Criteria           Weighting
---------------------------   ---------
Market Understanding          High
Marketing Strategy            Low
Sales Strategy                Low
Offering (Product) Strategy   Medium
Business Model                Medium
Vertical/Industry Strategy    Low
Innovation                    High
Geographic Strategy           Low

Source: Gartner (June 2014)
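Gartner does not publish the numeric values behind these High/Medium/Low weightings, but the mechanics of combining weighted criteria into a single axis score can be sketched. The sketch below is a hypothetical illustration only: the numeric weights (High = 3, Medium = 2, Low = 1), the 0–5 scoring scale and the function name are assumptions, not Gartner's actual methodology.

```python
# Hypothetical illustration of how weighted evaluation criteria might
# combine into one axis score. The numeric weights (High=3, Medium=2,
# Low=1) and the 0-5 scoring scale are assumptions for illustration,
# not Gartner's published methodology.

WEIGHT_VALUES = {"High": 3, "Medium": 2, "Low": 1}

# Criteria and weightings from Table 2 (Completeness of Vision).
VISION_WEIGHTS = {
    "Market Understanding": "High",
    "Marketing Strategy": "Low",
    "Sales Strategy": "Low",
    "Offering (Product) Strategy": "Medium",
    "Business Model": "Medium",
    "Vertical/Industry Strategy": "Low",
    "Innovation": "High",
    "Geographic Strategy": "Low",
}

def axis_score(scores, weights):
    """Weighted average of per-criterion scores (each 0-5), normalized
    by total weight so the result stays on the same 0-5 scale."""
    total_weight = sum(WEIGHT_VALUES[w] for w in weights.values())
    weighted_sum = sum(scores[c] * WEIGHT_VALUES[w] for c, w in weights.items())
    return weighted_sum / total_weight

# A vendor scoring 4 on every criterion lands at 4.0 regardless of weights;
# uneven scores are pulled toward the High-weighted criteria.
sample = {criterion: 4 for criterion in VISION_WEIGHTS}
print(round(axis_score(sample, VISION_WEIGHTS), 2))  # 4.0
```

Because Market Understanding and Innovation carry High weight, a vendor strong on those two criteria outscores one with the same average spread across the Low-weighted criteria.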

Quadrant Descriptions

Leaders
Although the storage, compute and network technologies that comprise each vendor’s integrated system are typically well-proven products in their own right, the integration of these technologies is a young market. As a result, the market is ripe for future evolution (and the potential emergence of new Leaders). Multiple Gartner client inquiries and surveys demonstrate that Cisco-based derivatives are the most prevalent technologies being shortlisted, and VCE takes the stronger position based on the active promotion of the Cisco, VMware and EMC alliance, and the strong factory integration capabilities that help to differentiate the Vblock product. However, the Cisco-NetApp FlexPod solution has also gained much ground recently, helped by the work that both vendors have done to prove FlexPod as a true integrated system, rather than a pure reference architecture. It is easy to overlook Oracle as an integrated system vendor: its technology is based on rack-optimized servers rather than blades, and (with the exception of the Oracle Virtual Compute Appliance) all of Oracle’s products are targeted at specific software stacks or highly prescribed use cases.

Challengers
Teradata is the only Challenger, but the vendor is probably the most experienced integrated system vendor in the market. It is easy to overlook Teradata’s products, as the "integrated systems" term has only been in use for a few years. But Teradata (and NCR, before Teradata became independent) has been building tightly integrated systems for a generation. Like Oracle, Teradata concentrates on tight software stack integration, and focuses its products only on markets related to data warehousing, business intelligence and analytics.

Visionaries
The integrated system market is founded on innovation, and attracts numerous startups that see the potential to penetrate a data center market normally dominated by the established vendors. Consequently, the Visionaries quadrant has five vendors, which we believe are pushing the boundaries of technology, but have yet to achieve enough market penetration to become Leaders. IBM’s PureSystems family is a broad range of products that encompasses multiple processors, operating systems and software stack integration (both from IBM and third parties). HP also has a very broad portfolio of integrated systems, with the greatest innovation delivered by the radical Moonshot design and the newly introduced ConvergedSystem 900 for SAP Hana. Dell’s portfolio is broad, and has recently been boosted by the launch of VRTX (an integrated system well-suited to branch or departmental workloads). Finally, Nutanix and SimpliVity are both recent startups that remain venture-capital-owned. Their technologies deliver much tighter bonds between compute and storage components, and each vendor focuses on its management software stack as a chief differentiator. Nutanix is more mature and has a much larger client base; hence its stronger position in the Visionaries quadrant. But SimpliVity made great progress during 2013, and offers an even more radical integration story that extends to deduplication and negates the need for WAN optimization.

Niche Players

Hitachi delivers functionality in its designs that is highly differentiated from most peers, and has a strong management suite. Lack of global awareness is the biggest corporate challenge, but we expect Hitachi will build on its storage market recognition to counter this. Fujitsu is often overlooked as an integrated system vendor, again because of relatively low recognition in the North American market. But, like Teradata, Fujitsu has been shipping integrated systems for many years in the form of its FlexFrame appliance for SAP or Oracle workloads. Huawei holds a strong niche presence with its FusionCube offering, which is gaining in market recognition. While most market success will come from emerging geographies and Asia/Pacific region markets, Huawei’s technology is highly innovative. Finally, Unisys is a very new entrant to the integrated system market, and only started shipping its Forward product in late 2013. Unisys can make progress by exploiting its strong vertical market recognition and technology expertise.

Context
This research is intended to help select the vendor approach that is most suited to an organization’s integrated system needs.

Most traditional system vendors offer some type of preintegrated system comprising servers, networking and storage. Through appropriate planning, configuration analysis and consultation with the vendor, the user is promised delivery of a self-sustaining, supported system that requires little ongoing maintenance and operations management by IT. The value proposition most attractive to these organizations is the offloading of operations management and optimization of all the moving parts that make up the system as a service delivery platform. Many IT leaders intend these systems to be their foundation for cloud services (even though major cloud providers do not use integrated systems); they want to focus on automated management of the resources the system presents, to enable quick and agile responses to enterprise business needs.

Even though many vendors have entered the market, no two have the same equipment, software and services for the variety of solutions and workloads that IT wants to deliver. Therefore, while the integrated approach offers high potential returns, those returns are not cumulative across vendors. In other words, each vendor integrates within its own silo of technologies, and integrating across them, as if they were a best-of-breed choice, is a challenge, if possible at all.

Finally, the technology choice should not be taken within the IT organization alone; additional stakeholders need to be part of the decision process around integrated system acquisition. As data center technology becomes increasingly modular, IT organizations, procurement departments, lines of business and asset control departments must reassess life cycle and depreciation planning approaches, as well as purchasing policies, to account for the extended lifespans of integrated systems.

In using this research, IT organizations should take the following recommendations and guidelines into account:

  • Treat integrated systems and converged infrastructure as a data center modernization and transformation project, rather than a refresh or tactical upgrade strategy.
  • Validate whether convergence can be applied usefully toward your goals, infrastructure and experience levels, and include proofs of concept.
  • Plan a differential analysis on converged versus best-of-breed; legacy retention or legacy disposal; human resource retention, transference or reduction; and capital expenditure/operating expenditure (capex/opex) cost comparisons (including retooling).
  • Decide on the vendors and partners, as appropriate, through a detailed requirements checklist, maintenance support matrix, capacity planning and operational impact assessment.
  • Ask vendors to provide road maps equal to the life expectancy of the most durable integrated system components, and a "road map of road maps" for the entire asset.
  • Don’t invest in any modular infrastructure without a clear view of which technologies can and can’t be upgraded during their operational lifetime, and calculate capacity planning accordingly.
  • Restructure asset depreciation cycles to reflect the extended life span of integrated system assets.
  • Prepare to overhaul software licensing rules, discount calculations and ROI expectations for integrated systems, especially those that have embedded software stacks.

Market Overview

Based on 2013 revenue rates, we estimate that the overall market for integrated systems will exceed $6 billion in 2014, a growth rate of 50% over the prior year. This still represents a small percentage of the $80 billion total hardware market, but continued growth at these rates will challenge vendors to maximize share of wallet and margins with a compelling value proposition. In discussions with IT decision makers, the forces for acceleration are generally overcoming the inhibitors. Among the drivers are:

  • Improved performance
  • Perceived lower operating expenditure costs and greater IT optimization
  • Increased automation
  • Simplified sourcing and support
  • Faster time to value with infrastructure
  • Support in moving from IT maintenance to IT innovation

Opposing forces are perceived premium pricing, a desire to self-integrate using internal skills, greater perceived choice and less vendor lock-in. Integrated systems can satisfy user requirements in several different ways by addressing:

  • Integrated applications — around workload optimization and performance
  • Integrated infrastructure — for increased operational efficiencies, automation, simplified sourcing and support
  • Integrated reference architectures — using channel partners specific to industry and application needs, with the option of mixed vendor hardware components delivering a similar value proposition, as detailed above.