Gartner Critical Capabilities for General-Purpose, Midrange Storage Arrays – 20 November 2014


Analyst(s): Stanley Zaffos, Valdis Filks, Arun Chandrasekaran


I&O leaders and storage architects can improve their storage infrastructures’ agility and reduce costs by mapping application needs to storage array capabilities. This research quantifies eight critical measures of product attractiveness across six high-impact, midrange storage array use cases.



Key Findings

  • Traditional dual-controller architectures will continue to dominate the midrange storage market during the next three to five years, even as new scale-up, scale-out, flash and hybrid storage arrays compete for market share.
  • Server virtualization, desktop virtualization, big data analytics and cloud storage are reprioritizing the traditional metrics of product attractiveness.
  • The compression of product differentiation among various vendor offerings and the availability of easy-to-use migration tools are diminishing the strength of vendor lock-ins.
  • Concerns about security and about migration and conversion costs among competing storage vendors’ arrays are declining in importance relative to vendor reputation, support capabilities, performance, reliability and scalability.


Recommendations
  • Take a top-down approach to infrastructure design that identifies high-impact workloads, conducts workload profiling, sets service-level objectives, quantifies future growth rates, and examines the impact on contracts with storage service and disaster recovery providers.
  • Focus on externally visible measures of product attractiveness, such as input/output operations per second, throughput and response times, rather than configuration differences in cache, solid-state drives or hard-disk drive geometries, when choosing a storage solution.
  • Build a cross-functional team that includes users, developers, operations, finance, legal and senior management to provide greater insight into planned application deployments and changes in business needs, and to unmask any stakeholders’ hidden agendas, such as an unwillingness to give up budget or control over the arrays that support their workloads.
  • Conduct a what-if analysis to determine how changes in organizational data growth rates and in planned service lives affect the attractiveness of various shortlisted solutions.
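A what-if analysis of this kind can be sketched as a simple capacity model. The growth rates, service lives and cost figures below are illustrative assumptions, not Gartner data:

```python
# Hypothetical what-if model: how data growth rate and planned service life
# change the capacity (and thus cost) a shortlisted array must absorb.

def required_capacity_tb(initial_tb, annual_growth, service_years):
    """Capacity needed at the end of the planned service life, with compounding growth."""
    return initial_tb * (1 + annual_growth) ** service_years

def acquisition_cost(initial_tb, annual_growth, service_years, cost_per_tb):
    """Illustrative cost if the array is sized up front for end-of-life capacity."""
    return required_capacity_tb(initial_tb, annual_growth, service_years) * cost_per_tb

# Compare two planning scenarios for a 100TB starting footprint.
for growth, life in [(0.20, 3), (0.40, 5)]:
    tb = required_capacity_tb(100, growth, life)
    print(f"{growth:.0%} annual growth over {life} years -> {tb:.0f}TB required")
```

Running the same model across each shortlisted solution's pricing makes the sensitivity of the ranking to growth-rate and service-life assumptions explicit.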

What You Need to Know

With spending on storage growing faster than IT budgets, overdelivering against application needs with respect to availability, performance and data protection is a luxury that most IT organizations can no longer afford. The ability to build agile, manageable and cost-effective storage infrastructures will depend on the creation of methodologies that stack-rank vendors, storage arrays and bids in their environments.

Few "bad" storage arrays are being sold, and none of the 16 arrays we have selected for inclusion in this research are in that category. The differences among the arrays ranked at the top of the use-case charts and the arrays at the bottom are small, and, to a significant extent, they reflect differences in design points and ecosystem support. Hence, array differentiation is minimal, and the real challenge of performing a successful storage infrastructure upgrade is not designing an infrastructure upgrade that works, but designing an upgrade that optimizes agility and service-level objectives (SLOs), and minimizes total cost of ownership (TCO).

Users that do not need the scalability and availability of high-end architectures, or the ecosystem support that the lower-ranked arrays evaluated here lack, are encouraged to consider those arrays, because they may offer benefits in a given environment and be more aggressively priced. Although optimization adds a layer of complexity to the design of the storage infrastructure upgrade, users should be aware that choosing a suboptimal solution is likely to have only a moderate impact on deployment and ownership costs for the following reasons:

  • Product advantages are usually temporary in nature — Gartner refers to this phenomenon as the "compression of product differentiation."
  • Most clients report that differences in management and monitoring tools, as well as ecosystem support between various vendors’ offerings, are not enough to change staffing requirements or SLOs.
  • Storage ownership costs, while growing as a percentage of the total IT spending, still account for less than 10% (6.5% in 2013) of most IT budgets.

Nonproduct considerations, such as vendor relationships and presales and postsales support capabilities (e.g., training, past experience and pricing), that are not strictly critical capabilities should still weigh significantly in choosing solutions for the high-impact use cases explored in this research: consolidation, online transaction processing (OLTP), server virtualization and virtual desktop infrastructure (VDI), business analytics and the cloud. (For more information about the vendors covered in this research, see "Hype Cycle for Customer Analytic Applications, 2014.")



Much of the storage array space has been dividing into two general-purpose markets:

  • Hybrid array
  • Solid-state array (SSA)

Gartner appreciates the entrenched usage and appeal of simple labels and will, therefore, continue to use the terms "midrange" and "high end" until the marketplace obsoletes their usage, even though they may no longer be the most accurate descriptions of array capabilities. As a practical matter, Gartner has chosen to publish separate midrange and high-end Critical Capabilities research to enable us to provide analyses of more hybrid arrays in a potentially more client-friendly format.

Critical Capabilities Use-Case Graphics

Figure 1. Vendors’ Product Scores for the Overall Use Case

Source: Gartner (November 2014)

Figure 2. Vendors’ Product Scores for the Consolidation Use Case

Source: Gartner (November 2014)

Figure 3. Vendors’ Product Scores for the OLTP Use Case

Source: Gartner (November 2014)

Figure 4. Vendors’ Product Scores for the Server Virtualization and VDI Use Case

Source: Gartner (November 2014)

Figure 5. Vendors’ Product Scores for the Analytics Use Case

Source: Gartner (November 2014)

Figure 6. Vendors’ Product Scores for the Cloud Use Case

Source: Gartner (November 2014)


Dell Compellent

Dell’s Compellent midrange storage arrays are the vendor’s solution of choice for larger customer deployments. The SC8000, the largest array in the Compellent series, is performance- and functionally competitive. It can be integrated with the FS8600 network-attached storage (NAS) appliance to create a unified block-and-file storage system. Compellent array highlights include ease of use, excellent reporting and the ability to keep connections active even in the presence of a controller failure, which reduces its exposure to mismatches between path failover and load-balancing software.

With the May 2014 release of Storage Center Array Software 6.5, which is available as a no-charge upgrade for customers under a current support agreement, autotiering (aka data progression) has been enhanced to move logical unit number (LUN) pages in near real time, providing a more consistent performance experience across varying workloads. Compellent arrays can now be configured with separate read- and write-optimized caches, and the autotiering feature's reach has been extended to include Fluid Cache SAN (server-side cache) to further improve performance/throughput and usable scalability.

Dell also offers specialized "Copilot" support services to reduce service calls, while improving storage management and utilization, as well as customer satisfaction. Compellent’s Perpetual Licensing software-pricing model enables customers to "grandfather" software one-time charges (OTCs), thereby lowering acquisition costs when upgrading the arrays. Although Dell can deliver block-and-file storage capabilities, a number of its established competitors are delivering more-seamless unified or multiprotocol (block and file) solutions.

Dot Hill AssuredSAN 4000/Pro 5000

Dot Hill’s AssuredSAN and AssuredSAN Pro series share a common technology base; serve the entry to middle segments of the midrange storage array market; and deliver competitive performance with software features such as thin provisioning, autotiering and remote replication. Both arrays’ reliability and microcode quality have benefited from Dot Hill’s OEM agreements with companies such as HP, Teradata and Quantum, which have sold its products under their brand names. The RealStor autotiering feature moves LUN pages in real time, using algorithms that limit overhead, while keeping the array responsive to changes in workload characteristics.

AssuredSAN has extremely competitive pricing and high customer satisfaction levels for products in its range, and its software licensing extends to the entire array — that is, it is priced by model, rather than capacity-based. Management ease of use continues to improve as the system becomes more autonomic and better instrumented; however, these improvements are not enough to make it a competitive advantage. Dot Hill’s efforts to build a strong technology partner ecosystem have been hampered by its limited size and R&D resources, which make supporting new APIs under its own logo a challenge.

EMC VNX Series

The latest generation of the VNX storage arrays, launched in September 2013, incorporated a hardware refresh, as well as a firmware update that improved multitasking to exploit the multicore processors within the controllers, improved performance and reduced the overhead of value-added features. This enabled the VNX to scale performance of the front-end controllers, and to fully exploit back-end solid-state drives (SSDs) and hard-disk drives (HDDs). Virtualization is not available within the VNX models; however, it is provided via VPLEX, EMC’s network-based virtualization appliance. The VNX benefits from a large ecosystem and tight integration with VMware and RecoverPoint, which provides network-based local (concurrent local copy) and remote replication (continuous remote replication).

The Unisphere management graphical user interface (GUI) is still not as modern or as easy for new users to learn as those of newer array designs; however, the differences are small once the learning curve has been scaled. Gartner client feedback verifies that the new VNX system performs well and is a significant improvement over the previous generation. However, with the ubiquitous use of SSDs in storage arrays and the ability of many new startups to create 100,000-plus input/output operations per second (IOPS) arrays, performance in the general marketplace is no longer a key differentiator in its own right, but a scalability enabler. Customer satisfaction with EMC sales and support is above average.

Fujitsu Eternus DX500 S3/DX600 S3

The Eternus DX200 S3 through DX600 S3 series are performance- and feature-competitive storage arrays. All members of the DX series use the same software, licensing and administrative GUIs and can replicate among different members of the series and with earlier DX series arrays. This use of common software across models makes upgrades among models simple and enables flexible disaster recovery deployments. Additional highlights include snapshots; thin provisioning; autotiering; multiprotocol support and quality of service (QoS); tight integration with VMware, Hyper-V and backup/restore solutions, such as Symantec and CommVault; reference designs; high availability; and easy-to-manage infrastructure as a service (IaaS) environments.

Performance numbers are publicly available and independently reviewed, which adds credence to Fujitsu’s performance claims. All the technical aspects, functions and features of this storage array series rate higher than average; manageability is good; and reliability is exceptionally good. In the near term, Fujitsu is developing primary data reduction and capabilities to integrate with OpenStack, but these features will require three to six months after general availability before they are market-validated. The company is also developing the ability to provide a cloud gateway or interface with cloud APIs.

HDS HUS 100 Series

The Hitachi Data Systems (HDS) Hitachi Unified Storage (HUS) 100 series is a unified storage array that supports block, file and object capabilities, and it is renowned for its solid hardware engineering. HUS has a symmetric, active/active controller architecture, thus enabling LUN access through either controller, with equal performance for block-access applications. In addition, the array will maintain all active host connections through the operating (surviving) controller in case of a failure. Because block and file services are provided by physically separate components (albeit tied together via a unified management GUI), a consistent snapshot cannot be created across separate block and file storage resources.

HUS also supports reliable, nondisruptive microcode updates that can be done at a microprocessor core level. Among the recently introduced features are the ability to spin down/spin up redundant array of independent disks (RAID) groups based on input/output (I/O) traffic, controller-based encryption and the ability to migrate file-based data to the Amazon Web Services (AWS) cloud (which requires an additional license). Although Hitachi Command Suite offers unified administration of various Hitachi storage arrays, it needs to improve its ease of use and its support for older arrays. More specifically, it needs to provide tighter integration with HUS 100 for block, file and object storage management features. Lack of tighter integration with Microsoft Hyper-V through Offloaded Data Transfer (ODX) support and the absence of support for kernel-based virtual machines (KVMs) are limiting broader adoption in the midmarket.

HP 3PAR StoreServ

The HP 3PAR StoreServ series is the centerpiece of HP’s disk storage strategy, providing a common management and software architecture across the entire product line. The 3PAR architecture now extends from the entry-level, two-node 7200 to the four-node 7400 to the StoreServ 10000 series, providing midrange users with simple-to-manage and seamless growth up to 1.2PB. Ongoing hardware and software enhancements are keeping the system competitive with other SAN storage systems in availability, scalability, performance, functionality, ecosystem support and ease of use. New capabilities that have been recently released include priority optimization software, which supports the setting of performance goals at a volume level, and the six nines guarantee for four-node configurations for customers with mission-critical service contracts.

The 3PAR 74x0 systems, configured with four or more nodes, have an inherent advantage in usable availability relative to dual-controller architectures, and this advantage has been aided by recent functional enhancements, such as persistent cache and persistent ports. Performance and throughput scale linearly as nodes are added to the system, and the fine-grained thin provisioning (16KB chunks) enables users to take full advantage of SSDs and aggressively overcommit storage resources.

Offsetting these strengths is the lack of an integrated NAS capability, as well as a lack of data compression and deduplication. The 3PAR systems are not yet delivering the same recovery point objectives (RPOs) over asynchronous distances as traditional high-end storage systems, because 3PAR asynchronous remote copy still transmits the difference between snapshots.

Huawei OceanStor 5000/6000 Series

The Huawei OceanStor 5000/6000 series are scale-out storage systems that can scale up to eight controllers and natively support block and NAS protocols, without the use of a NAS gateway. Scaling to four controllers improves performance/throughput, as well as usable availability, by shrinking the relative impact of a controller failure on system performance/throughput from a nominal 50% to a nominal 25% of normal performance. There are no signs of corner cutting on the printed circuit boards (PCBs), chassis and support equipment. Packaging and cabling layout show attention to detail and serviceability. Microcode updates, repair activities and capacity expansions are nondisruptive. Transparency and openness are provided via Storage Performance Council (SPC) benchmarks, which are used to position the OceanStor 5000/6000 series against its competitors.
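The availability arithmetic behind that claim can be sketched as follows. This is a simplified nominal model for any symmetric multicontroller system, not a Huawei-specific formula:

```python
# Nominal performance impact of losing one controller in an N-controller
# symmetric system: the failed controller removes 1/N of total capability.

def failure_impact(controllers: int) -> float:
    """Fraction of nominal performance/throughput lost when one controller fails."""
    if controllers < 2:
        raise ValueError("need at least two controllers for redundancy")
    return 1 / controllers

print(f"dual-controller loss:  {failure_impact(2):.0%}")   # 50%
print(f"four-controller loss:  {failure_impact(4):.0%}")   # 25%
print(f"eight-controller loss: {failure_impact(8):.1%}")   # 12.5%
```

Real-world impact also depends on cache mirroring behavior and workload skew, so the nominal figures should be treated as an upper bound on usable-availability benefit.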

A checklist of storage efficiency and data protection features includes clones, snapshots, thin provisioning, autotiering, and synchronous and asynchronous remote copy. To improve the usability of asynchronous remote copy, the OceanStor series includes consistency group support. A similar checklist of supported software includes Windows, VMware, Hyper-V, KVM and various Linux implementations, including Red Hat and SUSE. Offsetting these strengths are the relative lack of integration with many backup/restore solutions; management tools that are improving, but are not yet a competitive advantage; and a limited pool of experienced Huawei storage administrators.

IBM Storwize V7000

The IBM Storwize V7000 series is a unified storage array that incorporates technologies from many IBM products, including the System Storage SAN Volume Controller (SVC), General Parallel File System (GPFS) and XIV GUI design. This reuse of technologies provides interoperability with installed SVCs; a reduction in the V7000 learning curve for existing IBM customers; mature thin provisioning, autotiering, snapshot and replication features; storage virtualization capabilities; and a good GUI experience. Physical scalability has been increased to 1,056 disks.

The ability to virtualize third-party storage arrays and replicate to other V7000s or SVCs is a high-value differentiator that is particularly useful in rationalizing existing storage infrastructures and facilitating storage infrastructure refreshes. Customers seeking to improve their physical infrastructure agility can purchase V7000 software and install it in a virtual host, thus creating a storage node with all the features of a V7000 node. Offsetting these strengths is the inability of a logical volume to span node pairs, which adds message and data "forwarding" overhead when a LUN is accessed from a nonowning node pair, as well as limited integration between the NAS gateway built on the GPFS and back-end V7000 block storage, which adds management complexity.

NEC M-Series

Although well-known in its home market of Japan as a storage vendor, NEC actively began to market its midrange storage products overseas only during the past few years. The M-Series comes in four models (M110, M310, M510 and M710) that support SAS, Fibre Channel and Internet Small Computer System Interface (iSCSI) protocols. The product has simple, all-inclusive software pricing, and includes low-power hardware components to reduce power consumption. The product has high reliability and comprehensive data services, such as autotiering, thin provisioning, snapshots and replication. It has integration with VMware vSphere API for Array Integration (VAAI) and provides vSphere API for Storage Awareness (VASA) support. Customers have indicated that the manageability needs to be improved.

The product has QoS features for minimum/maximum I/O to protect critical application performance in multitenant environments. Autotiering can make tiering decisions on a daily basis only, rather than in real time. The M-Series doesn’t support data reduction technologies, such as compression or deduplication, and the array supports only block protocols and doesn’t offer unified storage capabilities. Customers that require NAS capabilities should use the NV Series as a gateway, which is available only in Japan.

NetApp E-Series

The E-Series is an entry and midrange block storage system that has been market-validated through OEM relationships and branded sales. Its architecture provides balanced performance (IOPS) and throughput (GB/sec), which makes it suitable for use with workloads that stream large amounts of sequential data, such as high-performance computing (HPC), big data, surveillance and video-streaming applications, as well as IOPS-centric workloads, such as database, email and mixed virtualized workloads. The relatively recent additions of SSDs managed as second-level cache, thin provisioning and data-at-rest encryption, coupled with aggressive pricing relative to other NetApp offerings, have increased the E-Series' appeal in supporting general-purpose workloads.

NetApp FAS8020/8040

The NetApp FAS8020/8040 series is the latest iteration of the Data Ontap OS-based FAS series. The V-Series is no longer offered as a separate series, but its functionality is now integrated in all FAS models and enabled by the FAS FlexArray software feature, which provides useful, heterogeneous array migration and administration features in one system. The steady pace of Clustered Data Ontap enhancements is eliminating the feature deficiencies that existed between 7-mode and clustered mode (c-mode) in earlier versions of Data Ontap. Improvements are upgrading performance, scalability and usable availability by decreasing hybrid pool disk rebuild times, as well as increasing maximum scale-out configurations. The FAS8020 series can scale out to 34PB for NAS and 96TB for SAN, and the FAS8040 can scale out to 51PB for NAS and 192TB for SAN.

With the release of Clustered Data Ontap 8.2, SnapVault, comprehensive support for VMware APIs, and SMB 3.0 and Offloaded Data Transfer (ODX) from Windows Server 2012 are supported. Although the appeal of c-mode continues to improve relative to 7-mode, difficult conversions remain a significant obstacle for users to overcome. Clustered Data Ontap’s lack of a distributed file system limits the maximum performance of any file system to that of a single high-availability node pair. With the release of Clustered Data Ontap 8.3, MetroCluster is now supported, but SnapLock remains a future capability.

Nimble Storage CS-Series

Nimble customers rate three key differentiators as competitive advantages:

  • Proactive support via InfoSight — a cloud-based support offering
  • A relatively low purchase cost
  • A data layout designed to effectively leverage the different strengths of SSD and HDD

With the release of four-node cluster support, a 3Q14 technology refresh and microcode tweaks, Nimble has extended these value propositions upmarket. The InfoSight offering helps users optimize their configurations by suggesting configuration changes based on the real-time analysis of anonymized, installed-base-wide data. Customer experiences have been positive, and Nimble still provides low-cost storage with relatively high performance by placing data on the correct storage media. Another increasingly important factor is the value of community: Nimble has created a successful information-sharing community of customers via NimbleConnect, which can be used to swap hints and tips.

This type of added value via transparency is quite rare, because it opens up positive and negative information sharing among users. Nimble is being adventurous and bold by taking these steps. If this can be successfully continued, and Nimble becomes a company renowned for trust and openness, then this will be a significant soft-product advantage that cannot be created or emulated overnight. Offsetting these strengths is the lack of NAS support and deduplication, which can further improve storage efficiency, as well as a limited, but growing, ecosystem.

Oracle Sun ZFS Storage Appliance

The ZS3-2 and ZS3-4 Storage Appliances provide all the features expected of a modern, unified storage array. High availability, performance/throughput, scale, tight integration with Oracle platforms and aggressive pricing are key ZFS appliance differentiators. Even though the ZS3 systems can provide block storage, Network File System (NFS) and Common Internet File System (CIFS) support, Oracle positions these arrays as NAS-based storage. Oracle Database customers gain extra performance and storage utilization benefits due to Oracle-specific protocol enhancements and the support of hybrid columnar compression, which is supported only when Oracle databases are attached to Oracle storage arrays.

The design of the system is less than 10 years old and is unconstrained by historic HDD-oriented design considerations. Its memory management design, which includes processor memory and separate read and write caches, allows new features to be added quickly. This is made apparent by the incorporation of detailed instrumentation, double-bit error checking and correction, multicore and capacity scaling, SSD exploitation, pooling, snapshots, compression, encryption and deduplication; these capabilities were design objectives built in at the system's inception.

Tegile IntelliFlash

Tegile’s IntelliFlash hybrid, unified storage array is a scale-up architecture that supports block and NAS protocols, storage efficiency features, snapshots and remote replication, and the ease-of-use features expected of modern "clean sheet" designs. IntelliFlash arrays implement an active/active controller design to fully exploit controller performance and hardware resources. This also makes array-wide, in-line compression and deduplication practical, which increases storage utilization and reduces the cost per TB. Application-aware provisioning templates improve staff productivity, while reducing the probability of misconfiguring the array.

SSDs are managed as second-level cache, which enables IntelliFlash arrays to respond quickly to changes in workloads, simplifies management and reduces the likelihood that incorrect policies will adversely affect overall array performance. Thin provisioning and separate read-and-write cache promise a more consistent performance experience by creating wide RAID stripes and matching cache capacity to application needs, respectively. Deep instrumentation and reporting tools simplify performance troubleshooting when system performance issues arise. To date, synchronous replication is unavailable; users must use asynchronous replication with IntelliFlash arrays.

Tintri VMstore

Tintri is a venture-backed startup that began shipping products in 2011. Tintri’s VMstore product is based on a dual-controller, active-passive architecture that is focused on delivering VM-aware storage. VMstore is a hybrid array that consists of SAS SSDs and 7,200-rpm SAS HDDs, where writes are compressed and deduplicated before being written, and virtual machine (VM) I/O traffic is monitored to serve data as much as possible from the SSD tier. Tintri is primarily focused on virtual workloads and allows administrators to provision VMs in a simpler manner, with the added ability to set QoS at a VM level. Cloning, snapshots and asynchronous replication features also function at a VM level. The product has thin provisioning, deduplication and compression, all done in-line. Gartner inquiries reveal that the product is easy to set up and manage.

Tintri supports NFS only, and most deployments have been on the VMware platform, although it did make available support for KVM earlier this year. Support for SMB 3.0 and Hyper-V has been announced, but both are in public beta. Storage capacity per array is limited to a modest 78TB at this point, although in-line data reduction features can extend usable capacity. Although Tintri supports management of as many as 32 arrays from a single interface, it lacks the scale-out architecture required for easier capacity upgrades and balancing performance.

X-IO Technologies ISE Storage Systems

ISE 740 hybrid and ISE 240 storage systems are successors to the Hyper ISE and ISE. They are dual-controller arrays with the unique ability to repair most HDD failures in situ (i.e., in place). ISE SSDs are not repairable in situ, but field experience has shown this to be a nonissue, because each ISE is equipped with spare SSD capacity, and each SSD is built using eMLC flash and equipped with enough wear-leveling capacity to outlast any planned service life. ISE 240 arrays are configured with HDDs only, whereas ISE 740 is configured with a mix of SSDs and HDDs. The ability to take one of the disk’s platter surfaces offline, rather than an entire HDD, reduces rebuild times, insulates the user from field engineering mistakes and makes it practical to offer a standard, five-year warranty on both offerings.

Like their predecessors, the ISE 240 and ISE 740 are expected to earn a reputation for delivering consistent high availability and performance with minimal management attention, because they are essentially technology refreshes of their predecessors. Although they retain the core design, the newer controllers have increased processing power and now support multiple protocols (both 8x8Gb/s FC and 4x40Gb/s iSCSI versions are available). The consistent performance is largely attributable to the building-block approach taken by X-IO, which limits the maximum capacity of any ISE to no more than 40 SSDs and/or HDDs, and to its internally developed Continuous Adaptive Data Placement (CADP) algorithm, which responds in near real time to changes in workload profiles by moving data between the SSD and HDD tiers. Both ISE models use the same management tools, have the same rack form factor (3U) and are energy-efficient.

Offsetting these strengths is the ISE's reliance on higher-level software (OS, hypervisor or database management system) to provide storage efficiency and data protection features, such as thin provisioning, snapshots and asynchronous replication, which the arrays lack. The ISE ecosystem is small, and is limited to VMware, Citrix, Hyper-V, HP-UX, AIX, Red Hat and SUSE Linux, Symantec Storage Foundation and Windows Server.


The arrays evaluated in this research include scale-up, scale-out and unified hybrid storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, and forecast growth rates and asset management strategies. Midrange arrays exhibiting scale-out characteristics can also satisfy high-end inclusion criteria when configured with four or more controllers and multiple disk shelves. Whether these differences in availability are enough to affect infrastructure design and operational procedures will vary by user environment. They will also be influenced by other considerations, such as downtime costs, lost opportunity costs and the maturity of the end-user change control procedures (e.g., hardware, software, procedures and scripting) that directly affect availability.

Product/Service Class Definition

Architectural Definitions

The following criteria classify storage array architectures by their externally visible characteristics, rather than by vendor claims or other nonproduct criteria.

Scale-Up Architectures

  • Front-end connectivity, internal bandwidth and back-end capacity scale independently of each other.
  • Logical volumes, files or objects are fragmented and spread across user-defined collections of disks, such as disk pools, disk groups or RAID sets.
  • Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane and/or interconnect constraints.

Scale-Out Architectures

  • Capacity, performance, throughput and connectivity scale with the number of nodes in the system.
  • Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failures and improve performance.
  • Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.
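The difference between the two scaling models above can be illustrated with a toy capacity model. The shelf, node and ceiling figures are invented for illustration and do not describe any vendor's product:

```python
# Toy comparison of scale-up vs. scale-out growth, using invented numbers.
# Scale-up: capacity is capped by the chassis (backplane slots); scale-out:
# capacity grows with node count until a software/cluster limit is reached.

def scale_up_capacity(shelves: int, tb_per_shelf: float, max_shelves: int) -> float:
    """Usable capacity, capped by the physical packaging limit."""
    return min(shelves, max_shelves) * tb_per_shelf

def scale_out_capacity(nodes: int, tb_per_node: float, max_nodes: int) -> float:
    """Usable capacity, capped by the cluster software limit."""
    return min(nodes, max_nodes) * tb_per_node

# A scale-up array stops growing at its chassis limit...
print(scale_up_capacity(shelves=12, tb_per_shelf=50, max_shelves=8))   # 400.0
# ...while a scale-out system keeps scaling by adding nodes.
print(scale_out_capacity(nodes=12, tb_per_node=50, max_nodes=32))      # 600.0
```

The same pattern applies to performance and throughput: in a scale-out system they grow with each added node, whereas in a scale-up system they plateau at the controller pair's ceiling.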

Hybrid Architectures

  • Incorporate SSD, Flash, HDD, compression and/or deduplication into their basic design
  • Can be implemented as scale-up or scale-out arrays
  • Can support one or more protocols, such as block or file, and/or object protocols, including Fibre Channel, iSCSI, NFS, Server Message Block (SMB; aka CIFS), REST, FCoE and InfiniBand

Incorporating compression and deduplication into the initial system design often has a positive impact on system performance and throughput, and simplifies management. These gains are attributable, at least in part, to better instrumentation and to more intelligent cache management algorithms that are compression- and deduplication-aware.

Unified Architectures

  • Can simultaneously support multiple block, file, and/or object protocols, including Fibre Channel, iSCSI, NFS, SMB (aka CIFS), REST, FCoE and InfiniBand
  • May include gateway and integrated data flow implementations
  • Can be implemented as scale-up or scale-out arrays

Gateway-style implementations provision NAS and object storage protocols with storage area network (SAN)-attached block storage. Gateway implementations run separate NAS, object and SAN microcode loads on either virtualized or physical servers and, consequently, have different thin-provisioning, autotiering, snapshot and remote-copy features that are not interoperable among different protocols. By contrast, integrated implementations use the same thin-provisioning, autotiering, snapshot and remote-copy primitives independent of protocol, and can dynamically allocate controller cycles to protocols on an as-needed or prioritized basis.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture’s strengths and weaknesses, as well as an understanding of the workload requirements (see Table 1).

Table 1. Strengths and Weaknesses of the Storage Architectures
Scale-Up Architectures

Strengths:
  • Mature, reliable and cost-competitive architectures
  • Large ecosystems
  • Host connections and back-end capacity can be upgraded independently
  • May offer shorter recovery point objectives (RPOs) over asynchronous distances

Weaknesses:
  • Performance and internal bandwidth are fixed, and do not scale with capacity
  • Limited computing power means that using efficiency and data protection features may negatively affect performance
  • Electronics failures and microcode updates may be high-impact events
  • Forklift upgrades

Scale-Out Architectures

Strengths:
  • IOPS and Gbps scale with capacity
  • Greater fault tolerance than scale-up architectures
  • Nondisruptive load balancing

Weaknesses:
  • High electronics costs relative to back-end storage costs

Hybrid Architectures

Strengths:
  • Efficient use of flash
  • Compression and deduplication are performance-neutral to positive
  • Consistent performance experience with minimal tuning
  • Excellent price/performance
  • Low environmental footprint

Weaknesses:
  • Relatively immature technology
  • Limited ecosystem and protocol support

Unified Architectures

Strengths:
  • Maximal deployment flexibility
  • Comprehensive storage-efficiency features

Weaknesses:
  • Performance may vary by protocol (block versus NAS)

Source: Gartner (November 2014)

Critical Capabilities Definition


Manageability

This refers to the automation, management, monitoring, and reporting tools and programs supported by the platform. These tools and programs can include single-pane management consoles, as well as monitoring and reporting tools.

Such tools are designed to support personnel in seamlessly managing systems, monitoring system usage and efficiencies, and anticipating and correcting system alarms and fault conditions before or soon after they occur.


RAS

Reliability, availability and serviceability (RAS) is a design philosophy that consistently delivers high availability by building systems with reliable components, "derating" components to increase their mean time between failures, and designing system timing and clocking to tolerate marginal components.

RAS also supports hardware and microcode designs that minimize the number of critical failure modes in the system, serviceability features that enable nondisruptive microcode updates, diagnostics that minimize human errors when troubleshooting the system and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault isolation techniques, built-in protection against data corruption and other techniques (such as snapshots and replication) to meet customer RPOs and recovery time objectives (RTOs).


Performance

This collective term covers the IOPS, bandwidth (MB/sec) and response times (milliseconds per I/O) that are visible to attached servers. In well-designed systems, potential performance bottlenecks are reached at roughly the same time when supporting common workload profiles.

When comparing systems, users are reminded that performance is more of a scalability enabler than a differentiator in its own right.

Snapshot and Replication

These are data protection features that protect against data corruption problems caused by human and software errors, and technology and site failures, respectively. Snapshots can also address backup window issues and minimize the impact of backups on production workloads.


Scalability

This refers to the ability of the storage system to grow capacity, as well as performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs.


Ecosystem

This refers to the ability of the platform to integrate with and support third-party independent software vendor (ISV) applications, such as databases, backup/archiving products and management tools, as well as various hypervisor and desktop virtualization offerings.

Multitenancy and Security

This refers to the ability of a storage system to support a diverse variety of workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.

Storage Efficiency

This refers to raw versus usable capacity; efficiency of data protection algorithms; and a platform’s ability to support storage efficiency technologies, such as compression, deduplication, thin provisioning and autotiering, to improve usage rates, while reducing storage acquisition costs and TCO.

Use Cases


Overall

The overall use case is a generalized usage scenario and does not represent the ways specific users will utilize or deploy technologies or services in their enterprises.


Consolidation

This use case simplifies storage management and disaster recovery, and improves economies of scale by consolidating multiple, potentially dissimilar storage systems into fewer, larger ones.

RAS, performance, scalability, and multitenancy and security are heavily weighted selection criteria, because the system becomes a shared resource, which magnifies the effects of outages and performance bottlenecks.


OLTP

This use case covers business-critical applications (e.g., DBMSs) that need 24/7 availability and subsecond transaction response times.

Hence, the greatest emphasis is on RAS and performance features, followed by snapshots and replication, which enable rapid recovery from data corruption problems and technology or site failures. Manageability, scalability and storage efficiency are important, because they enable the storage system to scale with data growth, while staying within budget constraints.

Server Virtualization and VDI

This use case encompasses business-critical applications, back-office and batch workloads, and development.

The need to deliver I/O response times of 2 milliseconds (ms) or less to large numbers of VMs or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, heavily weights performance and storage efficiency, followed closely by multitenancy and security. The heavy reliance on SSDs, autotiering, QoS features that prioritize and throttle I/O, and disaster recovery solutions that are tightly integrated with virtualization software also makes RAS and manageability important criteria.


Analytics

This applies to storage consumed by big data applications using map/reduce technology, and packaged business intelligence (BI) applications for domain or business problems.

Performance (more specifically, bandwidth), RAS and snapshot capabilities are critical to success: RAS features to tolerate disk failures; snapshots to facilitate checkpointing of long-running applications; and bandwidth to reduce time to insight (see definition in "Hype Cycle for Analytic Applications, 2013").


Cloud

This use case applies to storage arrays in private, hybrid and public cloud infrastructures, and to the cost, scale, manageability and performance requirements of those environments.

Hence, scalability, multitenancy and resiliency are important selection considerations, and are highly weighted.

Inclusion Criteria

This research evaluates the midrange, general-purpose storage systems supporting the use cases assessed in Table 2.

Table 2. Weighting for Critical Capabilities in Use Cases
Critical Capabilities Overall Consolidation OLTP Server Virtualization and VDI Analytics Cloud
Manageability 11% 10% 10% 10% 10% 15%
RAS 17% 18% 25% 12% 15% 15%
Performance 18% 15% 25% 20% 20% 10%
Snapshot and Replication 11% 10% 10% 9% 15% 10%
Scalability 13% 15% 10% 9% 15% 20%
Ecosystem 5% 5% 5% 5% 5% 5%
Multitenancy and Security 12% 15% 5% 15% 10% 15%
Storage Efficiency 13% 12% 10% 20% 10% 10%
Total 100% 100% 100% 100% 100% 100%
As of November 2014

Source: Gartner (November 2014)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighted in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

The 16 arrays selected for inclusion in this research are offered by the vendors discussed in Gartner’s "Magic Quadrant for General-Purpose Disk Arrays," which includes arrays that support block and/or file protocols. Here are the criteria that must be met for classification as a midrange storage array:

  • Single electronics failures:
    • Are not single points of failure (SPOFs)
    • Do not result in loss of data integrity or accessibility
    • Can affect more than 25% of the array’s performance/throughput
    • Can be visible to the SAN and connected application servers
  • Microcode updates:
    • Can be disruptive
    • Can affect more than 25% of the array’s performance/throughput
  • Repair activities and capacity upgrades:
    • Can be disruptive
  • Arrays have an average selling price of more than $24,999

The criteria for qualification as a high-end array are more severe than those for midrange arrays. For this reason, arrays that satisfy the high-end criteria also satisfy the midrange criteria, but are included in the high-end Critical Capabilities research, rather than here.

For the reader’s convenience, high-end array criteria are shown below:

  • Single electronics failures:
    • Are invisible to the SAN and connected application servers
    • Affect less than 25% of the array’s performance/throughput
  • Microcode updates:
    • Are nondisruptive and can be nondisruptively backed out
    • Affect less than 25% of the array’s performance/throughput
  • Repair activities and capacity upgrades:
    • Are invisible to the SAN and connected application servers
    • Affect less than 50% of the array’s performance/throughput
  • The array supports dynamic load balancing
  • The array supports local and remote replication
  • Typical high-end disk array ASPs are more than $250,000
Table 3. Product/Service Rating on Critical Capabilities
Product or Service Ratings Dell Compellent Dot Hill AssuredSAN 4000/Pro 5000 EMC VNX Series Fujitsu Eternus DX500 S3/DX600 S3 HDS HUS 100 Series HP 3PAR StoreServ Huawei OceanStor 5000/6000 Series IBM Storwize V7000 NEC M-Series NetApp E-Series NetApp FAS8020/8040 Nimble Storage CS-Series Oracle Sun ZFS Storage Appliance Tegile IntelliFlash Tintri VMstore X-IO Technologies ISE Storage Systems
Manageability 3.8 3.3 3.5 3.3 3.3 4.0 3.2 3.3 3.0 3.3 4.0 4.5 3.5 3.8 4.2 3.7
RAS 3.5 3.7 3.5 4.0 4.2 4.2 3.7 3.3 3.8 3.5 3.8 3.7 3.3 3.5 3.7 4.8
Performance 3.7 3.5 3.5 3.7 3.2 3.8 3.7 3.5 3.2 3.5 3.7 3.7 3.7 3.8 3.7 3.8
Snapshot and Replication 3.5 3.0 3.5 3.7 3.7 3.7 3.7 3.8 3.3 3.3 3.8 3.3 3.7 3.5 3.7 1.8
Scalability 3.3 2.2 3.7 3.3 3.3 3.2 4.0 3.3 3.7 3.0 4.0 3.5 4.0 3.0 2.8 3.0
Ecosystem 3.7 3.3 4.2 3.8 4.0 4.0 3.5 3.7 3.2 3.2 4.2 3.3 2.7 3.3 2.7 3.2
Multitenancy and Security 3.2 2.7 3.7 3.2 3.7 4.2 2.8 3.7 2.8 3.3 4.2 3.2 3.3 3.3 3.3 2.7
Storage Efficiency 4.0 3.3 3.8 3.3 3.2 3.7 3.2 4.0 2.8 3.2 4.0 3.7 3.7 4.3 4.0 2.3
As of November 2014

Source: Gartner (November 2014)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.

Table 4. Product Score in Use Cases
Use Cases Dell Compellent Dot Hill AssuredSAN 4000/Pro 5000 EMC VNX Series Fujitsu Eternus DX500 S3/DX600 S3 HDS HUS 100 Series HP 3PAR StoreServ Huawei OceanStor 5000/6000 Series IBM Storwize V7000 NEC M-Series NetApp E-Series NetApp FAS8020/8040 Nimble Storage CS-Series Oracle Sun ZFS Storage Appliance Tegile IntelliFlash Tintri VMstore X-IO Technologies ISE Storage Systems
Overall 3.58 3.16 3.62 3.55 3.55 3.85 3.50 3.55 3.26 3.31 3.92 3.64 3.55 3.59 3.58 3.28
Consolidation 3.56 3.12 3.63 3.54 3.57 3.85 3.49 3.54 3.27 3.30 3.94 3.62 3.54 3.56 3.54 3.28
OLTP 3.61 3.28 3.60 3.64 3.59 3.87 3.58 3.51 3.33 3.36 3.88 3.68 3.54 3.62 3.62 3.53
Server Virtualization and VDI 3.62 3.17 3.64 3.51 3.50 3.86 3.43 3.61 3.17 3.31 3.94 3.63 3.55 3.67 3.62 3.16
Analytics 3.57 3.13 3.62 3.56 3.54 3.82 3.55 3.55 3.28 3.31 3.91 3.62 3.58 3.57 3.56 3.23
Cloud 3.54 3.04 3.64 3.50 3.55 3.82 3.49 3.52 3.27 3.28 3.96 3.65 3.56 3.52 3.52 3.23
As of November 2014

Source: Gartner (November 2014)

To determine an overall score for each product/service in the use cases, multiply the ratings in Table 3 by the weightings shown in Table 2.
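As a worked check of this arithmetic, the short Python sketch below recomputes one Table 4 entry: the OLTP score for HP 3PAR StoreServ, using that product's capability ratings from Table 3 and the OLTP column of weights from Table 2. The function and variable names are illustrative, not part of the methodology itself.

```python
# Table 4 scoring arithmetic: each use-case score is the weighted sum of a
# product's eight critical-capability ratings (Table 3), using that use
# case's capability weights (Table 2).

# OLTP use-case weights from Table 2, expressed as fractions of 100%.
oltp_weights = {
    "Manageability": 0.10,
    "RAS": 0.25,
    "Performance": 0.25,
    "Snapshot and Replication": 0.10,
    "Scalability": 0.10,
    "Ecosystem": 0.05,
    "Multitenancy and Security": 0.05,
    "Storage Efficiency": 0.10,
}

# HP 3PAR StoreServ capability ratings from Table 3.
hp_3par_ratings = {
    "Manageability": 4.0,
    "RAS": 4.2,
    "Performance": 3.8,
    "Snapshot and Replication": 3.7,
    "Scalability": 3.2,
    "Ecosystem": 4.0,
    "Multitenancy and Security": 4.2,
    "Storage Efficiency": 3.7,
}

def use_case_score(ratings, weights):
    """Weighted sum of capability ratings for one use case."""
    # Each Table 2 column must total 100%.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(ratings[cap] * w for cap, w in weights.items())

score = use_case_score(hp_3par_ratings, oltp_weights)
print(round(score, 2))  # 3.87, matching the HP 3PAR OLTP entry in Table 4
```

Swapping in a different Table 2 weight column and Table 3 rating column reproduces any other cell of Table 4 the same way.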