Magic Quadrant for Solid-State Arrays – 28 August 2014

Figure 1. Magic Quadrant for Solid-State Arrays

28 August 2014 ID:G00260420

Analyst(s): Valdis Filks, Joseph Unsworth, Arun Chandrasekaran


Solid-state arrays deliver performance an order of magnitude faster than disk-based storage arrays at competitive prices per GB, enabled by in-line data reduction and lower-cost NAND SSDs. This Magic Quadrant will help IT leaders better understand SSA vendors’ positioning.


Market Definition/Description

This Magic Quadrant covers SSA vendors that offer dedicated SSA product lines positioned and marketed with specific model numbers, which cannot be used as, upgraded to or converted into general-purpose or hybrid storage arrays. SSAs form a new subcategory of the broader external controller-based (ECB) storage market.

Considering the potential disruptive nature of SSAs on the general-purpose ECB disk storage market, Gartner has elected to report only on vendors that qualify as an SSA. We do not consider solid-state drive (SSD)-only general-purpose disk array configurations in this research. To meet these inclusion criteria, SSA vendors must have a dedicated model and name, and the product cannot be configured with hard-disk drives (HDDs) at any time. These systems typically (but not always) include an OS and data management software optimized for solid-state technology.

Magic Quadrant

Source: Gartner (August 2014)

A vendor’s position on the Magic Quadrant should not be equated with its product’s attractiveness or suitability for every client’s requirements. If a vendor’s solutions better fit your needs, have the appropriate support capabilities and are attractively priced, then it is perfectly acceptable to acquire solutions from vendors that are not in the Leaders quadrant.

Vendor Strengths and Cautions


Cisco

Cisco entered the SSA market through the acquisition of Whiptail in 2013; Whiptail had launched its product family in 2012. Cisco has incorporated the product family and re-engineered it into the Cisco UCS Invicta Series. The portfolio consists of the UCS Invicta appliance and the UCS Invicta Scaling System. The UCS Invicta is a 2U array, while the Scaling System can scale up to six nodes. With Whiptail, Cisco aims to deliver a tightly coupled, high-performance, flash-memory-based technology that complements its UCS fabric-based infrastructure. Whiptail customers will continue to be supported by Cisco. However, the new product is undergoing a significant refresh that standardizes on Cisco hardware designs and administration software to better integrate with UCS compute and management tools.

Strengths
  • The Invicta product line has a modular and extensible scale-out architecture, which provides implementation flexibility to customers in consolidating and converging workloads.
  • The Cisco UCS Director integration for the Invicta product family will enable Cisco customers to gain better operational simplicity.
  • Whiptail customers will benefit from Cisco’s deep technology partnerships with key independent software vendors (ISVs) that will result in more validated designs and reference guides.

Cautions
  • Product delays and changing positioning statements are likely for the Invicta product, because it is undergoing a transition and conflicts with Cisco alliances, such as VCE with EMC and FlexPod with NetApp.
  • Cisco currently has a relatively small professional services and support team dedicated to SSAs, with a limited presence outside the U.S.
  • Cisco has been slow and reticent in providing guidance on how these products will integrate and be managed within the UCS fabric postacquisition.


EMC

EMC has two SSD-based products in the SSA market: (1) the XtremIO scale-out technology, which EMC acquired in May 2012; and (2) the VNX-F array, which is based on the traditional general-purpose VNX unified storage array and exploits the proven VNX HDD-based hardware controllers and software. Both offerings are positioned and sold as dedicated SSAs. EMC has a large and relatively loyal installed base for the XtremIO products. EMC has a significant and broad, but overlapping, SSD product portfolio. The portfolio will be enhanced by EMC’s acquisition of DSSD and its technology, which will initially be positioned as an extreme performance networked appliance. EMC has been a vocal visionary concerning SSD for more than a decade, but its market-leading messaging has outpaced some of its product introductions. Compared with competitor SSAs, the XtremIO product was late to market and became generally available only in November 2013. With a concrete offering, XtremIO, together with VNX-F, has enabled EMC to grab the No. 4 market share position in the SSA segment for 2013. EMC has gained traction for the XtremIO product, and has continued its momentum through 1H14 via concerted sales efforts and competitive pricing.

Strengths
  • EMC has a highly successful global sales force, exceptional marketing, and highly rated support and maintenance capability.
  • Large and loyal EMC customers have been provided with early products and attractive competitive introductory pricing. These customers can expect beneficial purchase terms.
  • XtremIO offers inclusive software pricing, and customers do not have to budget, track or purchase extra licenses when capacity is upgraded.

Cautions
  • EMC is offering XtremIO at competitive prices to its installed base, but transparency of information (such as list prices, discount levels and independent performance benchmarks) is unavailable. To avoid hidden future costs, customers should fix all XtremIO purchases and upgrades at these competitive introductory prices.
  • VNX-F includes data reduction in the base system price. Unlike XtremIO and most competitors’ offerings, however, VNX-F still uses a traditional licensing structure, which requires customers to pay additional support and license charges for other upgrades and extra features (such as the data protection suite).
  • While XtremIO’s product integration with ViPR has been announced, it is not currently available. Given the product overlap between the XtremIO and VNX-F products, operational and administration complexity is an issue.


HP

HP is one of the late entrants into the SSA market, with availability of its HP 3PAR StoreServ 7450 model in June 2013. While HP is relatively new to the SSA market with its own product, it previously had an OEM partnership with Violin Memory, which ended in late 2011 in favor of HP’s organic approach. The 3PAR storage architecture is sufficiently flexible to exploit SSD media, complete with purpose-built SSA features. Compared with EMC and IBM, HP has been less aggressive in marketing and selling to, and generally mining, its installed base. HP has almost entirely leveraged its 3PAR hardware architecture and management platform, but has made some important enhancements centered on efficiently maximizing the resident SSD technology. This affords HP a cost-effective approach, as well as robust reliability that can be supported with solid warranty terms, including a five-year SSD warranty and a six 9s (99.9999%) availability guarantee for four-node deployments.

Strengths
  • HP has leveraged its hardware and storage software design, which is modern and flexible enough to accommodate the nuances of solid-state technology and to implement new data reduction services.
  • HP 3PAR StoreServ 7450 offers a proven compatibility matrix for a broad variety of application workloads, cost-effective thin provisioning, and a familiar interface for customers, as well as a scale-out architecture.
  • HP has an extensive channel presence, global sales ability and a substantial customer base that is complemented with worldwide support and service capabilities.

Cautions
  • Customers need to request more ROI evidence to distinguish the product’s functionality and capability from those of other SSAs and general-purpose arrays.
  • Despite the familiarity gained by HP’s leveraging its storage architecture, its media reporting abilities need further refinement.
  • Some client references have had limited visibility into HP’s SSA product strategy, and HP and its partners have limited mind share in the market.


Huawei

Huawei was an early entrant in the SSA market, with the launch of OceanStor Dorado in mid-2011, when it was a joint venture with Symantec. Since then, Huawei has acquired Symantec’s stake, announced successive generations, maintained the investment and expanded the product line. Huawei has an aggressive sales approach, offering steep discounts off the list price for qualified enterprise customers. Its maintenance and support pricing (as a percentage of capital expenditure [capex]) tends to be lower than many competitors’ pricing and is backed by a large postsales support team concentrated in Asia/Pacific. To further improve the transparency and competitiveness of its SSA products, Huawei has been aggressive in submitting performance details to public performance benchmarks (such as the Storage Performance Council SPC-1).

Strengths
  • Huawei is a large, profitable enterprise storage vendor, offering customers a well-rounded storage portfolio in emerging markets.
  • Huawei has committed significant R&D dedicated to SSAs, which has resulted in the design and development of its application-specific integrated circuit (ASIC)-based SSD controllers, SSDs and software capabilities.
  • The Dorado product family delivers competitive pricing and performance, and supports a large ecosystem of ISVs, including commonplace hypervisors and VMware APIs.

Cautions
  • Huawei’s reseller network, professional services and support capabilities in the U.S. tend to be weak, due to brand perception and execution challenges.
  • Pricing is still on an a la carte basis, charging for individual data service modules, while most other vendors are gravitating toward unified, all-inclusive base pricing.
  • Huawei’s channel partner ecosystem continues to be weak, which presents challenges for enterprise customers looking for detailed workload profiling, multisite implementations and best-practice guidance.


IBM

IBM acquired Texas Memory Systems (TMS) in September 2012, and subsequently announced in April 2013 that it would invest $1 billion in all aspects of flash (SSD) storage technology. IBM has leveraged its storage technology, specifically Storwize compression software and the IBM SAN Volume Controller (SVC) layer, which has been placed on top of the FlashSystem array to provide high-level data services. TMS had a successful track record of producing low-latency storage, using DRAM for over 30 years and flash-based storage for nearly 10 years.

The IBM-engineered FlashSystem products are available as a stand-alone storage enclosure, the FlashSystem 840, which has limited software features. In March 2014, IBM made available the FlashSystem V840, which combines the storage enclosure with the FlashSystem control enclosure to provide data services such as compression, mirroring, thin provisioning and replication. This use of the SVC as the FlashSystem control enclosure follows a pattern within IBM’s storage division, where the SVC is placed on top of many IBM products (such as the DS8000, Storwize V7000 and XIV storage arrays) to provide a common and interoperable platform abstracting the diverse products beneath it, an approach that has internal cost and reuse advantages. However, with such a diverse number of devices, the complexity of managing compatibility, fixes, and software and hardware regression testing across an increasing number of software and hardware platforms adds dependencies among product lines. Basic storage controller features, such as redundant array of independent disks (RAID), hot code load, controller failover, port failover, caching and administration software, are duplicated in the storage enclosure (FlashSystem 840) and the control enclosure (SVC). Compared with competitors, IBM charges separately for higher-level features such as compression.

Strengths
  • Within the SSA market, the TMS platform has one of the longest proven track records with respect to array performance.
  • There is a short learning curve for IBM Storwize V7000 and SVC customers, because the same SVC-based management interface is used on many other key IBM storage product lines.
  • IBM has successfully exploited its system company advantage and has cross-sold the FlashSystem into its customer base through direct and indirect channel incentives and bundling discounts with SVC.

Cautions
  • Compared with the FlashSystem V840, the FlashSystem 840 has limited data services and will require IBM or non-IBM virtualization products for data services.
  • The FlashSystem 840 is dependent on the SVC product line to provide data services, such as compression, thin provisioning, snapshots and mirroring, among other features, for additional costs.
  • Clients starting with the FlashSystem 840 that later decide they require extra storage features will need to purchase extra SVC-based hardware. This increases the operating expenditure (opex) considerations (such as wiring, power, cooling and physical rack space requirements) compared with the FlashSystem 840 by itself.


Kaminario

Kaminario was founded in 2008 and is headquartered in Newton, Massachusetts, with product development concentrated in Israel. Kaminario is one of the more resilient SSA vendors, having shipped product for more than three years, and is now on its fifth-generation product. The Kaminario K2 has undergone several reinventions of its system features in hardware and, most recently, data management software, as it has migrated from its initial DRAM appliance approach in 2011. Kaminario performs well across many public benchmarks, which is appealing given its ability to scale out and scale up. Because its successful marketing efforts are only recent, many companies remain unaware of Kaminario; it lacks the market awareness and mind share of the established storage vendors and some of the newer startups.

Strengths
  • Kaminario has an advantageous scale-up/scale-out architecture that uses flexible storage efficiency and resiliency technologies to optimize cost structure and SSD longevity.
  • The vendor has been providing customers with a guarantee program for an average of $2 per GB effective capacity and a seven-year unconditional SSD endurance warranty, which has helped promote customer confidence in Kaminario.
  • Kaminario offers strong R&D and engineering support, with key technologies protected by 34 patents as of June 2014.

Cautions
  • The vendor’s presence is concentrated in the U.S. and Europe for sales and support coverage.
  • As a relatively small organization, Kaminario has limited marketing ability to gain mind share, which is important in order to expand its sales channel bandwidth and long-term viability.
  • Like most startups, Kaminario is not currently profitable, and will require another round of funding to sustain itself.


NetApp

NetApp announced the first EF array model in February 2013, and updated it with the EF550 in November 2013, helping continue its product momentum. Compared with smaller SSA startups, NetApp was a late entrant to the SSA market. However, NetApp was able to reuse existing products and technology, as the EF Series is based on the mature E Series hardware and the SANtricity platform gained through the acquisition of LSI’s Engenio business. This has led to an intricately managed positioning and sales challenge between the EF and FAS products. The EF Series is targeted at workloads that need high performance. Unlike the FAS Series, the EF Series is primarily sold through a direct sales force. NetApp’s customers and prospects can elect to deploy the EF Series, choose the recently productized All-Flash FAS offerings, or wait for the launch of FlashRay in late 2014. Although FlashRay has been delayed thus far, NetApp claims it will be a dedicated SSA product built from the ground up and optimized for SSD technology.

Strengths
  • NetApp has a deep understanding of SSDs. Its diverse portfolio of SSD offerings features good workload analysis tools that can profile applications and match them to the right products, helping customers rightsize their environments from several perspectives: reliability, availability, serviceability, manageability and performance.
  • With the EF Series, NetApp has changed its pricing structure to an all-inclusive one, which simplifies license management during upgrades and long-term budgeting.
  • The EF Series supports a wide variety of high-speed interconnect protocols, including Fibre Channel (FC), Internet SCSI (iSCSI), SAS and InfiniBand.

Cautions
  • With the scheduled launch of FlashRay, which has been in development for more than two years, the EF Series needs to compete for product development, marketing and sales dollars within NetApp, which raises questions about the long-term viability of the EF Series product line.
  • The EF Series uses more reliable, but more expensive, enterprise-grade SSD (single-level cell [SLC] and enterprise multilevel cell [eMLC]) and, given the lack of any data reduction capabilities, it may not be cost-competitive for diverse workloads.
  • The EF Series has a complex graphical user interface (GUI) compared with newer designs from competitors, and ONTAP/FAS customers will require new skills to operate and administer the EF Series.

Nimbus Data

Nimbus Data was founded in 2006, and is headquartered in San Francisco, California. The vendor has taken a vertically integrated approach in terms of hardware and software to deliver dense, cost-effective arrays that appeal to a variety of customers and application workloads. Many of the vendor’s initial deployments came from a concentrated customer base that included several hyperscale customers. Nimbus Data doubled its revenue in 2013 year over year, but has suffered from high employee churn and skepticism among some companies in the market. It continues to deliver public case studies and references to improve customer perception. Ultimately, it will need to be more transparent about its business operations, and to scale its business to capably meet future customer needs for sales and support across key geographies.

Strengths
  • Nimbus Data has an aggressive pricing strategy predicated on advanced SSD memory and density enabled by a vertically integrated hardware approach.
  • Its offering has broad workload applicability, with multiprotocol support, all-inclusive software pricing and a comprehensive data service feature set appealing to a diverse customer set, ranging from Web scale to conventional data center environments.
  • Nimbus Data claims to have been profitable since 2013 and, with no external funding, has been able to set its own direction without influence from investors.

Cautions
  • The vendor has a thin (streamlined) management team, with limited scalability, succession resources and responsibility sharing; executive decision making is driven from a highly centralized, top-down approach, which is problematic for long-term viability.
  • Sales and product support and services are limited, and provided from a relatively small organization with selective geographic penetration.
  • Nimbus Data’s business model is based on large, performance-oriented accounts with a limited ability to grow into a diversified customer base in terms of revenue share, creating viability concerns due to customer concentration.

Pure Storage

Pure Storage was founded in 2009 with a business plan to create a new, dedicated SSA and to grow organically, rather than to achieve quick wins or the largest market share. The vendor’s business model was not to be first to market, but to be a more financially stable and sustainable long-term business. This business model has been successful to date, and Pure Storage has managed to gain significant investments, thereby achieving financial stability. It signed a cross-licensing deal with IBM to protect itself with key storage system intellectual property (IP), and has a go-to-market strategy stimulated by an aggressive channel partner program. Pure Storage has a relatively mature platform, the FA-400 Series, and a proven data reduction implementation. A transparent attitude toward pricing and guaranteed efficiency has achieved significant mind share and attention in the SSA market, promoted via creative but pointed marketing campaigns. Similarly, innovative and competitive inclusive software licensing and inclusive controller upgrade programs (offered when customers pay full support and maintenance costs) have proven to be a fresh and welcome approach that challenges and disrupts the established incumbent SSA and general-purpose disk array vendors’ license schemes and forklift product replacement cycles.

Strengths
  • Pure Storage has a solid financial base supported by funding totaling more than $470 million to date. Its success and growth, combined with a unique culture, help attract world-class talent, with head count exceeding 680 as of August 2014 — all of which helps negate near-term viability concerns.
  • The vendor has an efficient product cost structure based on low-cost consumer MLC (cMLC) PC SSDs.
  • Innovative marketing, purchasing, trial and product renewal programs create a product that is simple to buy, install and manage.

Cautions
  • Data reduction is not selectable, and there is relatively low usable capacity if the workload and data is not suitable for data reduction.
  • The vendor can be outperformed in the highest input/output (I/O) and low-latency environments.
  • The vendor takes a traditional scale-up approach, with limited raw capacity scalability and a large physical footprint.


Skyera

The single-controller Skyera skyHawk platform became available in April 2014. Because Skyera does not use existing enterprise SSDs or components and has had challenges delivering products to market on time, it still does not have a standard high-availability, dual-controller array. However, the vendor has been a thought leader, challenging the established incumbent disk array hegemony, and is an innovative visionary in the industry, creating a purpose-built system designed from the SSD chip level upward by exploiting the most cost-effective, advanced SSD memory technology. This unique hardware approach enables Skyera to drive down SSA costs to levels that compete with general-purpose disk arrays. Data reduction is in the form of compression, which further improves storage utilization and the usable cost per GB. The next-generation skyEagle system will have a more highly available data center hardware architecture, with features such as dual power supplies and dual controllers. Skyera is a probable acquisition target, even though it has considerable strategic investment, including public investors Dell and Western Digital (WD), among others.

Strengths
  • Skyera has a low-cost-oriented value proposition that debunks the premise of expensive SSAs.
  • The vendor provides a solid value proposition, with good remote support and high precompression density per form factor: 57TB raw capacity (44TB formatted, before data reduction) per half-depth 1U rack unit.
  • Skyera offers an unconditional warranty for system replacement.

Cautions
  • The skyHawk can only be used in high-availability environments if a storage virtualization layer is used to provide high-level abstraction features to mirror data and to provide failover between two separate skyHawk arrays.
  • Data management software is limited in the skyHawk. Most software features are included in the base price, except for compression, which is separately licensed.
  • Companies looking for long-term viability should realize that Skyera has a limited customer base and limited product revenue, and has been actively pursuing another round of funding since its last round in February 2013.


SolidFire

SolidFire is a privately held, venture-capital-funded company that makes scale-out SSAs. It is an emerging company that is not yet profitable, with a product that has been generally available for less than two years; its SF Series product line became available in November 2012. SolidFire’s initial focus was on service providers offering high-performance infrastructure as a service (IaaS) and, while this segment continues to be a key focus, recent product launches and go-to-market initiatives have widened the focus toward enterprise buyers. SolidFire is highly differentiated from its competitors through its scale-out capabilities, rich software features and ability to guarantee storage performance. Management of the platform is built around the Web-scale principles of automation, quality of service (QoS) and API-based access. The product has close integration with cloud management platforms, such as OpenStack, CloudStack and the VMware vCloud suite. Pricing is simple, all-inclusive and appeals to traditional enterprise users.

Strengths
  • SolidFire’s ability to deliver high scalability in capacity and performance makes it an attractive platform for running next-generation cloud and big data workloads.
  • SolidFire puts a high degree of emphasis on keeping costs low through the use of cMLC-based PC SSDs and no-charge data reduction features, such as compression and deduplication, which are always turned on and operate in-line.
  • The QoS and multitenancy allow customers to run multiple workloads in isolation with guaranteed performance, eliminating disruption or degradation from unwieldy workloads.

Cautions
  • The initial acquisition costs are high, even for SolidFire’s low-end platforms, given that a cluster requires at least four nodes.
  • SolidFire has limited field services and support personnel outside the U.S. and the U.K.
  • Given that a high portion of its revenue is generated from a direct sales force, enterprise customers need to be cautious regarding the availability of reseller partners for implementation and support.

Violin Memory

Violin Memory is a pioneer in the SSA industry. Founded in 2005, it has earned revenue since 2010. The vendor’s foundation is its hardware approach, predicated on SSD-chip-level system expertise and built on aggregating removable Peripheral Component Interconnect Express (PCIe) dual in-line memory modules (DIMMs). This approach enables Violin Memory to offer a high-performance, resilient system featuring one of the most competitive pricing structures on the market, due to its strong relationship with SSD memory manufacturer Toshiba. However, Violin Memory has had financial challenges since its initial public offering (IPO) in September 2013, when disappointing sales and a weak financial outlook forced the company to take drastic action. A new management team has been in place since early 2014. It has refocused on its core customers by paring back its sales force and pursuing a channel approach targeted at key geographies. Violin Memory has been trying to exploit software from its 2013 acquisition of GridIron Systems, in an effort to complement its hardware with a portfolio of native data management software features that debuted in late June 2013. Violin Memory divested its PCIe SSD business for $23 million in June 2014.

Strengths
  • Violin Memory sources directly from SSD memory suppliers and its lead investor Toshiba to exploit hardware in terms of performance, density and price that translates to aggressive, final system prices for customers.
  • The 6000 series is available via several partnerships, such as reseller relationships with Dell, Fujitsu, Toshiba and NEC, and at the application level with Microsoft to deliver an optimized Windows Flash Array with software features tuned to Microsoft database, Server Message Block (SMB) and Network File System (NFS) environments.
  • The new management team is executing on a clear vision that eliminates distractions and proactively addresses customer, partner and investor needs.

Cautions
  • Violin Memory recently debuted a more cohesive data management software strategy with its Concerto release, which appears promising but remains relatively untested.
  • Violin Memory has capable U.S. sales, support and services, but limited direct international sales. It will have to realign with channel partners to expand, which will take time and could complicate efforts for small or midsize businesses (SMBs).
  • Violin Memory’s financial stability, primarily the rate that it is burning cash, is a reason for caution. The vendor is likely to be an acquisition target if its profitability does not improve in 2015.

Vendors Added and Dropped

We review and adjust our inclusion criteria for Magic Quadrants and MarketScopes as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant or MarketScope may change over time. A vendor’s appearance in a Magic Quadrant or MarketScope one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. It may be a reflection of a change in the market and, therefore, changed evaluation criteria, or of a change of focus by that vendor.

Added

This is a new Magic Quadrant.

Dropped

This is a new Magic Quadrant.

Inclusion and Exclusion Criteria

To be included in the Magic Quadrant for SSA, a vendor must:

  • Offer a self-contained, SSD-only system that has a dedicated model name and model number (see Note 1).
  • Have an SSD-only system. It must be initially sold with 100% SSDs, and it cannot at any point be reconfigured, expanded or upgraded with any form of HDD (whether via expansion trays, a vendor’s special upgrade process, customer-specific customization or a vendor product exclusion process) into a hybrid or general-purpose SSD and HDD storage array.
  • Sell its product as a stand-alone product, without the requirement to bundle it with other vendors’ storage products in order to be implemented in production.
  • Provide at least five references that Gartner can interview, including at least one client reference from each of Asia/Pacific, EMEA and North America, or from each of the two geographies in which the vendor has a presence.
  • Provide an enterprise-class support and maintenance service, offering 24/7 customer support (including phone support). This can be provided via other service organizations or channel partners.
  • Have established a notable market presence, as demonstrated by the number of petabytes sold, the number of clients or significant revenue.

The product and a service capability must be available in at least two of the following markets — Asia/Pacific, EMEA and North America — via direct or channel sales. Availability does not include hybrid (SSD, HDD) storage arrays.

The SSAs evaluated in this research include scale-up, scale-out and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, and forecast growth rates and asset management strategies.

While the SSA Magic Quadrant represents vendors whose dedicated systems meet our inclusion criteria, ultimately, it is the application workload that governs which solutions you should consider, regardless of any criteria.

Other vendors and products were considered for the Magic Quadrant but did not meet the inclusion criteria, despite offering SSD-only configuration options to existing products. These vendors and/or specific products may warrant investigation based on your application workload needs for their SSD-only offerings:

  • American Megatrends (AMI)
  • Dell Compellent Storage Solutions
  • Fujitsu Eternus DX200F
  • Fusion-io ION (acquired by SanDisk)
  • Hitachi Unified Storage (HUS) VM
  • IBM DS8000
  • NetApp FAS
  • Oracle ZFS
  • Tegile T-Series

Evaluation Criteria

Ability to Execute

We analyze the vendor’s capabilities across broad business functions. Vendors that have expanded their products across a wider range of use cases and applications, improved their service and support capabilities, and focused on improving mission-critical applications will be more highly rated in the Magic Quadrant analysis. Ability to Execute reflects the market conditions and, to a large degree, it is our analysis and interpretation of what we hear from the market. Our focus is assessing how a vendor participates in the day-to-day activities of the market.

Product or Service evaluates the capabilities of the products or solutions offered to the market. Key items to be considered for the SSA market are how well the products and/or services address enterprise use case needs, the critical capabilities of the product (see “Critical Capabilities for Solid State Arrays”) and breadth of product and/or solutions.

Overall Viability includes an assessment of the organization’s financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue to invest in the product, offer the product and advance the state of the art in the organization’s product portfolio.

Sales Execution/Pricing looks at the vendor’s capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support and the overall effectiveness of the sales channel.

Market Responsiveness/Record focuses on the vendor’s capability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the provider’s history of responsiveness.

Marketing Execution directly leads to unaided awareness (i.e., Gartner end users mentioned the vendor without being prompted) and a vendor’s ability to be considered by the marketplace. Vendor references, Gartner’s inquiries and end-user client search analytics results are factored in as a demonstration of vendor awareness and interest.

Customer Experience looks at a vendor’s capability to deal with postsales issues. Because of the specialized nature of the SSA market and the mission-critical nature of many of the storage environments, vendors are expected to escalate and respond to issues in a timely fashion with dedicated and specialized resources, and to have relevant detailed expertise. Another consideration is a vendor’s ability to deal with increasing global demands. Additional support tools and programs are indications of a maturing approach to the market.

Operations considers the ability of the organization to meet its goals and commitments. Factors include the quality of the organizational structure, including skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.

Table 1. Ability to Execute Evaluation Criteria
Evaluation Criteria Weighting
Product or Service High
Overall Viability High
Sales Execution/Pricing Medium
Market Responsiveness/Record High
Marketing Execution Medium
Customer Experience Medium
Operations Medium

Source: Gartner (August 2014)

Completeness of Vision

Completeness of Vision distills a vendor’s view of the future, the direction of the market and the vendor’s role in shaping that market. We expect the vendor’s vision to be compatible with our view of the market’s evolution. A vendor’s vision of the evolution of the data center and the expanding role of SSAs are important criteria. In contrast with how we measure Ability to Execute criteria, the rating for Completeness of Vision is based on direct vendor interactions, and on our analysis of the vendor’s view of the future.

Market Understanding looks at the technology provider’s capability to understand buyers’ needs, and to translate those needs into an evolving road map of products and services. Vendors showing the highest degree of vision listen to and understand buyers’ wants and needs, and can shape or enhance those wants and needs with their added vision.

Marketing Strategy relates to the solution message the vendor conveys, how that message is communicated, which vehicles are used to deliver it effectively, and how well it resonates with and is remembered by the buying public. In a market where many vendors and/or products can sound the same, or sometimes are not even known, message differentiation and overall awareness are vital.

Sales Strategy considers the strategy for selling products that uses the appropriate network of direct and indirect sales, marketing, service and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.

Offering (Product) Strategy looks at a vendor’s product road map and architecture, which we map against our view of enterprise requirements. We expect product direction to focus on catering to emerging enterprise use cases for solid state arrays.

Business Model assesses a vendor’s approach to the market. Does the vendor have an approach that enables it to scale the elements of its business (for example, development, sales/distribution and manufacturing) cost-effectively, from startup to maturity? Does the vendor understand how to leverage key assets to grow profitably? Can it gain additional revenue by charging separately for optional, high-value features? Other key attributes in this market are reflected in how the vendor uses partnerships to increase sales. The ability to build strong partnerships with a broad range of technology partners and associated system integrators demonstrates leadership.

Vertical/Industry Strategy measures the vendor’s strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including vertical markets.

Innovation measures a vendor’s ability to move the market into new solution areas, and to define and deliver new technologies. In the SSA market, innovation is key to meeting rapidly expanding requirements and to keeping ahead of new (and often more-agile) competitors.

Geographic Strategy measures the vendor’s ability to direct resources, skills and offerings to meet the specific needs of geographies outside the “home” or native geography, either directly or through partners, channels and subsidiaries as appropriate for that geography and market.

Table 2. Completeness of Vision Evaluation Criteria
Evaluation Criteria Weighting
Market Understanding High
Marketing Strategy Medium
Sales Strategy Medium
Offering (Product) Strategy High
Business Model High
Vertical/Industry Strategy Low
Innovation High
Geographic Strategy Medium

Source: Gartner (August 2014)

Quadrant Descriptions

Leaders
Vendors in the Leaders quadrant have the highest scores for their Ability to Execute and Completeness of Vision. A vendor in the Leaders quadrant has the market share, credibility, and marketing and sales capabilities needed to drive the acceptance of new technologies. These vendors demonstrate a clear understanding of market needs; they are innovators and thought leaders; and they have well-articulated plans that customers and prospects can use when designing their storage infrastructures and strategies. In addition, they have a presence in the five major geographical regions, consistent financial performance and broad platform support.

Challengers
Vendors in the Challengers quadrant participate in the SSA market and execute well enough to be a serious threat to vendors in the Leaders quadrant. They have strong products, as well as sufficiently credible market positions and resources to sustain continued growth. Financial viability is not an issue for vendors in the Challengers quadrant, but they lack the size and influence of vendors in the Leaders quadrant.

Visionaries
A vendor in the Visionaries quadrant delivers innovative products that address operationally or financially important end-user problems at a broad scale, but has not demonstrated the ability to capture market share or sustainable profitability. Visionary vendors are frequently privately held companies and acquisition targets for larger, established companies. The likelihood of acquisition often reduces the risks associated with installing their systems.

Niche Players

Vendors in the Niche Players quadrant often excel by focusing on specific market or vertical segments that are generally underpenetrated by the larger SSA vendors. This quadrant may also include vendors that are ramping up their SSA efforts, or larger vendors having difficulty in developing and executing upon their vision.

Context
This Magic Quadrant represents vendors that sell specific, branded SSAs into the enterprise end-user market. The insatiable demand for storage now calls for a more capable high-performance tier that can deliver low-latency storage reliably enough to create tangible business benefits. As demand for high-performance storage explodes, it will require even more storage administration, underscoring the perpetual need for storage efficiency, resiliency and manageability.

Market Overview

There has been growing demand for SSAs to meet the low-latency performance requirements of enterprise- and Web-scale applications. Over the last decade, CPU performance has improved by an order of magnitude, while the performance of HDDs within general-purpose storage arrays has stagnated — an increasingly pronounced divergence. SSAs have corrected this imbalance by, at least temporarily, satisfying the demand for storage performance. This has led to the quick and successful adoption of SSAs: total SSA revenue in 2013 was $667 million, representing year-over-year growth of 182%.

The SSA market witnessed considerable uptake in 2013, fueled by significant and continued investments in startups and by established vendors opting to acquire emerging vendors, although some are still pursuing an organic approach to growth. Large incumbent system vendors, such as EMC, HP, IBM and NetApp, have focused on cross-selling their new SSA products to their established customers, thereby quickly obtaining large market shares. However, once this captive segment has been mined, a vendor’s ability to grow market share in the long term will be predicated on overall product ability, sales bandwidth and execution as it competes outside its installed base. Nearly half of the vendors in this Magic Quadrant have pursued a vertically integrated approach based on direct procurement of SSD memory, while the remaining vendors procure SSDs from external suppliers and focus on an SSD-optimized data management software strategy.

Between 2010 and 2012, most customers were interested primarily in high-performance and low-latency SSAs. Given the lack of available data management features, customers tolerated the feature shortcomings in favor of raw performance. As initial storage performance issues were capably addressed, customers wanted to address multiple application workloads that required a rich data management software portfolio consisting not only of storage efficiency and resiliency technologies purpose-built for SSAs, but also the underlying SSD memory technology. During 2013, we witnessed the advent of comprehensive data management software features, such as deduplication, compression, thin provisioning, snapshots and replication technologies that, when specifically tailored to SSD, can provide compelling benefits, particularly in application workloads that see favorable data reduction ratios. This trend of innovative and comprehensive data management software on the more mature SSA platforms has continued into 2014, and has started to permeate at the application level, which will drive the industry in 2015 and beyond. It is through the synergy of cost-effective hardware and purpose-built software that the industry will see further consolidation in order to reach maturation.

As this market matures and SSAs gain feature equivalency with general-purpose arrays, we expect decreasing differentiation between the two categories. Vendors of general-purpose array product lines and server SSD cards have created specific array models filled with SSD media. These models are tactical implementations that enable the vendors to sell directly into the SSA segment while they develop longer-term strategies or purpose-built SSAs. If these general-purpose array SSD variations prove not to be a viable stopgap over the longer term, these vendors may need to create dedicated SSAs.

The SSA market is nevertheless distinct. It has matured beyond the early solid-state appliance offerings: the data services provided are equivalent to — and in certain cases, such as data reduction and administration, richer than — those of general-purpose storage arrays. SSAs are now competitive with general-purpose storage arrays in all respects but scale.

The average usable capacity of purchased SSAs is approximately 38TB. Fibre Channel is the preferred connection protocol: 63% of all SSAs attach to servers via Fibre Channel and 33% via the iSCSI protocol, so NFS and Common Internet File System (CIFS) attach are rarely used. Online transaction processing (OLTP), analytics and server virtualization are the top three workloads that customers consider for SSAs, with virtual desktop infrastructure (VDI) the fourth most popular. While the majority of SSA deployments serve a single workload, Gartner is seeing interest in converging multiple workloads on the same product, in many cases enabled by features such as QoS.

Critical Capabilities for Solid-State Arrays

29 August 2014 ID:G00260421

Analyst(s): Valdis Filks, Joseph Unsworth, Arun Chandrasekaran


Solid-state arrays are capable of delivering significant improvements in performance, although high cost perceptions persist. This report analyzes 13 SSAs across six high-impact use cases and quantifies product attractiveness against seven critical capabilities that are important to IT leaders.



Key Findings

  • The most common use cases for solid-state arrays (SSAs) are online transaction processing (OLTP), analytics and virtual desktop infrastructure (VDI), with performance being an inordinately important factor in the selection.
  • SSAs are replacing high-end enterprise arrays configured for performance and are increasingly being used in business and mission-critical environments.
  • Although most organizations today have deployed SSAs in a silo for specific workloads, Gartner inquiries reveal a keen interest to harness them for multiple workloads, given the maturing data services.
  • The price gap between general-purpose storage arrays and SSAs is narrowing, particularly with products that exploit consumer-grade NAND flash/solid-state drives (SSDs) with in-line data reduction features.

Recommendations
  • Mitigate product immaturity concerns by choosing vendors that offer guarantees and unconditional warranties around availability, durability, performance and usable capacity.
  • Choose products that can deliver consistent performance across varying workloads, which are important in your current and future environment.
  • Use data reduction simulation tools to verify data reduction suitability for your data and workload.
  • Implement established SSAs in business and mission-critical environments because reliability has exceeded expectations.

What You Need to Know

Solid-state arrays are rapidly gaining adoption due to the significant performance advantages that customers can gain. Products from late entrants are rapidly catching up, with features on par with general-purpose arrays and established SSAs. The SSA market is divided between pure-play emerging vendors that have built hardware and software capabilities optimized for SSDs, and larger incumbent vendors that are moving aggressively — through acquisitions and/or organic product development — to stay relevant in this important market segment.

Many vendors have chosen to take existing, proven general-purpose disk array operating systems and hardware designs and adapt them into fully dedicated SSAs, which are then marketed and sold as dedicated SSA products. While this is a quick and economical route to market, many existing general-purpose storage arrays were tuned for HDDs, so they were not designed for — and do not lend themselves to — use as SSAs. A bifurcating product line, in which some array models are tuned, maintained and patched for SSDs and others for HDDs, can become a software development and patch consistency nightmare, complicating problem determination and slowing product development for customers.

SSAs are used to consolidate performance workloads, and most customers prefer to use block protocols with these storage systems. The total cost of ownership (TCO) and storage utilization of an SSA are becoming cost-competitive with general-purpose storage arrays, especially when the workloads suit data reduction and the data reduction ratio is approximately 5-to-1. While performance benchmarks are important, many customers are moving beyond them to place a high degree of emphasis on features that enhance SSD endurance and manageability, deliver high availability on par with general-purpose systems, and reduce TCO through data reduction. The performance gap between the leading SSA products is narrowing, which means customers can weigh data services, ecosystem, services and support more heavily during evaluation. Though the cost of SSDs is falling, only through in-line data reduction features can customers maximize the value of their SSD tier. In addition, data reduction features can extend the longevity of the SSD tier by reducing the volume of writes and erasures. Media reliability has not been an issue, due to features such as wear leveling and better error correction methods, which are also making it possible to use consumer-grade NAND flash and PC SSDs in solid-state arrays to lower acquisition costs.
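The break-even arithmetic behind the 5-to-1 figure can be sketched directly. This is a hypothetical illustration, not vendor pricing: only the reduction ratio comes from the text, and the dollar figures below are assumed examples.

```python
# Hypothetical illustration of the TCO break-even point for in-line data
# reduction. Only the 5-to-1 ratio comes from the report; the $/GB figures
# are assumed examples, not quoted vendor prices.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per logical GB stored, after in-line data reduction."""
    return raw_cost_per_gb / reduction_ratio

assumed_ssa_raw = 10.00    # assumed raw $/GB for an SSA
assumed_hdd_array = 2.00   # assumed $/GB for a general-purpose array

# At a 5-to-1 reduction ratio, the SSA reaches price parity with the HDD array.
print(effective_cost_per_gb(assumed_ssa_raw, 5.0))  # 2.0
```

The same arithmetic explains why workloads with poor reduction ratios (e.g., pre-compressed or encrypted data) remain far more expensive on SSAs.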

Customers should recognize that this is a highly dynamic market with a great number of product features and upgrades announced in 2014. You should choose solutions that do not require extensive storage infrastructure changes and redesign, and that are backed by strong services and support with an ability to deliver product enhancements and new features.

Within five years, the expectation of consistent sub-500-μs storage I/O response times will become commonplace, and finer-grained latency differences will become the performance differentiator. Today, however, any SSA vendor that can improve on general-purpose HDD array performance to submillisecond levels — an order-of-magnitude gain — has a valuable product differentiator. When customer and service-level expectations of 150 μs become the norm, a 50 μs to 100 μs performance difference will be a significant criterion in purchasing decisions; today, differences at this level are not important. Software, price, support and data reduction matter more than 0.1 millisecond (ms) performance differences. Nevertheless, most SSA vendors still emphasize and sell “speeds and feeds,” whereas features such as data reduction are more important.

Product rating evaluation criteria considerations include:

  • Product features must have been in general availability by 30 July 2014 to be considered in the vendors’ product scores.
  • Ratings in this Critical Capabilities report should not be compared with other research documents, because the ratings are relative to the products analyzed in this report, not ratings in other documents.
  • Scoring for the seven critical capabilities and six use cases was derived from analyst research throughout the year and recent independent Gartner research on the SSA market. Each vendor responded in detail to a comprehensive, primary research questionnaire administered by the authors. Extensive follow-up interviews were conducted with all participating vendors, and reference checks were conducted with end users. This provided the objective process for considering the vendors’ suitability for the use cases.


Critical Capabilities Use-Case Graphics

Figure 1. Vendors’ Product Scores for Overall Use Case

Source: Gartner (August 2014)

Figure 2. Vendors’ Product Scores for Online Transaction Processing Use Case

Source: Gartner (August 2014)

Figure 3. Vendors’ Product Scores for Server Virtualization Use Case

Source: Gartner (August 2014)

Figure 4. Vendor Product Scores for Virtual Desktop Infrastructure Use Case

Source: Gartner (August 2014)

Figure 5. Vendors’ Product Scores for High-Performance Computing Use Case

Source: Gartner (August 2014)

Figure 6. Vendors’ Product Scores for Analytics Use Case

Source: Gartner (August 2014)


Cisco UCS Invicta Series

Cisco entered the solid-state array market through the acquisition of Whiptail in 2013. Cisco has completed the process of porting the Whiptail OS onto the Unified Computing System (UCS) hardware, with plans to integrate the administration of the array with UCS Manager. The product is also being rebranded as the Cisco UCS Invicta Series, replacing the previous Accela and Invicta product names. The product uses enterprise multilevel cell (eMLC) NAND SSD with in-line deduplication and thin provisioning. The product has support for FC and iSCSI with asynchronous replication capabilities, and recently announced snapshot support.

Hypervisor support is limited to VMware, and integration with other enterprise independent software vendors (ISVs) remains limited at this point. Public performance benchmarks are not widely available, leaving customers to rely on reference checks to verify claims of consistent performance. Microcode updates are disruptive, and the product currently lacks native encryption support. Customers purchasing the UCS and VCE integrated systems have the option of EMC or Cisco SSA and, therefore, will be able to leverage competing options to obtain the best purchase price.

EMC XtremIO and VNX-F
The XtremIO product was designed from inception to use external SSDs efficiently, and currently uses robust, but more costly, enterprise SAS eMLC SSDs to deliver sustained and consistent performance. It has a purpose-built, performance-optimized scale-out architecture that leverages content-based addressing to achieve inherent balance, always-on in-line data reduction, optimal resource utilization in its storage layout, a flash-optimized data protection scheme called XDP, and a very modern, simple-to-use graphical user interface (GUI). XtremIO arrays presently scale out to six X-Bricks, with each X-Brick having dual controllers, providing a total of 120TB of physical flash, measured before the space-saving benefits of thin provisioning, data reduction and space-efficient writable snapshots. The addition of nodes currently requires a system outage, and upgrades to some version 3 features, such as compression, will also require a disruptive upgrade, which EMC will mitigate with professional services to avoid interruptions to hosts and applications. Unlike similar EMC scale-out architectures, such as the Isilon scale-out array — which stores data across nodes and can therefore sustain a node outage — an XtremIO cluster stores each block only once, on a single X-Brick. Consequently, if a single X-Brick suffers a complete outage, such as the simultaneous loss of both of its controllers, the blocks it holds cannot be accessed.
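To make the content-based addressing idea concrete, here is a minimal sketch — not EMC's implementation; the function names, hash choice and node count are assumptions. Placement is derived from a fingerprint of the block's contents, which both spreads unique blocks evenly across nodes and makes duplicate writes land on data that is already stored.

```python
import hashlib

# Minimal sketch of content-based addressing (illustrative names and choices,
# not EMC's implementation): a block's content fingerprint determines which
# node owns it, so identical blocks always map to the same place and unique
# blocks spread evenly across the cluster.
N_NODES = 6  # e.g., a six-X-Brick cluster

def place(block: bytes) -> tuple:
    """Return (fingerprint, owning node) for a block of data."""
    fingerprint = hashlib.sha256(block).hexdigest()
    node = int(fingerprint, 16) % N_NODES  # content, not address, picks the node
    return fingerprint, node

# Two writes of identical content yield the same fingerprint and node, so the
# second write is deduplicated instead of being stored again.
fp1, node1 = place(b"A" * 4096)
fp2, node2 = place(b"A" * 4096)
assert (fp1, node1) == (fp2, node2)
```

The same placement rule explains the availability caution above: each block lives on exactly one node, so a complete outage of that node makes its blocks unreachable.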


The lower-capacity, 46TB VNX-F is based on the existing VNX unified general-purpose disk array. It has postprocess deduplication and a relatively more complex management interface, a consequence of supporting the inherited VNX architecture. We do not expect the VNX-F SSA and the general-purpose VNX software and hardware architectures to diverge. However, because the same software stack and hardware must support two different models/forks — each of which may require different fixes and firmware upgrades — new software features may take longer to become available.

The different operational and administration GUIs of XtremIO, VNX-F and other products may require customers to purchase extra products to obtain a single management interface. XtremIO’s satisfaction guarantees and data service pricing are inclusive and simple, but with the VNX-F, customers need to buy the data protection suite as a separate package.

HP 3PAR StoreServ 7450

The 7450 is based on the HP StoreServ general-purpose array architecture, which leverages HP’s proprietary application-specific integrated circuit (ASIC) and additional DRAM capacity. The design uses a memory-mapping look-up implementation similar to an operating system’s virtual-to-physical RAM translation, which is media-independent and lends itself well to virtual memory-mapped media such as SSDs. This is a particularly compelling attribute: it uses the external SSDs efficiently by reducing the amount of overprovisioning required, and it enables a lean cost structure by leveraging consumer-grade MLC SSDs. A further benefit is maximized SSD endurance, as granular, systemwide wear leveling extends the durability of the less reliable consumer MLC (cMLC) SSD media. Because the media-independent, memory-mapping 3PAR storage software architecture is implemented on both SSD and general-purpose array models, we do not expect a software bifurcation. However, with more model variations, there will be longer testing and qualification periods.
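The wear-leveling behavior described above can be sketched in a few lines. This is a conceptual illustration under assumed names, not HP's ASIC logic: steering each write to the least-erased block keeps wear uniform, which is what lets lower-endurance cMLC media meet enterprise durability targets.

```python
# Conceptual sketch of systemwide wear leveling (assumed names, not HP's ASIC
# logic): each write is steered to the flash block with the fewest erase
# cycles, so wear stays uniform and lower-endurance cMLC media lasts longer.
erase_counts = {block: 0 for block in range(8)}  # 8 hypothetical flash blocks

def next_block_to_write() -> int:
    """Pick the least-worn block for the next program/erase cycle."""
    return min(erase_counts, key=erase_counts.get)

for _ in range(80):  # simulate 80 writes...
    erase_counts[next_block_to_write()] += 1

# ...and the wear is spread perfectly evenly: 10 cycles per block.
assert set(erase_counts.values()) == {10}
```

Real controllers also track hot and cold data and relocate static blocks, but the least-worn-first principle is the core of the endurance argument.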

The system scales to larger capacities than most competitors, with a maximum raw capacity of 460TB when configured with 1.9TB SSDs. The array does not currently have full in-line deduplication and compression, but does exploit existing 3PAR zero block bit pattern matching and thin provisioning to improve storage efficiency. The array performs well in shared environments due to its mature multitenancy and quality of service (QoS) features. However, no file protocols are supported. Pricing of all data services is tied to the general-purpose array 3PAR model and is based on host and capacity, making it complex compared to new entrants. The 3PAR 7450 platform has an extensive and proven compatibility matrix and reliability track record that is supported with a six 9s (99.9999%) high-availability guarantee during the first 12 months.
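A minimal sketch of the zero-block pattern matching mentioned above, under assumed names rather than 3PAR's actual code path: an all-zero block is detected in-line and simply recorded as unallocated thin-provisioned space, so it consumes no flash capacity.

```python
# Minimal sketch of zero-block pattern matching (assumed names, not 3PAR's
# code path): an incoming block that is all zeroes is detected in-line and
# recorded as a hole in thin-provisioned space, consuming no flash capacity.
ZERO_BLOCK = bytes(4096)  # 4KB of zeroes

def write_block(store: dict, lba: int, data: bytes) -> None:
    if data == ZERO_BLOCK:
        store.pop(lba, None)  # zero blocks are tracked as holes, never written
    else:
        store[lba] = data

backing = {}  # physical capacity actually consumed, keyed by LBA
write_block(backing, 0, b"x" * 4096)
write_block(backing, 1, ZERO_BLOCK)
print(len(backing))  # 1 -> only the non-zero block used capacity
```

Overwriting a previously written LBA with zeroes reclaims its capacity, which is how zero detection and thin provisioning interact.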

Huawei OceanStor Series

Huawei entered the solid-state array market in 2011 with the launch of its entry-level array, the OceanStor Dorado2100. Since then, Huawei has launched the OceanStor Dorado5100, the second generation of the 2100, and, most recently, the OceanStor 18800F. Huawei currently uses self-developed SSDs based on single-level cell (SLC) and eMLC modules. The product supports thin provisioning, copy-on-write snapshots, and asynchronous and synchronous replication services. Customers can obtain competitively priced SSAs from Huawei because it has an aggressive sales approach, offering steep discounts off the list price for qualified enterprise customers. Its maintenance and support pricing (as a percentage of capital expenditure [capex]) also tends to be lower than that of many others, backed by a large postsales support team that is highly concentrated in Asia.

Huawei’s R&D efforts in the past have been focused more on the hardware layer and only recently have software features started getting the attention that they deserve. Huawei’s products currently lack data reduction features such as deduplication and compression. Firmware upgrades are disruptive, and native encryption support is lacking in the array. However, Huawei provides performance transparency via SPC benchmarks and it is one of the few unified block and file SSAs.

IBM FlashSystem V840

The FlashSystem family consists of the older 700 series and the newer 800 series SSAs; all models support only block protocols. The FlashSystem 840 has the broader connection options, offering QDR InfiniBand in addition to FC, FCoE and iSCSI. The FlashSystem V840, which adds IBM’s SAN Volume Controller (SVC) as an internal FlashSystem Control Enclosure, can scale to 320TB and inherits the broad SVC compatibility matrix, but supports only FC protocols. The V840 also has richer data services — QoS, compression, thin provisioning, snapshots and replication — which the 840 lacks, along with a simple-to-learn-and-operate administrative GUI. The 840 is designed as a simple, performance-oriented point product, whereas the V840 targets more general-purpose deployments. Both, however, lack deduplication.

The addition of control enclosures to the V840 increases the number of separate products and components, and the SVC layer reduces performance when real-time compression is used, compared with the 840. The SVC control enclosure layer also increases product complexity, as it has separate software levels that need to be maintained and tested across IBM storage product families. In V840-based configurations, customers need to administer and operate two devices — the control enclosure and the storage enclosure — which further increases system complexity and complicates product upgrades and problem determination.

Kaminario K2

The K2 SSA is in its fifth generation — a testament to the product’s resilient and flexible architectural approach, which belies its original heritage as a DRAM appliance. Considering that genesis, Kaminario prides itself on K2’s strong performance, which has been publicly scrutinized and verified via the Storage Performance Council’s SPC-1 and SPC-2 benchmarks. In its latest generation, Kaminario has also added scale-up to its scale-out architecture, along with a more comprehensive suite of data management services. The array supports only FC and iSCSI block protocols and uses cMLC SATA SSDs; its storage efficiency features enable 30TB per rack unit, and the system can scale both up and out.

The latest generation now features in-line compression, selectable in-line deduplication and thin provisioning. Pricing follows the customer-oriented approach of including all options in the base array price, and Kaminario guarantees an effective capacity average price of $2 per GB, lending credence to its storage efficiency claims. The product is reinforced with a seven-year unconditional warranty on SSD endurance, which mitigates customers’ concerns about SSD reliability. However, replication is not yet available, and QoS performance features are limited, although the system does have sufficient reporting capabilities. Resiliency features, the scale-out design, and very good nondisruptive software and system firmware upgrade capabilities show that flexibility and scalability are fundamental to the product’s design.

NetApp EF Series

NetApp’s EF Series is an all-SSD version of the E-Series, a product line that NetApp inherited as part of the Engenio acquisition. There are two models in the EF product line — the EF540, which was launched in early 2013, and the EF550, which was launched in late 2013 with an SSD hardware refresh. The EF Series runs the SANtricity operating system and has its own management GUI. The product supports FC, iSCSI and InfiniBand. NetApp has made changes to the software to monitor SSD wear life and recently expanded the scalable raw capacity to 192TB. The EF Series does not support any data reduction features. Existing NetApp OnCommand suite customers cite the need for improvement in the SANtricity management console. Given the focus of the EF Series on high-bandwidth workloads, InfiniBand had been a prominent interface, but FC implementations have become the predominant interface as end-user acceptance of the product has broadened. The long-term viability of the EF Series as a product line will remain in question, with NetApp’s all-new FlashRay set for launch toward the end of the year with potentially better data services and manageability.

Nimbus Data Gemini

Nimbus has developed a purpose-built unified array from the ground up, and it features the broadest protocol support in the industry — most notably including InfiniBand. The Gemini array’s versatility stems from its Halo OS, which is the center of its data services and multiprotocol support and offers a wide suite of data services. However, client feedback on the full depth of these capabilities has been mixed and requires further diligence; in particular, we recommend verifying the flexible and selectable data reduction features in a proof of concept.

These arrays are cost-effective, given the use of advanced cMLC NAND designed directly into hot-swappable modules for the array. The use of cMLC NAND and a parallel memory architecture delivers resiliency across a wide spectrum of capacities, ranging from 3TB to 48TB raw capacity. This approach not only provides considerable density in the 2U enclosures, but also consumes power efficiently, making it attractive from an opex perspective. The recently available scale-out Gemini X-series features redundant directors that enable up to 10 nodes, reaching 960TB raw capacity. While the user interface and manageability are adequate, QoS features will need to evolve for this scale-out architecture.

Pure Storage FA Series

Pure Storage has focused on creating a purpose-built SSD-optimized storage array and controller software, which uses low-cost/low-capacity PC SSD cMLC media. Pure Storage is on its third-generation product, built on a foundation of granular block in-line deduplication and compression at a 512-byte level that allows compelling data reduction in workloads with various block sizes. Pure Storage has a good reputation for reliability, ease of use and extensive storage data services that now feature asynchronous replication. Overall, the arrays have relatively low raw capacities. The second-generation arrays, such as the 35TB FA-420, and the current, more competitive third-generation 70TB FA-450 are based on Dell hardware. Only FC and iSCSI block protocols are supported, with 16 Gbps FC on the FA-450 and the slower 8 Gbps FC on the FA-420. Traditional QoS features are not available, but consistent performance is provided by internal timing that skews I/Os toward the better-performing SSDs. All data services are included in the base price of the array; product satisfaction guarantees are provided, and controller investment protection is offered via the Forever Flash program, provided support and maintenance contracts are maintained.

Skyera skyHawk

Skyera designs its own controllers and software, and has developed its own wear-leveling algorithms to enhance and improve cMLC NAND reliability. The skyHawk is a relatively new entry-level iSCSI and NFS SSA with extensive data services, but only a single power supply and controller. The dense packaging and exploitation of the most advanced consumer MLC NAND technology make the product the most aggressively priced on a usable capacity basis, with prices starting at under $3 per GB of formatted capacity before data reduction. In-line compression, which is performed in hardware rather than at the system software level, further increases usable capacity. Array reporting is oriented toward internal metrics such as logical unit number (LUN) input/output operations per second (IOPS), bandwidth and latency. skyHawk also offers sophisticated array partitioning QoS features.

Firmware upgrades require a reboot and therefore an outage. Overall, from a hardware and single-point-of-failure perspective, this is not an enterprise data center array unless two or more Skyera arrays are mirrored or striped using a higher abstraction layer. Skyera is working with partners such as DataCore Software on integration certification to mitigate some of these platform compatibility and high-availability challenges. A dual-controller skyEagle array, featuring 326TB of raw capacity in 1U, is in development; however, given the product delays with skyHawk, the already-announced skyEagle is likely to be delayed as well.

SolidFire SF Series

SolidFire sells scale-out solid-state arrays with a primary focus on service providers and large-enterprise customers. By leveraging external cMLC-based PC SSDs with in-line data reduction features, SolidFire is able to deliver competitive price/performance in a scale-out architecture. SolidFire’s product is differentiated in the marketplace through QoS, where applications are delivered with guaranteed IOPS. SolidFire’s QoS feature provides the ability to set minimum, maximum and burst performance settings, which enables enterprises and service providers to offer differentiated services. The product has close integration with common hypervisors, and the REST-based API support is commendable for a young company. The vendor offers broad support for cloud management platforms such as OpenStack, CloudStack and VMware, as well as support for public cloud APIs such as S3 and Swift. The product relies on a distributed replication algorithm rather than redundant array of independent disks (RAID) for data protection, which reduces rebuild times and creates a self-healing infrastructure.
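The minimum/maximum/burst model described above can be made concrete with a small sketch. This is a hypothetical illustration of how such a policy might be enforced, not SolidFire's implementation; all names and figures are illustrative.

```python
from dataclasses import dataclass

@dataclass
class QosPolicy:
    min_iops: int    # floor guaranteed to the volume even under contention
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling, usable while burst credit remains

def allowed_iops(policy: QosPolicy, demand: int,
                 contended: bool, has_burst_credit: bool) -> int:
    """Clamp a volume's demanded IOPS to its QoS policy.

    Under contention, every volume is throttled toward its guaranteed
    minimum; otherwise demand is served up to max_iops, or up to
    burst_iops while the volume holds accumulated burst credit.
    """
    if contended:
        return min(demand, policy.min_iops)
    ceiling = policy.burst_iops if has_burst_credit else policy.max_iops
    return min(demand, ceiling)
```

For example, a policy of (min=500, max=15,000, burst=20,000) would serve a demand of 18,000 IOPS in full only while burst credit lasts, cap it at 15,000 sustained, and guarantee 500 under contention. This per-volume floor is what lets service providers sell differentiated tiers from one shared cluster.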

The product's focus on enterprise private clouds and integration with traditional applications is fairly nascent and needs further development. Most current deployments are iSCSI-based; FC support has only recently been introduced.

Violin Memory 6000 Series All Flash Array

Violin has its own unique architecture based on its NAND chip-level expertise, which it uses in its own Peripheral Component Interconnect Express (PCIe)-based memory module configurations that are organized and aggregated to enable a relatively dense array with strong performance and guaranteed sub-500-μs latency. Violin is one of the most cost-effective vendors on a raw $ per GB basis due to its use of cMLC technology at advanced process geometries, allowing raw capacity of up to 70TB in 3U and scaling up to 280TB. Violin has strong block and file support and good ecosystem interoperability.

Violin has only recently introduced (in its June Concerto 7000 announcement) a more cohesive suite of data management features, which can be added to an existing 6000 series via an upgrade that involves a service disruption. The Concerto enhancements provide greater business continuity via remote asynchronous and synchronous replication, along with mirroring and clones. Although Violin did introduce thin provisioning and snapshots, data reduction was introduced only on August 19 and is not considered in the ratings. In June 2014, Violin also announced its Windows Flash Array, featuring tight integration with Microsoft Windows protocols and services that include data reduction, but this was not included in the rating. Pricing for data services is not fully inclusive; certain features, such as mirroring and replication, carry additional charges.


HDD-based general-purpose storage arrays have stagnated in performance compared with the order-of-magnitude performance improvements of CPUs within servers. SSAs, which use SSDs instead of HDDs, have addressed this imbalance by improving storage IOPS and latency by one, and sometimes two, orders of magnitude. While SSDs themselves are not new and have been available for decades, SSAs are new external storage offerings that have been specifically designed or marketed to exploit the reduced cost and improved performance of NAND SSDs. Customers are mainly concerned with latency (response time), but SSAs also improve bandwidth (throughput). The reduced latency has also enabled new technologies such as in-line primary data reduction (deduplication, compression or both), which were previously restricted by the mechanical constraints of HDDs. The reduced environmental requirements of SSAs, such as power and cooling, are incidental but important advantages over general-purpose arrays and other HDD-based storage systems.

Product/Service Class Definition

The following description and criteria classify solid-state array architectures by their externally visible characteristics rather than vendor claims or other nonproduct criteria that may be influenced by fads in the solid-state array storage market.

Solid-State Array

The SSA category is a new subcategory of the broader external controller-based (ECB) storage market. SSAs are scalable, dedicated solutions based solely on solid-state semiconductor technology for data storage that cannot be configured with HDD technology at any time. The SSA category is distinct from SSD-only racks within ECB storage arrays. An SSA must be a stand-alone product denoted with a specific name and model number, which typically (but not always) includes an operating system and data management software optimized for solid-state technology. To be considered a solid-state array, the storage software management layer should enable most, if not all, of the following benefits: high availability, enhanced-capacity efficiency (perhaps through thin provisioning, compression or data deduplication), data management, automated tiering within SSD technologies and, perhaps, other advanced software capabilities, such as application- and OS-specific acceleration based on the unique workload requirements of the data type being processed.

Scale-Up Architectures

  • Front-end connectivity, internal bandwidth and back-end capacity scale independently of each other.
  • Logical volumes, files or objects are fragmented and spread across user-defined collections such as solid-state pools, groups or RAID sets.
  • Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane, and/or interconnect constraints.

Scale-Out Architectures

  • Capacity, performance, throughput and connectivity scale with the number of nodes in the system.
  • Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failures and improve performance.
  • Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.

Unified Architectures

  • These can simultaneously support one or more block, file and/or object protocols, such as FC, iSCSI, NFS, SMB (aka CIFS), FCoE and InfiniBand.
  • Both gateway and integrated data flow implementations are included.
  • These can be implemented as scale-up or scale-out arrays.

Gateway implementations provision block storage to gateways implementing NAS and object storage protocols. Gateway-style implementations run separate NAS and SAN microcode loads on either virtualized or physical servers and, consequently, have different thin provisioning, auto-tiering, snapshot and remote copy features that are not interoperable. By contrast, integrated or unified storage implementations use the same primitives independent of protocol, which enables them to create snapshots that span both SAN and NAS storage and to dynamically allocate server cycles, bandwidth and cache based on QoS algorithms and/or policies.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture's strengths and weaknesses and an understanding of workload requirements (see Table 1).

Table 1. Solid-State Array Architecture

Scale-Up

Strengths:
  • Mature architectures: reliable and cost-competitive
  • Large ecosystems
  • Host connections and back-end capacity can be upgraded independently
  • May offer shorter recovery point objectives (RPOs) over asynchronous distances

Weaknesses:
  • Performance and bandwidth do not scale with capacity
  • Limited compute power can make a high impact
  • Electronics failures and microcode updates may be high-impact events

Scale-Out

Strengths:
  • IOPS and GB/sec scale with capacity
  • Nondisruptive load balancing
  • Greater fault tolerance than scale-up architectures
  • Use of commodity components

Weaknesses:
  • High electronics costs relative to back-end storage costs

Unified

Strengths:
  • Maximal deployment flexibility
  • Comprehensive storage efficiency features

Weaknesses:
  • Performance may vary by protocol (block versus file)

Source: Gartner (August 2014)

Critical Capabilities Definition


Ecosystem

This refers to the ability of the platform to support multiple protocols, operating systems, third-party ISV applications, APIs and multivendor hypervisors.


Manageability

This refers to the automation, management, monitoring, and reporting tools and programs supported by the platform.

These tools and programs can include single-pane management consoles, and monitoring and reporting tools designed to help support personnel seamlessly manage systems and monitor system usage and efficiency. They can also be used to anticipate and correct system alarms and fault conditions before, or soon after, they occur.

Multitenancy and Security

This refers to the ability of a storage system to support a diverse variety of workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.


Performance

This is the collective term often used to describe IOPS, bandwidth (MB/second) and response times (milliseconds per I/O) that are visible to attached servers.


RAS

Reliability, availability and serviceability (RAS) refers to a design philosophy that consistently delivers high availability by building systems with reliable components and “de-rating” components to increase their mean times between failures (MTBFs).

Systems are designed to tolerate marginal components, with hardware and microcode designs that minimize the number of critical failure modes in the system, serviceability features that enable nondisruptive microcode updates, diagnostics that minimize human error during troubleshooting, and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault isolation techniques, built-in protection against data corruption, and other techniques (such as snapshots and replication; see Note 1) to meet customers' recovery point objectives (RPOs) and recovery time objectives (RTOs).


Scalability

This refers to the ability of the storage system to grow not just capacity, but also performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs (see Note 2).

Storage Efficiency

This refers to the ability of the platform to support storage efficiency technologies, such as compression, deduplication and thin provisioning, to improve utilization rates while reducing storage acquisition and ownership costs.
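Of these technologies, thin provisioning is the simplest to illustrate: logical capacity is promised up front, but physical media is consumed only when data is actually written. The sketch below is a minimal, hypothetical model of that allocate-on-first-write idea; the class and method names are illustrative, not any vendor's implementation.

```python
class ThinVolume:
    """Minimal thin-provisioning sketch: logical capacity is promised
    up front, but physical blocks are allocated only on first write."""

    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks
        self.allocated = set()  # logical blocks backed by physical media

    def write(self, block: int) -> None:
        if not 0 <= block < self.logical_blocks:
            raise IndexError("write beyond provisioned capacity")
        self.allocated.add(block)  # allocate lazily, on first touch only

    def utilization(self) -> float:
        """Fraction of promised capacity actually consuming media."""
        return len(self.allocated) / self.logical_blocks

vol = ThinVolume(logical_blocks=1000)
for b in range(100):
    vol.write(b)
print(vol.utilization())  # 0.1 (only 10% of promised capacity is backed)
```

This is why thin-provisioned arrays can be oversubscribed: many volumes can promise more aggregate logical capacity than the physical media installed, as long as actual writes stay below it.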

Use Cases


Overall

This is an average of the following use cases. Please refer to Table 2 for the weightings of the use cases.

Online Transaction Processing

This use case is closely affiliated with business-critical applications, such as database management systems (DBMSs).

DBMSs require 24/7 availability and subsecond transaction response times — hence, the greatest emphasis is on performance and RAS features. Manageability and storage efficiency are important because they enable the storage system to scale with data growth while staying within budget constraints.

Server Virtualization

This use case encompasses business-critical applications, back-office and batch workloads, and development.

The need to deliver low I/O response times to large numbers of virtual machines or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, heavily weights performance and storage efficiency, followed closely by RAS.

High-Performance Computing

High-performance computing (HPC) clusters can be made of large numbers of servers and storage arrays, which together deliver high compute densities and aggregated throughput.

Commercial HPC environments are characterized by the need for high throughput and parallel read-and-write access to large volumes of data. Performance, scalability and RAS are important considerations for this use case.


Analytics

This use case applies to all analytic applications that are packaged or provide business intelligence (BI) capabilities for a particular domain or business problem.

It does not apply only to storage consumed by big data applications using map/reduce technologies (see definition in “Hype Cycle for Advanced Analytics and Data Science, 2014”).

Virtual Desktop Infrastructure

Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server.

VDI is a variation on the client/server computing model, sometimes referred to as server-based computing. Performance and storage efficiency (in-line data reduction) features are heavily weighted for this use case, for which solid-state arrays are emerging as a popular alternative.

Inclusion Criteria

  • It must be a self-contained, SSD-only system that has a dedicated model name and model number.
  • The SSD-only system must be exactly that: it must be sold with 100% SSDs and can never be reconfigured, expanded or upgraded into a hybrid or general-purpose SSD-and-HDD storage array, whether with HDDs in expansion trays, through a vendor-specific upgrade or product exclusion process, or via customer customization.
  • The vendor must sell its product as a stand-alone product, without the requirement to bundle it with other vendors' storage products in order for the product to be implemented in production.
  • Vendors must be able to provide at least five references that Gartner can successfully interview. At least one reference must be provided from each geographic market (Asia/Pacific, EMEA and North America), or from the two markets in which the vendor has a presence.
  • The vendor must provide an enterprise-class support and maintenance service, offering 24/7 customer support (including phone support). This can be provided via other service organizations or channel partners.
  • The company must have established notable market presence, as demonstrated by the amount of terabytes sold, the number of clients or significant revenue.
  • The product and a service capability must be available in at least two of the following three markets (Asia/Pacific, EMEA and North America) by either direct or channel sales.

The solid-state arrays evaluated in this research include scale-up, scale-out and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, forecast growth rates and asset management strategies.

Although this SSA critical capabilities research represents vendors whose dedicated systems meet our inclusion criteria, ultimately, it is the application workload that governs which solutions should be considered, regardless of any criteria. The following vendors and products were considered for this research but did not meet the inclusion criteria, despite offering SSD-only configuration options to existing products. The following vendors may still warrant investigation based on application workload needs for their SSD-only offerings: American Megatrends, Dell Compellent, EMC VMAX, Fusion-io ION (recently acquired by SanDisk), Fujitsu Eternus DX200F, Hitachi Unified Storage VM, IBM DS8000, NetApp FAS, Oracle ZFS and Tegile T-series.

Table 2. Weighting for Critical Capabilities in Use Cases
Critical Capabilities Overall Online Transaction Processing Server Virtualization High-Performance Computing Analytics Virtual Desktop Infrastructure
Performance 29.0% 30.0% 20.0% 42.0% 25.0% 30.0%
Storage Efficiency 16.0% 15.0% 20.0% 5.0% 15.0% 25.0%
RAS 17.0% 20.0% 15.0% 15.0% 20.0% 15.0%
Scalability 11.0% 8.0% 10.0% 15.0% 18.0% 4.0%
Ecosystem 7.0% 7.0% 10.0% 3.0% 5.0% 8.0%
Multitenancy and Security 6.0% 5.0% 5.0% 10.0% 6.0% 5.0%
Manageability 14.0% 15.0% 20.0% 10.0% 11.0% 13.0%
Total 100.0% 100.0% 100.0% 100.0% 100.0% 100.0%
As of August 2014

Source: Gartner (August 2014)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighted in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

Each product or service that meets our inclusion criteria has been evaluated on several critical capabilities on a scale from 1.0 (lowest ranking) to 5.0 (highest ranking). Ratings are listed in Table 3, below.

Table 3. Product/Service Rating on Critical Capabilities
Product or Service Ratings HP 3PAR StoreServ 7450 Violin Memory 6000 Series All Flash Array Huawei OceanStor Series NetApp EF Series EMC VNX-F Pure Storage FA Series Nimbus Data Gemini IBM FlashSystem V840 Cisco UCS Invicta Series Kaminario K2 SolidFire SF Series Skyera skyHawk EMC XtremIO
Performance 3.1 3.7 3.1 3.2 3.2 3.3 3.4 3.5 3.1 3.7 3.3 3.3 3.6
Storage Efficiency 3.2 2.8 1.9 2.1 2.5 4.2 3.3 3.2 2.8 3.4 3.7 2.9 3.4
RAS 3.5 3.2 3.1 3.1 3.2 3.4 3.4 3.4 2.9 3.4 3.4 2.4 3.1
Scalability 3.5 3.4 2.8 3.1 3.2 2.8 3.2 3.1 3.0 3.4 4.0 3.2 3.6
Ecosystem 3.8 3.1 2.7 2.6 3.9 3.2 3.0 3.3 2.6 3.0 2.9 2.5 3.0
Multitenancy and Security 3.6 2.9 2.6 3.0 3.2 3.3 3.2 3.2 2.6 2.9 3.5 2.6 3.0
Manageability 3.2 2.8 2.7 3.1 3.0 3.4 2.9 2.9 2.4 2.9 3.2 2.4 3.0
As of August 2014

Source: Gartner (August 2014)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.
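The weighted-sum calculation can be reproduced in a few lines. The dictionaries below simply restate the Overall weightings from Table 2 and the HP 3PAR StoreServ 7450 ratings from Table 3; the same function applies to any product column and use-case row.

```python
# Overall use-case weightings (Table 2).
weights = {"Performance": 0.29, "Storage Efficiency": 0.16, "RAS": 0.17,
           "Scalability": 0.11, "Ecosystem": 0.07,
           "Multitenancy and Security": 0.06, "Manageability": 0.14}

# HP 3PAR StoreServ 7450 capability ratings (Table 3).
hp_3par = {"Performance": 3.1, "Storage Efficiency": 3.2, "RAS": 3.5,
           "Scalability": 3.5, "Ecosystem": 3.8,
           "Multitenancy and Security": 3.6, "Manageability": 3.2}

def use_case_score(weights: dict, ratings: dict) -> float:
    """Multiply each capability rating by its use-case weighting and sum."""
    return round(sum(weights[c] * ratings[c] for c in weights), 2)

print(use_case_score(weights, hp_3par))  # 3.32, matching Table 4's Overall row
```

This also shows why products with very different capability profiles can land on similar scores: a weakness in one capability can be offset by strength in a heavily weighted one, which is why the per-capability ratings in Table 3 matter more than the composite when a workload stresses a single dimension.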

Table 4. Product Score in Use Cases
Use Cases HP 3PAR StoreServ 7450 Violin Memory 6000 Series All Flash Array Huawei OceanStor Series NetApp EF Series EMC VNX-F Pure Storage FA Series Nimbus Data Gemini IBM FlashSystem V840 Cisco UCS Invicta Series Kaminario K2 SolidFire SF Series Skyera skyHawk EMC XtremIO
Overall 3.32 3.22 2.76 2.93 3.11 3.41 3.25 3.28 2.84 3.36 3.43 2.85 3.32
Online Transaction Processing 3.32 3.22 2.78 2.94 3.11 3.42 3.26 3.28 2.84 3.36 3.40 2.83 3.31
Server Virtualization 3.34 3.14 2.69 2.87 3.09 3.46 3.21 3.23 2.79 3.30 3.42 2.78 3.28
High-Performance Computing 3.31 3.35 2.89 3.07 3.17 3.29 3.28 3.31 2.91 3.41 3.44 2.95 3.38
Analytics 3.34 3.23 2.77 2.94 3.11 3.37 3.26 3.27 2.87 3.37 3.49 2.86 3.34
Virtual Desktop Infrastructure 3.30 3.18 2.68 2.84 3.06 3.53 3.26 3.29 2.84 3.37 3.41 2.85 3.32
As of August 2014

Source: Gartner (August 2014)