Life Beyond Hyperconverged and more...


Life Beyond Hyperconverged

Well, I certainly gave the hornet’s nest a good, healthy smack with my recent post (“Ten Reasons Why VMware Is Leading The Hyperconverged Industry”).

Never underestimate the power of a well-written blog post to shake things up :)

In addition to hearing from dozens of enraged Nutanix employees, the usual pundits are now weighing in with their perspectives as well.  Everyone is entitled to their opinions, and there seems to be no shortage of those.

I do find it interesting, though, that no one has yet attempted to refute any of the facts behind the ten arguments I presented.  That's typical in these situations: lots of passionate emotion, very little discussion of the underlying facts.

The good news: I did have a chance to have a few great conversations with intelligent, non-partisan folks who said they were thinking differently after reading my thoughts.

Thinking differently is always a good thing :)

The observation: most strategic technologies in IT put their users on a pathway to something bigger and better.  That’s what makes them strategic, no?

The basic question: if hyperconverged is truly “strategic”, what bigger and better world should it lead to?   Or, is it an end unto itself in its present form, and thus not strategic?

Clearly, this is a great question, and worth a blog post or two to discuss …

The Fuss Over Hyperconverged

Sometimes new buzz-worthy IT concepts resonate and endure, sometimes they don’t. This one appears to be doing quite well at present, and I think I understand why.

The first wave of hyperconverged was presented as an appliance that didn’t need an external storage array.  The value proposition was heavily weighted towards “convenient consumption”.

Bring it in, rack it up, connect the network and presto!  Given the typical complex state of affairs in standing up IT infrastructure, it seems almost as easy as calling up your service provider and having them fire up a few new instances on your behalf.  And I’ve learned to never underestimate the appeal of convenient consumption.  VMware's EVO:RAIL hyperconverged offering clearly targets this model.

I am arguing that we’re moving into the next wave of how people see hyperconverged.

Yes, there are still IT shops that prefer the “convenient consumption” benefit, but a growing number now see the potential to do more with the technology: both now and in the future.

As a result, the criteria change in their minds — weighted less toward “the box” and immediate gratification, and more toward “the strategy” of how their short-term choices play into the broader evolution of their IT landscape.

How Strategic Technology Usually Plays Out In Enterprise IT

After decades of being an armchair observer of enterprise IT, I’d argue that a common adoption pattern is the “two-fer”.  As in "two for the price of one".

The new technology is brought in to ostensibly solve an immediate short-term requirement with obvious justification.  But at the same time, there is full awareness that this same technology has the potential to play a broader and more transformative role in changing the way things are done in IT.

A familiar example?

VMware virtualization got its start by solving an immediate data center problem: rampant server overprovisioning.  The pitch at the time was dead simple: save money with virtualization.  However, over time, people realized that — once virtualized — vSphere could fundamentally change the way IT was done from an operational perspective: provisioning, management and more.  And that was a really big deal.

The answer to a tactical problem built the foundation for a great strategic outcome.  And it wasn't dumb luck on the part of IT shops, either.  They saw what we saw.

Another example from the storage world?

When flash was introduced, it was seen as the solution to a very narrow but very demanding set of workloads, e.g. databases with very high transactional rates — a tactical solution to a specific pain point.

As prices dropped, many shops have now decided on a ‘flash first’ strategy — use it just about anywhere that performance could potentially be an issue.  

The result was that users got spectacular performance, and IT could get out of the storage-performance-problem-resolution business — arguably transformative in its own way.

Words Fail Me

So if we’re going to think of hyperconverged as one of these “two-fers” (tactical today, strategic tomorrow) what does the longer term picture look like?

At VMware, we describe future state data center architectures as “software defined”, e.g. the software-defined data center or SDDC.  Other labels also get used around similar concepts: devops, cloud, software-defined infrastructure, etc.

I’m not here to debate labels, though.

Why?  The core technology ideas behind each are similar: heavy use of commodity technologies, everything programmable and thus able to be automated by software, driven by application-centric policies, dynamic and flexible, ease of consumption for end users, an enterprise-class operational model, etc.

Here’s the observation: with this perspective, well-considered hyperconverged solutions can easily deliver a “two-fer” for enterprise IT.

The tactical problem they solve is a cost-effective solution for an immediate infrastructure requirement.  The strategic benefit is that they can create a pathway to SDDC or whatever you’d prefer to call your next-generation environment.

But if we’re going to want to exploit that second part, our evaluation criteria may have to evolve.

Getting To SDDC

If you’ve ever sat down with a customer responsible for a large, complex enterprise IT environment, pitching the attractiveness of something like SDDC isn’t hard.  On paper, it’s easy to get agreement that it’s a great future vision of how IT ought to work.

The fun part starts in putting in place a realistic plan to get there.

Not surprisingly, there are a *lot* of moving parts that are highly resistant to change.  Legacy investments and legacy vendors.  Operational models and processes that have existed for perhaps decades.  Entrenched organizations complete with factions, tribes and internal politics.

And, of course, precious little time between firefighting episodes to actually work on anything.

Much as we’d like to believe that all it takes is magic software and a quick implementation plan, the reality is usually quite different.

Can Hyperconverged Be A Short-Cut To A Better Place?

Let’s say you’d like to introduce SDDC-like concepts quickly into your data center, but do so with a minimum of cost, hassle and inevitable organizational impact.  It’d be hard to imagine an easier or more powerful way of doing so than standing up a modest vSphere+VSAN hyperconverged environment.  Or maybe an EVO:RAIL if you're looking for something even simpler.  Same basic software technology in both.

You’d get the very best in hyperconverged software technology: vSphere, VSAN, vRealize Automation Suite, vRealize Operations, NSX, etc. etc.   You’d have vSphere admins on your staff who didn’t need a long learning curve.   You’d already have support relationships in place, etc.

You could evaluate for yourself — and quickly — what the new technology can offer.  With almost no downside.
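If you're curious how small that first step really is, here's a minimal sketch using pyVmomi, VMware's open-source Python SDK for the vSphere API.  The vCenter address, credentials and cluster name are illustrative placeholders, and it assumes a cluster whose hosts already have local flash and disk available.

```python
# Minimal sketch (assumptions noted above): enable VSAN on an existing
# vSphere cluster via pyVmomi, letting it auto-claim local flash and disks.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

# Find the target cluster in the vCenter inventory.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Lab-HCI")

# Turn on VSAN for the cluster and auto-claim eligible local storage.
vsan_cfg = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
        autoClaimStorage=True))
spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_cfg)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
# ... wait on the task; a vsanDatastore then appears across the cluster.
Disconnect(si)
```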

And, here’s the best part: if you decided you liked what you saw, the entire environment could be seen as a working scale model of what you’d eventually like to build:

- proven functionality, processes and tools that can scale to truly large enterprises
- the ability to accommodate and leverage existing infrastructure choices (servers, storage, network)
- the ability to extend in new directions, like OpenStack, containers or whatever new thing comes along next that looks attractive
- and with well-supported interoperability between all essential components

Let’s Turn It Around

OK, so I get involved in a lot of VSAN sales calls.  Let me share something interesting that I’m finding as a result.

Right now, it’s breaking 50/50 between people who are clearly looking for a tactical solution, and people who clearly want to take the first step towards a strategic outcome along the lines I’ve described.

You’ll hear things like “we want to change the way we do IT”. Or “we’re trying to introduce a disruptive model into our environment”. Or perhaps “this is part of our cloud vision”.

Yes, there are the inevitable feature/function questions as you’d expect.  But it’s clear to this crowd that they’re not just looking for a like-for-like replacement for a traditional storage array.  No, they see something much more attractive just over the horizon.

They probably don’t use the word “hyperconverged” to describe their first step.  I think that’s partly because the term has been unfairly chained to a specific piece of hardware and its associated consumption model, and that’s not what they’re ultimately after.

They’re looking for a software model that fundamentally changes the way IT is done.

That being said, some of this group are still interested in the ease-of-consumption that comes with an appliance model such as EVO:RAIL — but it’s clear that’s not their desired end state.

Where Does That Leave Us?

I think one core question remains — how will the marketplace come to define “hyperconverged”?

Will it continue to be associated with a specific vendor appliance, focusing on ease-of-consumption?

Or will more people realize that hyperconverged is essentially software, representative of a large-scale design pattern made easy to consume, and thus can be an attractive short-cut to a much nicer place?

I know which outcome I’m betting on.

-------------

Going To EMC World? We'd Like To Have A Chat!

It's that time of year again -- EMC World in Las Vegas May 4-7.  Only a month away!

In my world, EMC World is absolutely the best show to talk storage in all its different aspects.  You meet some pretty amazing people there.

For the last two years, we've held small non-disclosure sessions with select folks at the show.  

We share some of what we're working on, and we get incredibly valuable feedback on key issues we're debating internally.

We'd like to do it again ... if you're up for it!

So, here's the deal -- if you're way into virtualization and software-defined storage, maybe you'd like to join us?  Space is very limited, though.  Previous attendees get first priority.

This year, we'll be talking about:

  • the current VSAN roadmap 2016 and beyond
  • new proposals on how we manage VSAN ReadyNodes and the VSAN HCL
  • a new software-defined model for data protection and management
  • plans for deeper integration with vCloud Air
  • and a discussion around "cloud native" applications, time permitting

We'll be holding three sessions in a suite at the Venetian, nearby:

  • Monday, May 4th 3-5 pm (partners only, please)
  • Tuesday, May 5th 3-5 pm (end users and partners)
  • Wednesday, May 6th 3-5 pm (end users and partners)

If this sounds like something you'd be interested in, please drop me an email at chollis@vmware.com.  In your email, any information you could provide as to what you do, why you're interested, etc. would be very helpful.

Thanks!

-- Chuck


Ten Reasons Why VMware Is Leading The Hyperconverged Industry

Those of you who have followed me over the years know that I’m not shy when it comes to a good competitive dust-up.  I’m OK with the usual puffery and slightly exaggerated claims.  All part of the fun.

I’m not OK when I believe the claims are misleading.

One startup is working very hard to convince everyone that they (and they alone) are leading the current trend in HCI — hyperconverged infrastructure.  One of their spokespeople even published a thoughtful piece listing the ten reasons why they thought they deserved the “leader” mantle.

While I admire their bravado, I felt the piece did a disservice to both the industry and to customers.  I thought it grossly misrepresented both the current and future state of the market.  

Perhaps most importantly, there was little talk about what mattered most to customers.

So — while staying positive — I’d like to share my "ten reasons" why I think VMware is leading — and will continue to lead — the hyperconverged marketplace.

Why Hyperconverged?

If we’re going to have a polite argument, we ought to at least define what we’re discussing.

The first wave was “converged” infrastructure: traditional compute, storage and network products engineered to be consumed and supported as a single block of infrastructure.  

The fundamental appeal was a drastically simplified customer experience, which gave IT time and resources to go do other things.  VCE Vblocks established the market and validated the model, with several others following suit.  As we stand today, converged infrastructure is a successful and proven model that continues to grow.

Meanwhile, a few enterprising startups created a software-based storage layer that eliminated the need for an external storage array, and dubbed themselves “hyperconverged”.    

Hence our discussion today ...

#1 — Hyperconverged Is About Software, Not Hardware

Hyperconverged solutions derive their value from the hypervisor being able to support all infrastructure functions in software, and without the need for separate dedicated hardware, such as a storage array or fibre channel switch.  All players in this segment would mostly agree with this statement.

If hyperconverged is really about software (and not hardware), what’s the core software technology in just about every hyperconverged product available today?

VMware vSphere.

It's ubiquitous in the data center, which explains why it's ubiquitous in the hyperconverged market.  A key part of the story: vSphere implements the most popular infrastructure management APIs in use today.

The harsh market reality is that there’s just not a lot of demand for non-vSphere-based hyperconverged solutions.

IT professionals know vSphere -- it's tried, tested and proven -- and that's what they want.

If we could convince a few industry analysts to focus on hyperconverged software vs. counting largely similar boxes with different vendor labels on them, their picture of the landscape would be quite different.

As far as claims to "market leadership" — without the power and presence of the VMware vSphere platform, there wouldn't be a converged or hyperconverged market to argue about.

#2 — Built-In Is Better Than Bolted-On

If the value proposition of hyperconverged derives from integrating infrastructure in software, it’s reasonable to argue that deeper, structural integration will be more valuable than various software assemblages that lack this key attribute.

There shouldn’t be a need for a separate management interface.

There shouldn’t be a need for a separate storage layer that runs as a guest VM, consuming precious dedicated resources and demanding attention.

There shouldn’t be a need for multiple installation / patching / upgrade processes. 

There shouldn’t be a need to get support from two or more vendors.

And so on. 

Within VMware, we use the term “hypervisor converged” to differentiate this important architectural difference between built-in vs. bolted-on.  

I'll use vSphere + VSAN as my all-software example here.  One simple, integrated environment.  One management experience.  One upgrade process.  One source of support.

If our discussion of "market leadership" includes any notion of creating a simpler experience for users, I would argue that it’s hard to compete with features that are simple extensions of the hypervisor.

#3 — Having Lots Of Hardware Choices Is A Really Good Thing

If hyperconverged is really about software, why are many so paradoxically focused on “the box”?  It’s nothing more than a convenient consumption option for someone who wants a fast time-to-value over other considerations.

Ideally, hyperconverged -- as a concept -- shouldn’t be welded to specific hardware.

For those that want a convenient consumption option such as a prefab appliance with a locked-down config, great!  That's certainly useful to a certain segment of the market.

But others might want a bit more flexibility, with a well-defined starting point.  Yet another useful option.

And for those that really want to roll their own, there's a list of what's supported, augmented by tools to help you design and size a config that's right for you.

There's a vast list of reasons why more hardware choice is a good thing ...

Maybe there’s an existing investment that’s already been made. 

Maybe there are requirements that aren’t satisfied well by the static configs available. 

Maybe you've got a great sourcing arrangement for servers.  

Maybe there’s a desire to use the latest-greatest technology, without waiting for an appliance vendor to offer it.  Etc. etc.

Whatever the reason, increased hardware choice makes hyperconverged more compelling and more attractive for more people.

EVO:RAIL currently has nine qualified partners.  Virtual SAN (as part of vSphere) has dozens and dozens of ReadyNodes from server partners that can be ordered as a single SKU.  And for everyone else, there's an extended HCL that allows for literally millions of potential configurations, plus the tools to figure out what's right for you.

If market leadership includes any notion of hardware choice, VMware stands apart from the rest of the hyperconverged crowd.  Because, after all, it's software ...

#4 — There’s More To Enterprise IT Than Just Hyperconverged

Yes, there’s that old joke that when all you have is a hammer, everything looks like a nail :)

But there’s a more serious consideration here: when it comes to even modestly-sized IT functions, hyperconverged is only one part of a broader landscape that needs to be managed. 

There’s inevitably a strong desire for common tools, processes and workflows that support the entire environment, and not just an isolated portion of it.

From an enterprise IT perspective, it's highly desirable to use the same operational framework for both virtualized and hyperconverged environments.

Going back to that controversial “market leadership” thing, how about the need for enterprise-scale management tools that aren’t limited to a single hyperconverged offering?

#5 — Customer Support Matters

If extreme simplicity is an integral part of the hyperconverged value proposition, customer support has to figure in prominently.

But there’s a structural problem here.

Not all of the hyperconverged appliance vendors have elected to be vSphere OEMs.  That means that they don’t have the right to distribute the VMware software used in their product.  It also means that they are not entitled to provide support for VMware software.

This arrangement has the potential to put their customers in an awkward position.

While I’m sure we vendors all use our collective best efforts to support mutual customers, this state of affairs certainly isn’t ideal.  Since all of these vendors provide a critical storage software layer, it may not be obvious where a problem actually lies.

Let’s say you have a performance problem with your hyperconverged appliance — who do you call?

The appliance vendor?  VMware?  Ghostbusters?

When it comes to providing customer support, VMware is typically ranked at the top (or near the top) in customer satisfaction — even though there are always potential areas for improvement.   One call.

No argument: the customer support model and execution should factor into our notion of “market leadership”.

#6 — Useful Things Should Just Work

Most shops have gotten accustomed to using all the cool functionality in vSphere.  And, presumably, they’d like to continue doing the same in their hyperconverged environment.

But that’s not always the case.  Here’s one example ...

You’re probably familiar with vSphere HA — a great feature that automatically restarts workloads on a surviving host if there’s a failure.

In a shared storage environment, vSphere HA uses the management network to ascertain the state of the cluster, and coordinate restarts if necessary.  HA assumes that external storage is always available, and all hosts can see essentially the same thing.

But what if there’s no external storage, and we’re using a distributed cluster storage layer?

While it’s true that many of the newer hyperconverged appliances set up their own logical network (primarily for storage traffic), you can see the potential problem: vSphere HA doesn’t know about the vendor's storage network, and vice versa.

Imagine if, for example, the storage network partitions and the management network doesn’t.  Or if they partition differently.  Sure, that’s not going to happen every day, but when it does — what exactly happens?

In the case of vSphere and VSAN, vSphere HA has been redesigned to use VSAN’s network, so there is zero chance of an inconsistent state between the two.
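To make that failure mode concrete, here's a toy Python model (my own illustration, not VMware code) of five hosts whose management network stays whole while the storage network splits:

```python
# Toy model: why HA and the storage layer must agree on one network view.
# Five hosts; the management and storage networks partition differently.
mgmt_partitions = [{"h1", "h2", "h3", "h4", "h5"}]       # mgmt net intact
storage_partitions = [{"h1", "h2"}, {"h3", "h4", "h5"}]  # storage net split

def primary(partitions):
    """The side a clustering layer treats as authoritative: the majority."""
    return max(partitions, key=len)

ha_view = primary(mgmt_partitions)          # HA thinks all five hosts are fine
storage_view = primary(storage_partitions)  # storage quorum is only h3,h4,h5

# Hosts HA would happily restart VMs on, even though they can't reach
# the storage quorum -- the inconsistent state described above.
stranded = ha_view - storage_view
print(sorted(stranded))  # ['h1', 'h2']
```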

Let’s go for two examples, shall we?

Using vMotion to balance clusters is just about ubiquitous.  You’d like to be able to move things around without screwing up application performance due to slow storage.

Well, one vendor’s attempt at “data locality” didn’t help so much.  Move a VM, and performance degrades due to a design decision they made.  Try and move it back, more degradation.

So another cool and useful vSphere feature now has sharp edges on it.

Not to pile on, but let's consider maintenance mode.

VMware admins routinely want to put a host in maintenance mode to work on it.  All the workloads are conveniently moved to other servers, and nothing gets disrupted.  But in our hyperconverged world, there's now storage to be considered.

VSAN has an elegant solution as part of the standard vSphere maintenance mode workflow -- the administrator gets a choice as to what they'd like to do with the affected data, and proceeds.

All other approaches require a separate workflow to detect and evacuate potentially affected data -- which creates not only a bit more complexity, but also that special opportunity to have a really bad day.
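For the API-minded, here's a sketch of how that single workflow looks through the vSphere API via pyVmomi; `host` is assumed to be a `vim.HostSystem` already looked up, as in the earlier example.

```python
# Sketch: the VSAN data-handling choice is part of the standard
# maintenance-mode call, not a separate storage workflow.
from pyVmomi import vim

# objectAction maps to the choice the admin sees in the client:
#   "ensureObjectAccessibility" - move just enough data to keep objects available
#   "evacuateAllData"           - fully evacuate this host's data first
#   "noAction"                  - leave data in place (accept reduced redundancy)
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(
        objectAction="ensureObjectAccessibility"))

task = host.EnterMaintenanceMode_Task(
    timeout=0,                    # no timeout
    evacuatePoweredOffVms=True,   # also migrate powered-off VMs
    maintenanceSpec=spec)
# Workloads vMotion away and VSAN handles the data -- one workflow,
# no separate evacuation step to forget.
```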

I’m sharing these annoying nits just to illustrate a point: a good hyperconverged environment should reasonably support the same everyday virtualization functionality and workflows you already use. 

And, hopefully, with a minimum of “gotcha!”

Let’s factor that into our notion of “market leadership” as well …

#7 — Don’t Forget About Networking …

If we’re *seriously* discussing hyperconverged software, we have to ultimately consider software-defined networking in addition to compute and storage.  

Otherwise, our stool only has two legs :)

The ultimate goal should be to give customers the option of running all three infrastructure functions (compute, storage, network) as an integrated stack running on their hardware of choice.  

No, we’re not there yet today, but …

Converge server virtualization with both SDS and SDN, and the potential exists for even more efficiency, simplicity and effectiveness.  Not to mention a whole new set of important security-related use cases, like micro-segmentation.

But to integrate SDN, you’ve got to have the technology asset.

Within the VMware world, that key asset is NSX.  And while no vendor can offer a seamless integration between the three disciplines today, VMware has a clear leg up in this direction.

Dig deep into VSAN internals, and you can see progress to date.  For example, VSAN works closely with Network I/O Control (NIOC) to be a well-behaved citizen over shared 10Gb links.  More to come.

Should hyperconverged vendors who claim market leadership have a plan for SDN and security use cases?  I think so.

#8 — Is There A Compatible Hybrid Cloud Option?

Not all infrastructure wants to live in a traditional data center.  There are many good reasons to want an operationally compatible hybrid cloud option like vCloud Air: cost arbitrage, disaster recovery, short-term demands, etc.

Ideally, customers could have access to a hyperconverged experience that works the same — using the same management tools, workflows and behaviors — whether the hardware assets are in the data center, in a public cloud, or both.

It’d be great if the industry pundits factored this into their definition of “market leadership”.  I’m not hopeful, though.

#9 — Is It Efficient?

One of the big arguments in favor of virtualization and hyperconverged approaches is efficiency: doing more with less.

Not to belabor an old argument, but there’s a certain economic appeal to hyperconverged software that uses compute and memory resources judiciously.  The big motivator here for customers is better consolidation ratios for server and VDI farms.  Better consolidation ratios = less money spent.

A hyperconverged storage solution that demands a monster 32 GB VM and potentially a few dedicated physical cores on each and every server gets in the way of that.
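A quick back-of-envelope sketch makes the point; the cluster size and per-VM numbers below are my own illustrative assumptions, not vendor figures.

```python
# Back-of-envelope: what a 32 GB storage VM on every host costs a
# 16-host cluster, versus a hypervisor-embedded storage layer.
hosts = 16
ram_per_host_gb = 256     # assumed host RAM
vsa_ram_gb = 32           # guest-VM storage controller, per host
avg_vm_ram_gb = 8         # assumed "typical" VM

lost_ram_gb = hosts * vsa_ram_gb            # 512 GB gone cluster-wide
lost_vms = lost_ram_gb // avg_vm_ram_gb     # ~64 typical VMs' worth
pct = 100 * vsa_ram_gb / ram_per_host_gb    # 12.5% of every host

print(f"RAM consumed by VSAs: {lost_ram_gb} GB "
      f"(~{lost_vms} typical VMs, {pct:.1f}% of each host)")
```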

#10 — Where Do You Go From Here?

I remember clearly a meeting with a customer who introduced the purpose of the meeting: “we’re here to decide what to buy for tomorrow’s legacy”.  I couldn't stop smiling :)

But it's an interesting perspective: one that reflects that IT strategy is often the result of many tactical decisions made along the way.

At one level, I’ve met IT pros who have an immediate need, and want an immediate solution.  They want a handful of boxes racked up ASAP, and aren’t that concerned with what happens down the road. 

Trust me, I can fully appreciate that mindset.

But there are many other IT pros who see each and every technology decision as a stepping stone to bigger and better things.

There are over a half-million IT shops who have built their data center strategy around VMware and vSphere.  Every one of them already owns many of the key ingredients needed for a hyperconverged solution.

More importantly, they trust VMware to take them forward into the brave new world of IT: virtualized, converged, hyperconverged, hybrid cloud and ultimately a software-defined data center.

And that’s a promise we intend to keep.

-------------------

Why I Think VSAN Is So Disruptive

Looking for a great disruption story in enterprise IT tech?  I think what VSAN is doing to the established storage industry deserves to be a strong candidate.

I've seen disruptions -- small and large -- come and go.  If you're into IT infrastructure, this is one worth watching.

A few years ago, I moved from EMC to VMware on the power of that prediction.  So far, it’s played out pretty much as I had hoped it would.  There’s now clearly a new dynamic in the ~$35B storage industry, and VMware’s Virtual SAN is very emblematic of the changes that are now afoot.

There’s a lot going on here, so it’s worth sharing.  In each case, you’ll see a long-held tenet around The Way Things Have Always Been Done clearly up for grabs.

See if you agree?

I began this post by making a list of changes — deep, fundamental changes — that VSAN is starting to bring about in the storage world.

To be clear, I’m not talking so much about specific technologies, or how this vendor stacks up against that other one.

I’m really far more interested in the big-picture changes around fundamental assumptions as to “how storage is done” in IT shops around the globe: how it's acquired, how it's consumed, how it's managed.

If you’re not familiar with Virtual SAN, here’s what you need to know: it’s storage software built into the vSphere hypervisor. It takes the flash and disk drives inside of servers, and turns them into a shared, resilient enterprise-grade storage service that’s fast as heck.  Along the way, it takes just about every assumption we've made about enterprise storage in the last 20 years and basically turns it on its head.

Storage Shouldn’t Have To Be About Big Boxes

Most of today’s enterprise storage market is served by external storage arrays, essentially big, purpose-built hardware boxes running specialized software.  Very sophisticated, but at a cost.

If your organization needs a non-trivial amount of storage, you usually start by determining your requirements, evaluating vendors, selecting one, designing a specific configuration, putting your order in, taking delivery some time later, installing it and preparing it for use.

Big fun, right?

The fundamental act of simply making capacity ready to consume — from “I need” to “I have” — is usually a long, complex and often difficult process: best measured in months.  I think the most challenging part is that IT shops have to figure out what they need well before actual demand shows up.  Of course, this approach causes all manner of friction and inefficiency.

We’ve all just gotten used to it — that’s just the way it is, isn’t it?  Sort of like endlessly sitting in morning commute traffic.  We forget that there might be a better way.

The VSAN model is completely different.  Going from “I need” to “I have” can be measured in days — or sometimes less.

For starters, VSAN is software — you simply license the CPUs where you want to use it.  Or use it in evaluation mode for a while.  The licensing model is not capacity-based, which is quite refreshing.  That makes it as easy to consume as vSphere itself.

The hardware beneath VSAN is entirely up to you, within reason.  Build a VSAN environment from hand-selected components if that’s your desire.  Grab a ReadyNode if you’re in a hurry.  Or go for something that packages the ultimate in a simplified experience: EVO:RAIL.  Choice is good.

Depending on your hardware model, getting more storage capacity is about as simple as ordering some new parts for your servers.  Faster, easier, smaller chunks, less drama, etc.  No more big boxes.

Yes, there is a short learning curve the first time someone goes about putting together a VSAN hardware configuration (sorry!), but — after that — there’s not much to talk about.

There are some obvious and not-so-obvious consequences from this storage model.

Yes, people can save money (sometimes really big $$$) by going this way.  Parts is parts.  We’ve seen plenty of head-to-head quotes, and sometimes the differences are substantial.

But there’s more that should be considered …

Consider, for example, that storage technologies are getting faster/better/cheaper all the time.

Let’s say a cool new flash drive comes out — and it looks amazing.  Now, compare the time elapsed between getting that drive supported with VSAN, and getting it supported in the storage arrays you currently own.

There's a big difference in time-to-usability for any newer storage tech.  And that really matters to some people.

One customer told us he likes the “fungibility” of the VSAN approach, given that clusters seem to be coming and going a lot in his world.  He has an inventory of parts, and can quickly build a new cluster w/storage from his stash, tear down a cluster that isn’t being used for more parts, mix and match, etc.

Sort of like LEGOs.

Just try that with a traditional storage array.

More Performance (Or Capacity) Shouldn’t Mean A Bigger Box

A large part of storage performance comes down to the storage controllers inside the array: how many, how fast.

Add more servers that drive more workload, and you’re often looking at the next-bigger box — and all the fun that entails: acquiring the new array, migrating all your applications, figuring out what to do with the old array, etc.  

Yuck. But that’s the way it’s always been, right?

VSAN works differently.

As you add servers to support more virtualized applications, you’re also adding the potential for more storage performance and capacity.  A maxed-out 64-node VSAN cluster can deliver roughly 7 million cached 4K read IOPS.

Want more performance without adding more servers?  Just add another disk controller and disk group to your existing servers, or perhaps just bigger flash devices, and you’ll get one heck of a performance bump.

Without having to call your storage vendor :)
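For a rough feel of the scaling math, here's a small sketch based on the 7 million IOPS figure quoted above, assuming near-linear scaling (a simplification, but it shows the shape of the curve):

```python
# Rough scaling sketch from the ~7M cached 4K read IOPS figure for a
# maxed-out 64-node cluster, assuming near-linear scale-out.
max_nodes, max_iops = 64, 7_000_000
per_node = max_iops / max_nodes   # ~109K IOPS per node

for nodes in (4, 8, 16, 32, 64):
    print(f"{nodes:2d} nodes -> ~{nodes * per_node / 1e6:.1f}M cached 4K read IOPS")
```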

Storage Shouldn’t Need To Be Done By Storage Professionals

I suppose an argument could be made about it being best to have your taxes done by tax professionals, but an awful lot of people seem to do just fine by using TurboTax software.

There certainly are parts of the storage landscape that are difficult and arcane — and that’s where you need storage professionals.  There are also an awful lot of places where a simple, easy-to-use solution will suffice quite nicely, and that’s what VSAN brings to the table.

With VSAN, storage just becomes part of what a vSphere administrator does day-to-day.  No special skills required.  Need a VM? Here you go: compute, network and storage.  Policies drive provisioning.  Nothing could really be simpler.

No real need to interact with a storage team — unless there’s something special going on.

Can't We All Just Work Together?

Any time a team grows beyond a handful of people, they split up into different roles. The classic pattern in enterprise IT infrastructure has a dedicated server team, a dedicated network team, a storage team, etc.

The vSphere admins are usually dependent on the others to do basic things like provision, troubleshoot, etc.  For some reason, I’ve observed particular friction between the virtualization team and the storage team.  As in people on both sides pulling their hair out.

Many virtualization environments move quickly: spinning up new apps and workloads, reconfiguring things based on new requirements — every day (or every hour!) brings something new.

That’s what virtualization is supposed to do — make things far more flexible and liquid.

When that world bumps up against a traditional storage shop that thinks in terms of long planning horizons and careful change management — well, worlds collide.

With VSAN, vSphere admins can be self-sufficient for most of their day-to-day requirements.  No storage expertise required.  Of course, there will always be applications that can justify an external array, and the team that manages it.

It’s just that there will be less of that.

Storage Software Is Now Not Just Another Application

The idea of doing storage in software is not new.  The idea of building a rich storage subsystem into a hypervisor is new.  And, when you go looking, there are plenty of software storage products that run as an application, also known as a VSA or virtual storage appliance.

In this VSA world, your precious storage subsystem is now just another application.  It competes for memory and CPU like all other applications, but with one exception: when it gets slow, everything that uses it also gets slow.

We’re talking about storage, remember?

And the resource requirements needed to ensure adequate storage performance using a VSA approach can be considerable.  Very healthy amounts of RAM, lots of CPU.  Nom, nom -- a monster VM?  That approach makes your servers bigger, your virtualization consolidation ratios poorer, or both.

Once again, VSAN does things differently.  

Because it’s built into the hypervisor, its resource requirements are quite reasonable.  It doesn’t have to compete with other applications, because it isn’t a standalone application like a VSA is.  Your servers can be smaller, your virtualization consolidation ratios better — or both.

Why do I think this will change things going forward?

Because VSAN now establishes the baseline for what you should expect to get with your hypervisor.  Any vendor selling a VSA storage product as an add-on has to make a clear case as to why their storage thingie is better than what already comes built into vSphere.

Not only in justifying the extra price, but also the extra resources as well as the extra management complexity.  Clearly, there are cases where this can be done, but there aren’t as many as before.

And that’s going to put a lot of pressure on the vendors who use a VSA-based approach.

The Vendor Pecking Order Changes

The last wave of storage hardware vendors were all array manufacturers — they got all the attention. In this wave, the storage component vendors are finding some new love.

As a good example, flash vendors such as SanDisk and Micron are starting to do a great job marketing directly to VSAN customers.  Why?  A decent proportion of a VSAN config goes into flash, and how these devices perform affects the entire proposition.

This new-found stardom is not lost on them — especially as all-flash configurations start to take hold.

At one time, there was a dogfight between FC HBA vendors who wanted to attach to all the SANs that were being built.  In this world, it’s the storage IO controller vendor.  Avago (formerly LSI) as well as some of their newer competitors are aware that there’s a new market forming here, and realizing they can reach end users directly vs. being buried in an OEM server configuration.

There’s A Lot Going On In Storage Right Now …

We’ve seen one shift already from disk to flash — that much is clear.  Interesting, but at the end of the day all we were really doing was replacing one kind of storage media with another.

What I’m seeing now has the potential to be far more structural and significant.  Now up for grabs is the fundamental model of "how storage is done" in IT shops large and small.

An attractive alternative to the familiar big box arrays of yesterday.  

Storage being specified, acquired, consumed, delivered and managed by the virtualization team, with far less dependence on the traditional storage team.  

Storage being consumed far more conveniently than before.  

Storage software embedded in the hypervisor having strong architectural advantages over other approaches.  

Storage being able to pick up all the advances in commodity-oriented server tech far faster than the array vendors.

Component vendors becoming far more important than before.

And probably a few things I forgot as well :)

Yes, I work for VMware. And VSAN is my baby.

But there’s a reason I chose this gig — I thought VMware and VSAN were going to be responsible for a lot of healthy disruptive changes in the storage business.  Customers would win as a result.

And, so far, that’s been exactly the case.

-------------

Considering The Next Wave Of Storage Automation

From the time enterprise data centers sprang into existence, we’ve had this burning desire to automate the heck out of them.

From early mainframe roots to today’s hybrid cloud, the compulsion to progressively automate each and every aspect of operations never wanes.

The motivations have been compelling: use fewer people, respond faster, be more efficient, make outcomes more predictable, and make services more resilient.

But the obstacles have also been considerable: both technological and operational.

With the arrival of vSphere 6.0, a nice chunk of new technology has been introduced to help automate perhaps the most difficult part of the data center – storage.

It's worth digging into these new storage automation features: why they are needed, how they work, and why they should be seriously considered.


Background

Automating storage in enterprise data centers is most certainly not a new topic.

Heck, it's been around at least as long as I have, and that's a long time :)

Despite decades of effort by both vendors and enterprise IT users, effective storage automation is still an elusive goal for many IT teams.

When I'm asked "why is this so darn hard?", here's what I point to:

  • Storage devices had very limited knowledge of applications: their requirements, and their data boundaries. Arrays had to be explicitly told what to do, when to do it and where it needed to be done.

  • Cross-vendor standards failed to emerge that could facilitate basic communication between an application’s requirements and a storage array’s capabilities.

  • Storage arrays (and their vendors) present a storage-centric view of their operations, making it difficult for non-storage groups to easily request new services, or to ascertain whether end-to-end application requirements are being met.

Here's the message: the new storage capabilities available in vSphere 6.0 show strong progress towards addressing each of these long-standing challenges.

Towards Application Centricity

Data centers exist solely to deliver application services: capacity, performance, availability, security, etc.

To the extent that each aspect of the infrastructure can be made programmatically aware of individual application requirements, far better automation can be achieved.

However, when it comes to storage, there have been significant architectural challenges in achieving this.

The first challenge is that applications themselves typically don’t provide specific instructions on their individual infrastructure requirements.  And asking application developers to take on this responsibility can lead to all sorts of unwanted outcomes.

At a high level, what is needed is a convenient place to specify application policies that can be bound to individual applications, instruct the infrastructure as to what is required, and be conveniently changed when needed.

The argument is simple: the hypervisor is in a uniquely privileged position to play this role. It not only hosts all application logic, but abstracts that application from all of the underlying infrastructure: compute, network and storage.

While these policy concepts have been in vSphere for a while, vSphere 6.0 introduces a new layer of storage policy-based management (SPBM). This enables administrators to describe specific storage policies, associate them with groups of applications, and change them if needed.

But more is needed here.

Historically, storage containers have not aligned with application boundaries.  External storage arrays have traditionally presented LUNs or file systems: large chunks of storage shared by many applications.

Storage services (capacity, performance, protection, etc.) were specified at the large container level, with no awareness of individual application boundaries.

This mismatch has resulted in both increased operational effort and reduced efficiency.

Application and infrastructure teams continually go back and forth with the storage team regarding application requirements. And storage teams are forced to compromise by creating storage service buckets specified in excess of what applications actually require.  Better to err on the side of safety, right?

No longer. vSphere 6.0 introduces a new storage container – Virtual Volumes, or VVOLs – that precisely aligns application boundaries and the storage containers they use. Storage services can now be specified on a per-application, per-container basis.

We now have two key pieces of the puzzle: the ability to conveniently specify per-application storage policy (as part of overall application requirements), and the ability to create individualized storage containers that can precisely deliver the requested services without affecting other applications.
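Here's a conceptual sketch in Python (deliberately a toy model, not the actual SPBM or VASA API) of how those two pieces fit together: a named policy of storage capabilities, bound to a per-application container.

```python
# Toy model of the two pieces just described: SPBM-style policies and
# VVOL-style per-application containers. Names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class StoragePolicy:           # what an admin describes once, centrally
    name: str
    capabilities: dict         # e.g. failures to tolerate, IOPS limits

@dataclass
class VirtualVolume:           # per-application container, policy attached
    vm_name: str
    policy: StoragePolicy

gold = StoragePolicy("gold", {"failuresToTolerate": 2, "iopsLimit": None})
silver = StoragePolicy("silver", {"failuresToTolerate": 1, "iopsLimit": 5000})

# Provisioning binds a policy to the VM's own container -- no shared LUN,
# so changing one VM's service level never disturbs its neighbors.
volumes = [VirtualVolume("erp-db", gold), VirtualVolume("test-web", silver)]
volumes[1].policy = gold       # a policy change is per-application
```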

So far, so good.

Solving The Standards Problem

Periodically, the storage industry attempts to define meaningful, cross-vendor standards that facilitate external control of storage arrays. However, practical success has been difficult to come by.

Every storage product speaks a language of one: not only in the exact set of APIs it supports, but in how it assigns meaning to specific requests and communicates results.  Standard definitions of what exactly a snapshot means, for example, are hard to come by.

The net result is that achieving significant automation of multi-vendor storage environments has been extremely difficult for most IT organizations to achieve.

To be clear, the need for heterogeneous storage appears to be increasing, not decreasing: enterprise data centers continue to be responsible for supporting an ever-widening range of application requirements, from transaction processing to big data to third-platform applications. No one storage product can be expected to meet every application requirement (despite vendors' best intentions), so multiple types are frequently needed.

De-facto standards can be driven by products that are themselves de-facto standards in the data center, and here vSphere stands alone with regards to hypervisor adoption.  When VMware defines a new standard for interacting with the infrastructure (and customers adopt it), vendors typically respond well.

vSphere 6.0 introduces a new set of storage APIs (VASA 2.0) that facilitate a standard method of application-centric communication with external storage arrays. VMware’s storage partners have embraced this standard enthusiastically, with several implementations available today and more coming.

Considering VASA 2.0 together with SPBM and VVOLs, one can see that many of the technology enabling pieces are now in place for an entirely new storage automation approach. Administrators can now specify application-centric storage policies via SPBM, communicate them to arrays via VASA 2.0, and receive a perfectly aligned storage container – a VVOL.  Nice and neat.

Who Should (Ideally) Manage Storage?

It’s one thing to conveniently specify application requirements; it’s another to ensure that the requested service levels are being met – and, more importantly, to fix things quickly when they’re not.

Historically, the storage management model in many IT organizations has evolved into a largely self-contained organizational “black box”. Requests and trouble tickets are submitted with poor visibility for the other teams who depend greatly on the storage team’s services.

Although this silo model routinely causes unneeded friction and inefficiency (not to mention frustration all around), it is particularly painful when resolving urgent performance problems: is the problem in the application logic, the server, the network – or storage?

The storage management model created by vSphere 6.0 is distinctly different from traditional models: storage teams are still important, but more information (and responsibility) is given to the application and infrastructure teams in controlling their own destiny.

Virtual administrators now see “their” abstracted storage resources: what’s available, what it can do, how it’s being used, etc. There should be no need to directly interact with the storage team for most day-to-day provisioning requirements. Policies are defined, VVOLs are consumed, storage services are delivered.

Through vCenter and the vRealize suite, virtual administrators now have enough storage-related information to ascertain the health and efficiency of their entire environments, and have very focused conversations with their storage teams if there’s an observed issue.  

Storage teams still have an important role, although somewhat different than in the past. They now must ensure sufficient storage services are available (capacity, performance, protection, etc.), and resolve problems if the services aren’t working as advertised.

However, operational and organizational models can be highly resistant to change.  That's the way the world works -- unless there is a forcing function that makes the case compelling to all parties.  

And VSAN shows every sign of being a potential change accelerator.

How Virtual SAN Accelerates Change

As part of vSphere 5.5U1, VMware introduced Virtual SAN, or VSAN. Storage services can now be delivered entirely using local server resources -- compute, flash and disk – using native hypervisor capabilities. There is no need for an external storage array when using VSAN – nor a need for a dedicated storage team, for that matter.

VSAN is designed to be installed and managed entirely by virtual administrators, with no interaction with the storage team required. These virtualization teams can now quickly configure storage resources, create policies, tie them to applications, monitor the results and speedily resolve potential problems – all without leaving the vSphere world.

As an initial release, VSAN 5.5 had limited data services, and thus limited use cases. VSAN 6.0 is an entirely different proposition: more performance (both using a mix of flash and disk, or using all-flash), new enterprise-class features, and new data services that can significantly encroach on the turf held by traditional storage arrays.

Empowered virtualization teams now have an interesting choice with regards to storage: continue to use external arrays (and the storage team), use self-contained VSAN, or most likely an integrated combination depending on requirements.  

Many are starting to introduce VSAN alongside traditional arrays, and have thus seen the power of a converged, application-centric operational model. And it’s very hard to go back to the old way of doing things when the new way is so much better -- and readily at hand.

The rapid initial growth of VSAN shows the potential to put pressure on traditional storage organizations to work towards a new operational model, with an improved division of responsibilities between application teams, infrastructure teams and storage teams.  And they'll need the powerful combination of SPBM, VASA 2.0 and VVOLs to make that happen.

Change Is Good -- Unless It's Happening To You

I have spent many, many years working with enterprise storage teams.  They have a difficult, thankless job in most situations.  And there is no bad day in IT quite like a bad storage day.

Enterprise IT storage teams have very specific ways of doing things, arguably built on the scar tissue of past experiences and very bad days.  You would too, if you were them.

That being said, there is no denying the power of newer, converged operational models and the powerful automation that makes them so compelling.  The way work gets done can -- and will -- change.

Enterprise storage teams can view these new automation models as either a threat, or an opportunity. 

I know which side of that debate I'd be on. 
