We're here at VeeamON 2017 in New Orleans and I'm speaking with Danny Allan, Vice President of Cloud and Alliance Strategy at Veeam.
So how did you come to be part of Veeam?
So my background is mostly in the security and virtualization space.
I was CTO of a company called Desktone; we did cloud-hosted virtual desktops, and that was acquired by VMware back in 2013.
So I spent a few years there working with a lot of cloud providers offering services
and one of the things that became very evident is the mission critical nature of providing
availability for all of these services.
And so I joined in January of 2017 and it has been a blast.
Having a lot of fun.
Yeah, Veeam is a great company.
So we've been covering the space for a while now and we've been to every VeeamON, so we've seen the company grow, and it's pretty awesome.
Can you tell us about some of the new features?
I know we talked a little bit with Rick earlier and he went over some of the features he was excited about, but what are some of the new features that excite you, and that you think are going to be great for moving forward with your cloud strategy?
Sure.
So the flagship product is the Veeam Availability Suite, and there are really five big features in it.
I'll lead with the two that I'm most excited about.
One is continuous data protection.
So this essentially uses the vSphere APIs for IO Filtering: it intercepts the IO flow from the hypervisor down to the storage and replicates it off to your disaster recovery location.
That gives you a recovery point objective of seconds as opposed to minutes, which is very exciting, especially if you're focused on the disaster recovery space.
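To make the CDP model concrete, here's a toy sketch. It's purely illustrative: the class and method names are invented, and a real IO filter runs inside the hypervisor, not in Python. The point it shows is that every intercepted write is replicated immediately, rather than waiting for the next scheduled backup job.

```python
class CdpReplicator:
    """Toy illustration of continuous data protection (CDP).

    An IO filter sits on the write path between the hypervisor and
    storage; each write it sees is also sent to the DR replica, so the
    recovery point is only seconds behind.  (Hypothetical names.)
    """

    def __init__(self):
        self.primary = []   # stands in for primary storage
        self.dr_copy = []   # stands in for the DR-site replica

    def intercept_write(self, block):
        # The filter sees the write before it lands on storage...
        self.primary.append(block)
        # ...and replicates it right away, instead of waiting for a
        # scheduled job (which would mean minutes or hours of RPO).
        self.dr_copy.append(block)


rep = CdpReplicator()
for blk in ("block-1", "block-2", "block-3"):
    rep.intercept_write(blk)

print(rep.dr_copy == rep.primary)  # → True: replica tracks primary write-for-write
```

Because replication happens per write rather than per job, the replica is never more than the in-flight writes behind the primary, which is what pushes the RPO down from minutes to seconds.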
The second big thing that I'm really excited about in version 10 is the support for object
storage.
A lot of large enterprise customers, and also large Veeam Cloud and Service Providers, have been saying they want to leverage object storage for its scale-up and scale-out characteristics as an archive tier, and so obviously we announced that.
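As a rough sketch of what an archive-tier policy looks like, the idea is to move restore points that have aged out of the operational window onto cheaper, scale-out object storage. The names and the 30-day window below are made up for illustration; this is not Veeam's actual policy engine.

```python
from datetime import datetime, timedelta

def select_for_archive(restore_points, now, keep_days=30):
    """Return restore points older than the operational window;
    these become candidates for the object-storage archive tier."""
    cutoff = now - timedelta(days=keep_days)
    return [rp for rp in restore_points if rp["created"] < cutoff]

now = datetime(2017, 5, 20)
points = [
    {"name": "vm01-full.vbk", "created": datetime(2017, 3, 1)},   # aged out
    {"name": "vm01-inc.vib",  "created": datetime(2017, 5, 18)},  # still fresh
]
aged = select_for_archive(points, now)
print([rp["name"] for rp in aged])  # → ['vm01-full.vbk']
```

The recent restore points stay on fast local repositories for quick restores, while the long tail lands on object storage, which scales out far more economically.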
But there are also a number of other features that are no less important.
One of those is the universal storage API.
So we have integrations in version 10 with IBM, Lenovo, and INFINIDAT, and one of the ways we did that was by creating a universal API that they can write to, so that we can leverage the snapshotting capabilities within those arrays.
The fourth big feature is agent management.
So we are known as the organization that does protection for virtual machines.
But there's still a requirement to protect physical workloads that haven't been virtualized, or machines running in the cloud, for example, where you don't have hypervisor access, and so we can use the agent-based management within Veeam Backup & Replication to protect those workloads.
So a lot of exciting things, and that's fairly new for Veeam: it's not just virtual machines now, it's also physical machines, which I think is also bringing you into other areas like security, where Veeam is becoming a possible solution for some customers' security needs as well.
What are your thoughts on the future of where you're going with your cloud strategy?
Well, I really group it into three buckets: to the cloud, from the cloud, and within the cloud.
In the to-the-cloud space, you can see how a lot of customers use us to do backup to the cloud, and that can be virtual machines, physical machines, or cloud machines; it can also be file shares, which is the fifth capability, by the way, with the NAS storage support.
So that's pushing those up to the cloud as a backup or for disaster recovery.
The second category is from the cloud.
We often think about onboarding, but we don't always think about off-boarding, and there are real business requirements for that.
Take something like Office 365: you have a lot of critical data up there, and it's your data, you need to protect it.
So you pull it down out of the cloud and put it in your backup repository, because what happens if an administrator accidentally deletes a gigabyte of data and you don't have protection?
Or you have an e-discovery request that says, go find this particular material; if you don't have control of that data, it's hard to comply.
So that's to the cloud and from the cloud.
And then within the cloud, we're beginning to see (it's emerging) applications that are born in the cloud but still need protection.
The cloud is very good at giving you high availability, but it doesn't necessarily give you business availability.
What I mean by that is, if someone intentionally or unintentionally deletes something or causes an issue, it's having that backup, that business continuity across clouds, that gives you the protection you need.
And increasingly we're seeing this multi-cloud environment, sometimes just because a cloud provider doesn't have a point of presence in a particular country and there are data sovereignty rules, or maybe there's a specific compliance or security requirement that this cloud meets but that cloud doesn't.
So this multi-cloud ability to move things within the cloud is also a big driver.
Now, I know you're doing something with AWS as well, and on the customer panel somebody mentioned that it would be great if you could cover KVM and containers.
Do you see a future of being able to solve those problems?
Yes.
All of the hypervisors, of course, interact differently with the operating systems on top: ESX uses binary translation and Xen uses para-virtualization, so one of the challenges has been dealing with that abstraction layer.
One of the ways we deal with it is by putting an agent inside the OS, because otherwise you have to be able to read and understand how the hypervisor functions.
So whether it be Google Cloud, Azure, Amazon, or Bluemix, you can put an agent in it, and that's true of KVM as well.
However, you mentioned something specific, which was Amazon.
We do believe there's value in leveraging the new technologies that are available in cloud solutions, so what we're doing there is an agentless capture of that VM, doing it at the storage level without putting an agent inside.
As we go forward, we look to provide these types of capabilities across all of these services; it's a question of priority and market share.
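The agentless idea can be illustrated with a toy, storage-level snapshot taken underneath the guest. All names here are hypothetical; on AWS this maps to something like an EBS volume snapshot, and the key property is that nothing is installed inside the guest OS.

```python
import copy

def snapshot_volume(volume):
    """Point-in-time copy taken at the storage level, outside the
    guest OS -- no agent runs inside the VM.  (Toy illustration.)"""
    return copy.deepcopy(volume)

vm_volume = {"vm": "web-01", "blocks": [b"boot", b"data"]}
snap = snapshot_volume(vm_volume)

vm_volume["blocks"].append(b"new-write")  # the guest keeps writing...
print(len(snap["blocks"]))  # → 2 (the snapshot is unaffected)
```

Because the capture happens beneath the VM, it works the same way whether or not you can see inside the guest, which is what makes it attractive in clouds where you have no hypervisor access.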
I'm a big fan of agentless.
I think it makes life a lot easier, especially for IT managers, not having to worry about whether they have the latest agent on a given machine.
You know, with 1,500 VMs it takes a long time to go and update them all every time a new agent comes out, so yes, agentless is definitely the preferred approach where it's possible.
Great, well, it was great to speak with you, and thank you for taking the time to speak with VMblog.
We wish you success in the future and we'll keep an eye on what you guys are doing.
Thanks everyone, thank you.
