Blogs

March 15, 2010

Protecting Data in Virtualized Environments

 


IT budgets are shrinking, datacenter space is becoming limited, power requirements are skyrocketing, and server and storage needs are ever increasing. Today's IT managers, from the smallest SMB to the largest enterprise, are struggling to meet their companies' needs. Server consolidation is fast becoming one of the primary tools used to address this new reality, driving the deployment of virtualization across enterprises of all sizes. More and more companies are turning to solutions like VMware, Microsoft Hyper-V, Citrix XenServer, and Parallels. However, to fully realize the benefits of virtualization, organizations must also consider their information recovery management strategy. Traditional backup and recovery strategies are not adequate to deliver the granular recovery the business demands. More important, the cost associated with traditional, agent-based technologies essentially negates many of the cost advantages of virtualization.

An often forgotten part of the equation in implementing virtual server environments is reining in the costs and complexity that virtualization itself introduces. Nowhere do these management costs and complexity become more evident than when a company goes to protect its guest VMs.

A specific challenge emerges when protecting the data of guest VMs. Each physical virtualization server may host ten or more guest VMs that individually host different operating systems and applications. Data protection software now needs to account for the following variables when protecting guest VMs on the same physical virtualization server:

Discovering when new VMs are created: The ease of creating guest VMs can lead to lapses in data protection, since administrators may forget to configure backup software for each new guest VM as it is created (an inventory-polling sketch follows the bullet list below).

Agent installation: Each guest VM acts and functions like a normal server. To protect each guest VM, companies may need to install and configure an agent on every one, just as they did when protecting individual physical servers.

Scheduling backup jobs: As servers are consolidated and virtualized on the same physical ESX server, scheduling backup jobs across the different guest VMs becomes more challenging. Schedule too many backup jobs at the same time and they compete for the server's limited resources; schedule them too far apart and they may not complete before the next day's production activities begin (a simple staggering approach is sketched after this list).

Different backup products: Guest VMs on the same physical machine may use different backup software due to different operating systems and/or applications on each one. This adds to the complexity of managing and scheduling backup jobs across guest VMs on the same physical server.
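
To make the scheduling trade-off concrete, here is a minimal sketch, in Python, of a scheduler that staggers backup jobs so that only a bounded number run concurrently on one host. The VM names, window start, and per-job duration are illustrative assumptions, not values from any particular product.

```python
from datetime import datetime, timedelta

# Hypothetical guest VM inventory; names and timings are illustrative only.
guest_vms = ["web-01", "web-02", "db-01", "db-02", "mail-01", "file-01"]

BACKUP_WINDOW_START = datetime(2010, 3, 15, 22, 0)  # assumed 10:00 PM window
MAX_CONCURRENT_JOBS = 2                             # cap contention on the host
JOB_SLOT = timedelta(minutes=45)                    # assumed per-job duration

def stagger_schedule(vms, start, slot, max_concurrent):
    """Assign each VM a start time so at most max_concurrent jobs overlap."""
    schedule = {}
    for i, vm in enumerate(vms):
        wave = i // max_concurrent           # which batch this VM falls into
        schedule[vm] = start + wave * slot   # batches run back to back
    return schedule

for vm, when in stagger_schedule(guest_vms, BACKUP_WINDOW_START,
                                 JOB_SLOT, MAX_CONCURRENT_JOBS).items():
    print(f"{vm}: start backup at {when:%H:%M}")
```

With two concurrent jobs and 45-minute slots, six VMs finish within roughly two and a quarter hours instead of all six contending for the host's resources at 10:00 PM.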

From an information recovery perspective, the trend towards virtualization has a major impact on information recovery management:

  • Applications run in isolation and utilize the hardware more efficiently. Before virtualization, a backup activity could usually afford to take some hardware resources (CPU power, RAM) away from an underutilized application server; with virtualization, those overheads compound. A 5% resource draw from a backup agent on one physical machine may go unnoticed, but on a virtualization server running 10 OS instances it adds up to 50% utilization, on top of the fact that the virtualization server, unlike the physical machine, is no longer under-utilized.
  • Any hardware connection will likely be abstracted away. Data protection software cannot rely anymore on the physical location of data remaining fixed, as storage resources may be migrated at any time by IT administrators to improve processing efficiency.
  • The introduction of console-less (COS-less) virtualization servers prevents backup applications from running on the virtualization host operating system.
  • Most applications are expected to be available as a “virtual appliance,” isolated from other applications. This applies to backup applications as well, where an ability to deploy a backup appliance on both physical and virtualized servers, as well as centralized deployment without any agents, is required.
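
As an illustration of the discovery problem flagged above, the sketch below polls a vCenter inventory and flags guest VMs that were not seen on the previous run. It uses the pyVmomi vSphere SDK as one example API; the vCenter hostname, credentials, and the local state file are all assumptions, and a production tool would handle certificates and secrets properly.

```python
import json
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

INVENTORY_FILE = "known_vms.json"  # hypothetical local state file

def current_vm_uuids(si):
    """Return {uuid: name} for every guest VM visible to this vCenter."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vms = {vm.config.uuid: vm.name for vm in view.view if vm.config}
    view.Destroy()
    return vms

def find_unprotected(si):
    """Diff the live inventory against the UUIDs seen on the last run."""
    try:
        with open(INVENTORY_FILE) as f:
            known = set(json.load(f))
    except FileNotFoundError:
        known = set()  # first run: treat every VM as new
    vms = current_vm_uuids(si)
    new = {u: n for u, n in vms.items() if u not in known}
    with open(INVENTORY_FILE, "w") as f:
        json.dump(sorted(vms), f)
    return new

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="backup-svc",
                  pwd="secret", sslContext=ctx)
try:
    for uuid, name in find_unprotected(si).items():
        print(f"New VM not yet in any backup policy: {name} ({uuid})")
finally:
    Disconnect(si)
```

Run on a schedule, a diff like this turns "we forgot to back up the new VM" from a silent gap into an actionable alert.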

In part 2 of this article, I will show how Asigra addresses the needs and concerns raised above through its agentless remote backup and recovery architecture.

December 10, 2009

I Want to Restore my Data but . . . It’s Not There!!

Posted by Larry Bourgeois in Featured
 

This post is inspired by a tweet I saw recently from @thedren about having recovered data that was corrupted.

As a Solutions Engineer for a backup and recovery software vendor, I find the following customer conversation all too familiar:

Customer – “I would like to talk to you about your backup software solution.”

Me – “OK. What is wrong with your current solution?”

Customer – “I need to replace it. It almost got me fired.”

Me – “That is not good. What happened?”

Customer – "I needed to restore our production accounting database, and when I tried to restore it I found out that the backups were corrupt and I had no usable data written to disk or to any of my backup tapes."

Me – “Did the backups show any errors?”

Customer – “No. All the logs showed successful backups.”

Me – “Did you get your data back?”

Customer – “Yes. We were able to find an older backup from our off-site archives that was successful, but we had to spend many hours bringing the data up to date. Saved my job though!”

Me – “Do you have a policy in place to perform test restores as a part of your DR solution?”

Customer – “I did not before, but I will have a restore testing policy in place when I implement my new backup solution!”

Sound familiar? I would hope not, but how many of us would be in this customer's shoes if a critical restore were needed? Although restore testing should be part of any company's DR plan, many companies either ignore this important function or are simply too busy to take the time to validate their backup data with test restores.

The solution is to look for backup and recovery software that has built-in functionality to perform a test restore by reading data back from the backup server and verifying its integrity. Ideally this operation takes place entirely on the backup server, so no valuable outside network or system resources are consumed; no data needs to actually be restored to the source, and the operation can be initiated manually or scheduled as part of the backup policies. Needless to say, the recoverability of your data should be the key focal point when evaluating newer and better solutions. Backups are no good unless the data can be restored correctly. Don't let this be you!
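
As a sketch of what such built-in verification might look like, the following Python re-reads every stored chunk on the backup server and compares its SHA-256 digest against the value recorded at backup time. The chunk-store layout and manifest format are hypothetical; real products implement this internally, each in their own format.

```python
import hashlib
import json
import os

BACKUP_ROOT = "/srv/backup/store"                      # hypothetical chunk store
MANIFEST = os.path.join(BACKUP_ROOT, "manifest.json")  # chunk name -> expected digest

def verify_backup_set():
    """Re-read every chunk on the backup server and check its recorded digest.

    Runs entirely on the backup server: no data is restored to the source
    and no production network bandwidth is consumed.
    """
    with open(MANIFEST) as f:
        expected = json.load(f)  # e.g. {"chunk-0001": "ab12...", ...}

    corrupt = []
    for chunk_name, digest in expected.items():
        h = hashlib.sha256()
        with open(os.path.join(BACKUP_ROOT, chunk_name), "rb") as chunk:
            for block in iter(lambda: chunk.read(1 << 20), b""):  # 1 MB at a time
                h.update(block)
        if h.hexdigest() != digest:
            corrupt.append(chunk_name)
    return corrupt

if __name__ == "__main__":
    bad = verify_backup_set()
    if bad:
        print(f"RESTORE TEST FAILED: {len(bad)} corrupt chunk(s): {bad}")
    else:
        print("Restore test passed: every chunk matched its recorded digest.")
```

The customer in the dialogue above had logs full of "successful" backups; a scheduled check like this fails loudly the moment stored data no longer matches what was written.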

November 27, 2009

Selling on value? Then you’ve got potential!

 

Having worked in the storage industry for 20 years now, I have seen a lot of trends and technologies come and go. Right now there is a trend that is hot, and everyone wants a piece: the cloud, and in particular, cloud backup. As an ardent follower of newsletters and blogs, I've noticed, especially in the past couple of months, a proliferation of companies and products offering cloud backup. It seems like almost daily there is an announcement about a new company or product touted as the next great thing in remote backup and DR.

December 11, 2009

28th Gartner Data Center Conference Highlights

Posted by Adam Mattina in Cloud Backup
 

 

December 2, 2009

Backup of a mobile workforce

Posted by Adam Mattina in Cloud Backup
 

Asigra's worldwide presence among service providers gives our team continuous exposure to some of the most complex business continuity and disaster recovery requirements. Building solutions to meet those dynamic customer requirements and compliance regulations is exciting and challenging work.

The loss of sensitive data occurs almost daily throughout the world. There is even an organization dedicated to tracking data loss events worldwide: http://datalossdb.org/ (you can subscribe to the RSS feed so you know as soon as the news is posted). When data is lost, a company's image and brand suffer while the tangible per-record cost is paid out to all those placed in harm's way. Users are creating data constantly, whether working at home, in the office, or en route on a business trip.

Laptops, while highly portable, often contain the most valuable data and are the easiest to lose in transit. Encrypting data at rest on a laptop is quickly becoming an industry standard, and hardware and software methods are available from several vendors to address it. However, ensuring that sensitive data is captured, transmitted, and stored in a secure manner may not be as straightforward. If the latest deal, acquisition, or presentation is created while on the road, how is it backed up and protected when the machine is left in a taxi or hotel? Several workarounds are available: users can copy sensitive data to an encrypted USB drive, connect to a corporate VPN and send it to a share, or email it to themselves. None of these options provides a reliable, consistent, and secure solution.

How can an IT department or a small organization protect valuable data as it is created, regardless of where the user is? Disk-to-disk data vaulting is a great option. Four things are needed to implement this solution: bandwidth, security, data reduction, and automation.

Available bandwidth may impact the efficiency and productivity of a user on the road, so backup activities should complete in as little time as possible and have the smallest possible impact on the mobile workforce. This can be accomplished through scheduled or continuous backups with CPU and bandwidth throttling at the application level: backups occur invisibly to the user and require no interaction to complete successfully. Data reduction can be performed with compression coupled with block-level deduplication, shrinking the amount of data to be sent and further increasing efficiency. Finally, wrapping this process in encryption, with a method to authenticate, authorize, and verify the authenticity of each endpoint, covers the security requirements. A sketch of this pipeline follows below.
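
Here is a minimal sketch of that backup pipeline, assuming fixed-size chunking and the Python cryptography library's Fernet primitive; real products use variable-size (content-defined) chunking, their own on-disk formats, and proper key management.

```python
import hashlib
import zlib

from cryptography.fernet import Fernet  # pip install cryptography

CHUNK_SIZE = 256 * 1024      # fixed 256 KB blocks, chosen for illustration
key = Fernet.generate_key()  # in practice, derived from a customer-held secret
fernet = Fernet(key)
seen_blocks = set()          # digests of blocks already in the vault

def backup_file(path):
    """Chunk, deduplicate, compress, and encrypt a file before transmission."""
    payload = []
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(CHUNK_SIZE), b""):
            digest = hashlib.sha256(block).hexdigest()
            if digest in seen_blocks:
                continue                       # block already vaulted: send nothing
            seen_blocks.add(digest)
            compressed = zlib.compress(block)  # shrink before encrypting
            payload.append((digest, fernet.encrypt(compressed)))
    return payload  # only new, compressed, encrypted blocks leave the laptop
```

Compressing before encrypting matters: encrypted data looks random and no longer compresses, so the order of the stages is what keeps the transfer small over a hotel or cellular link.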

Recovery is just as fluid and secure as the backup: from a single application interface, the user can recover the version of a file they need, or an entire machine if it is lost or damaged. And if the user cannot navigate the technology, the helpdesk can remotely recover data directly to the user's machine. More on recovery next time!

February 11, 2010

IDC’s Top 5 Cloud Applications for 2010

 

While the “cloud” term has been used and abused by everyone under the sun, there are in fact true cloud applications that deliver value to customers looking for data mobility and shared hardware/software costs, particularly for data- or content-heavy applications.

IDC recently released a report* outlining the top 5 cloud applications organizations are looking to pursue in 2010.  They are as follows:

1. Collaboration applications
2. Web applications/web serving
3. Cloud backup
4. Business applications
5. Personal productivity applications

How many of the five are you (or your customers) using?  Do you agree or disagree with this forecast?

*Source: IDC, Cloud Computing 2010, an IDC Update, Doc # IDC_P20476, September 29, 2009

