
Another Dark Side of the Cloud: Data Center Migrations

April 23, 2015

Featured article by Irwin Teodoro, Datalink


As business units and IT organizations move to public cloud services, crucial application details often go missing. Some groups might not be particularly concerned by this. After all, as long as your application still performs well, why track the specific hardware that supports it? In the age of service level agreements and shared, multitenant architectures, does it really matter whether you know the physical server, storage system or network behind your application, as long as performance meets users' service level needs?

Yes, as a matter of fact, such details still matter, especially if you want to relocate or consolidate one or more of your data centers. For the same reason, the infrastructure details associated with each application matter after a company merger or acquisition, and they definitely matter if you ever need to bring an outsourced application back in-house. When such critical details are lost or difficult to obtain, your data center plans can be greatly complicated or delayed.

Why are application details so important? More importantly, what can happen when you don’t have them?

The Devil’s in the Details

Datalink oversees many data center transformations, including for companies that are just starting to deploy applications in the cloud. In the case of data center moves, a large part of the early process is spent identifying current applications, their business functions and their dependencies. We look for the infrastructure components (server, storage, network and database systems) related to each application.

This early fact-finding mission helps us perform another critical function prior to the move or migration event: infrastructure mapping. Before you can successfully upgrade, move or replace a specific rack of servers, you need to determine which applications and business functions rely on that part of the infrastructure. Keeping an application focus, we then map each application's dependencies within the infrastructure.

Another way to think of this process is 'application bundling' (see diagram). You minimize the risk of unexpected downtime by moving or migrating bundled groups of dependent items at the same time.

[Diagram: Application bundling identifies dependencies in the underlying infrastructure. Source: Datalink]
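Conceptually, bundling amounts to finding connected components in a dependency graph: any two applications that (transitively) share a server, storage system or database belong in the same migration bundle. The sketch below illustrates the idea in Python; the application and component names are hypothetical, and a real inventory would come from discovery tooling rather than a hard-coded dictionary.

```python
# A minimal sketch of application bundling: group applications that
# (transitively) share infrastructure so each bundle can move together.
# All application and component names below are hypothetical.
from collections import defaultdict

# Hypothetical inventory: application -> infrastructure components it uses
dependencies = {
    "billing-app":   {"server-rack-07", "san-array-02", "oracle-db-01"},
    "reporting-app": {"server-rack-07", "san-array-02"},
    "crm-app":       {"server-rack-12", "sql-db-03"},
    "intranet-app":  {"server-rack-12"},
}

def bundle_applications(deps):
    """Return groups of applications that share infrastructure components."""
    # Invert the inventory: component -> applications that depend on it
    component_to_apps = defaultdict(set)
    for app, components in deps.items():
        for comp in components:
            component_to_apps[comp].add(app)

    # Walk shared components to collect each connected group of apps
    bundles, seen = [], set()
    for app in deps:
        if app in seen:
            continue
        bundle, queue = set(), [app]
        while queue:
            current = queue.pop()
            if current in bundle:
                continue
            bundle.add(current)
            for comp in deps[current]:
                queue.extend(component_to_apps[comp] - bundle)
        seen |= bundle
        bundles.append(bundle)
    return bundles

for i, bundle in enumerate(bundle_applications(dependencies), 1):
    print(f"Bundle {i}: migrate together -> {sorted(bundle)}")
```

Run against the sample inventory, the sketch prints two bundles: billing-app with reporting-app (they share a rack and a storage array), and crm-app with intranet-app (they share a rack). Each bundle is a group you would plan to move in the same migration window.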

 

What Happens When Details Are Lost or Unavailable?

I have witnessed several cases where a customer's application data resided at a colocation provider or public cloud facility and the customer concluded it needed to bring the application back in-house. This may be due to performance issues or, more often, a newly identified compliance or security risk that requires the company to house the data on premises.

Here are some of the challenges we encountered:

– The biggest challenge was identifying what data was where. Outside providers are often unwilling or unable to share specific details or documentation about their hardware infrastructure, let alone how it affects your application. While some provide orchestration tools or dashboards that can be helpful, most won't allow the use of third-party environment discovery tools. In some cases, instead of detangling a customer's application from their infrastructure, we have seen providers simply send tape copies of the customer's data, which then had to be restored from media.

– Maintenance windows, where needed, had to be planned months in advance. In the age of multitenant architectures, we needed four months of negotiation and planning with one provider just to restart a storage system before bringing up a new application, because that storage system was shared by multiple customers.

– Outsourced applications, when brought in-house, can present unexpected challenges. If no local IT expertise is available, the company may need to plan for new support and training costs to manage the application itself.

Advance planning before you contract with a service provider can save some of these headaches. Much can also be avoided by developing a framework for identifying which applications are good candidates for hosting with an external cloud provider. When you work with service providers, it's important to plan for contingencies as well: document the process you would follow to move your data out of the provider's facility if necessary, and document what level of support the provider can offer for your own future migrations, upgrades or system maintenance needs.

Irwin Teodoro is Datalink’s National Director, Data Center Transformation, a practice that spans data center consolidation, data center relocation and data/infrastructure/application migration.

 
