Weighing Your Digital Infrastructure Options for Enterprise Applications
A decade or two ago, a company looking to develop, test and roll out a new application across its enterprise really had only one option for hosting it – a traditional data center.
IT would analyze the requirements of that application and procure the equipment, software and networking components it required, independent of other applications. As the number of applications grew, businesses were forced to build data centers that could provide the cooling and power necessary to run all of that hardware.
Fast forward a decade or two, and the world looks a lot different – a lot more complex, as well.
Today, companies looking to develop and roll out new IT capabilities and applications have far more infrastructure and hosting options. Whether it's a financial services company launching a new customer-facing online banking tool, or a company fundamentally changing the way we think about ownership by enabling the sharing of houses or cars, they don't have to physically build the infrastructure, computers or data center needed to host their new applications – although they certainly still have that option.
What Has Changed About Data Centers?
First, there was the introduction of the cloud – infrastructure available as a service that effectively enables companies to build, test and host their workloads and applications on shared, third-party infrastructure that is geographically distributed around the globe. Several cloud providers are making this a reality, including AWS, Google Cloud and Microsoft Azure.
Then there was the introduction of colocation data center providers offering leased data center space. These companies effectively build the data centers necessary to house the compute, storage and network infrastructure that a company needs to operate its applications. While the company may still need to purchase and install the hardware and software necessary to deploy its business applications (depending on whether the outsourced provider offers retail or wholesale colocation), it's not on the hook for finding and buying the real estate and physically building the data center.
With all these hosting options now on the table, which is the right one for your business? Part of this decision comes down to the application or workload that a company is looking to host. Let’s take a closer look at the different applications that companies might need to host, and whether the cloud or a standalone infrastructure is right for them. Then, in a subsequent article, we’ll look at the applications that might need to be hosted in data centers, and how to decide whether to build, buy or lease a facility.
Why Host in the Cloud?
When a company elects to utilize a cloud provider for hosting an application, it's effectively hosting that application in any number of disparate data centers that the cloud provider operates across the globe.
Chances are, workloads or applications will be split up and operated in different data centers within a given geographic region. For this reason, an application or workload destined for the cloud should be modular in nature – capable of being split up – and shouldn't require proximity between those modular parts.
Aside from modularity, companies should also think about the scale profile of an application that they’re planning to host in the cloud. Applications that scale slowly and stay utilized at a relatively consistent rate don’t require the scalability benefits that are inherent in the cloud. However, an application that sees bursts of increased utilization can benefit greatly from the scalability and flexibility of the cloud.
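To make the scale-profile argument concrete, here is a minimal back-of-the-envelope sketch. The numbers are purely illustrative assumptions (not real provider pricing): owned capacity must be provisioned for peak demand and paid for every hour, while elastic cloud capacity is paid per server-hour actually used, at an assumed premium.

```python
# Hypothetical cost comparison for a bursty workload:
# fixed (peak-provisioned) capacity vs. elastic pay-per-use capacity.
# All prices and demand figures are illustrative assumptions.

# Steady load of 10 servers, with a 4-hour burst to 100 servers.
hourly_demand = [10] * 20 + [100] * 4

FIXED_COST_PER_SERVER_HOUR = 1.0   # assumed cost of owned, always-on capacity
CLOUD_COST_PER_SERVER_HOUR = 1.5   # assumed cloud premium per server-hour

# Owned infrastructure: capacity must cover the peak, paid for every hour.
peak = max(hourly_demand)
fixed_cost = peak * len(hourly_demand) * FIXED_COST_PER_SERVER_HOUR

# Elastic cloud: pay only for the server-hours each hour actually consumes.
cloud_cost = sum(hourly_demand) * CLOUD_COST_PER_SERVER_HOUR

print(f"fixed: {fixed_cost:.0f}, cloud: {cloud_cost:.0f}")
```

Even with a 50 percent per-hour premium, the elastic option wins here because utilization is bursty; flip the demand profile to a steady, highly utilized load and the fixed option comes out ahead, which is exactly the trade-off described above.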
This scalability and flexibility also make the cloud a good choice for application development, since a company can ramp up the resources it needs to develop an application without having to pay for both application infrastructure and physical data center space. That lowers the cost of development and R&D considerably.
Finally, there’s the question of ubiquity. Will an application or workload be accessed from multiple locations across the globe? If so, the cloud may be a good option: cloud providers already operate data centers worldwide, and the application can be geographically distributed in such a way as to decrease latency and improve the experience for users.
That being said, there are a lot of workloads and applications that companies wouldn’t want to host in the cloud. In my next post on Data Centers Today, we’ll look at why a company would want to keep an application in a physical data center that it owns or leases, and considerations for building, buying or leasing a facility.
Steve Conner serves as vice president, solutions engineering at Vantage Data Centers. He is responsible for leading the company’s sales team on technical requirements in pursuit of new business.
Conner has more than 25 years of experience building and leading highly motivated sales and engineering teams. Prior to Vantage, Conner led the sales and engineering teams at Cloudistics, taking the start-up’s revenue from $0 to over $5 million in its first year of sales. He also held multiple senior-level positions at Nutanix, where he built a multi-million-dollar business unit focused on managed service providers.
Conner holds a Bachelor of Science degree in computer science from the University of Richmond, a Master of Science degree in computer science from George Mason University, and an MBA from the Florida Institute of Technology. As part of his focus on technology and enterprise architecture, Conner has earned multiple certifications including CCNP/DP, CISSP and ISSAP.