The Evolution of the Hyperscale Data Center Provider – From Real Estate Firm to Tech Company

Hyperscale data center providers are nothing more than glorified real estate companies that build great big boxes with redundant power, good connectivity and quality HVAC systems. Right?

Wrong. While that may have been partially true in the past, it’s nowhere near the truth for today’s data center providers and colocation companies. That’s because today’s data center providers aren’t real estate companies anymore; they’re technology companies. And I’ll prove it to you.

Let’s explore how data center providers and colocation companies have evolved with hyperscalers and cloud providers, and why the demands of these large customers are reshaping the way that data center providers think and operate.



All about land, power and resiliency


Go back in time a decade and look at how data center providers were making site selection choices and building their data centers. You would have seen them following a simple yet effective methodology: identify the major markets where hyperscale companies had business demand, then build large facilities in those regions. Specifically, providers would target places where land was available, power was plentiful and there was the potential for acceptable connectivity.

In those days, the emphasis was on being physically located in a handful of tier one markets where the customer wanted to be and on building a data center that was as resilient as possible. This meant building data centers in markets such as Silicon Valley and Northern Virginia with a focus on redundant power, physical security and other features that ensured that the hardware within the data centers was always powered, cooled and physically secured.

It was an almost “Field of Dreams” approach to data center construction: build it, and they will come. Data center providers would build data centers in these key markets that met all of the requirements, and companies would lease the space. It’s a familiar approach – a real estate transaction – where one party has a building on land in a desirable area and leases it to another party that wants access to the land and building.

But that’s not what site selection, data center development and lease transactions are like anymore. Now, hyperscale customers require a more consultative approach. They require more of the data center provider. And there are more requirements than ever before that data center owners and operators need to keep in mind when going through the site selection and construction processes.



New apps have new requirements


To understand how the industry is shifting, let’s look beyond the hyperscale companies and cloud providers. Let’s also take a deeper look at the organizations that buy and use the services of these hyperscale companies and cloud providers. The ridesharing companies. The homestay marketplaces. The online stores. The mobile applications.

The services that hyperscalers and cloud providers deliver to customers must meet a high bar for speed and user experience. Yet the applications running in these environments are complicated.

In fact, many of these applications – from the ridesharing apps that connect users with cars and drivers, to the mobile business review sites that help users pick where they’re eating dinner – are an amalgamation of multiple components or microservices. Today, it is possible that many of these microservices will be housed in different data modules, different buildings or even different campuses.

This is shaping and evolving what those companies – and the hyperscale companies that service them – are looking for in a data center.

Now, I’m not implying that the traditional infrastructure requirements are changing. They’re not. Data centers still need to be resilient, have redundant power, and be in places with access to a reliable energy grid and renewable energy. None of that is going away. But there’s now a real need to have hyperscale data centers in not only top tier markets, but also markets that were once considered secondary or edge markets. And those data centers need to be built with an incredible amount of attention to the network and connectivity.

When you look at a modern application with multiple microservices, you realize that one of the biggest killers of user experience is latency. When all those microservices come together to power one app or enable one larger user experience, latency between them can be the difference between an application that is usable and one that is downright infuriating.

Take a ridesharing application as an example. One microservice identifies a user’s location and finds the closest car and driver. A database maintains the driver’s information, license plate and customer rating, and that data needs to be retrieved and displayed on the user’s screen. Another database likely holds the user’s credit card information and seamlessly bills them for the ride. All these things need to happen to make just one trip possible. If there is latency in any of those microservices and processes, the entire application – or at least the user experience – could suffer.
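
To make the latency arithmetic concrete, here is a minimal sketch in Python. The service names and millisecond figures are hypothetical, invented purely for illustration; the point is that when serial microservice calls cross facility boundaries, the inter-facility round-trip time is paid on every hop.

```python
# Hypothetical processing times for the serial calls in the ridesharing
# example above. None of these figures come from a real system.
PROCESSING_MS = {
    "locate_user_and_match_driver": 25.0,
    "fetch_driver_profile": 8.0,    # license plate, rating, photo
    "authorize_payment": 40.0,      # stored credit card billing
}

def end_to_end_latency_ms(inter_facility_rtt_ms: float) -> float:
    """Total request latency if each serial call crosses facilities once:
    the network round trip is paid per hop, on top of processing time."""
    network_ms = inter_facility_rtt_ms * len(PROCESSING_MS)
    return sum(PROCESSING_MS.values()) + network_ms

# Same room, same campus, and a distant metro, respectively.
for rtt_ms in (0.002, 0.5, 10.0):
    total = end_to_end_latency_ms(rtt_ms)
    print(f"inter-facility RTT {rtt_ms:>6.3f} ms -> request total {total:6.2f} ms")
```

The processing work is identical in all three cases; only the placement of the microservices changes the user’s wait.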



This is why connectivity – more specifically, low-latency connectivity – is critical. Each data center needs multiple points of entry and multiple carriers. As businesses break down the walls of the data center and place microservices in disparate locations, distance becomes paramount. Service providers need to rethink inter-building connectivity so that ultra-low-latency microservices can live not only in a different room, but in different buildings or even on different campuses. Providers need to take a deep look at their carriers and make sure that overall fiber distances between sites and cloud locations stay short. In today’s world, customers are no longer talking about double-digit-microsecond round trips; in some cases they are looking at two microseconds or less, which changes the site selection game.
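
The physics behind that two-microsecond figure is worth spelling out. Light in silica fiber travels at roughly two-thirds the speed of light in vacuum – about 200 meters per microsecond – so a round-trip budget translates directly into a maximum fiber path length. Here is a back-of-the-envelope sketch (standard physics, not any provider’s published spec):

```python
C_VACUUM_M_PER_US = 300.0      # speed of light in vacuum, ~300 m per microsecond
FIBER_VELOCITY_FACTOR = 0.67   # light in silica fiber travels at ~2/3 c

def max_fiber_path_m(rtt_budget_us: float) -> float:
    """One-way fiber length that fits in a round-trip latency budget,
    ignoring switching, serialization and queuing delays."""
    one_way_us = rtt_budget_us / 2.0
    return one_way_us * C_VACUUM_M_PER_US * FIBER_VELOCITY_FACTOR

for budget_us in (2.0, 10.0, 50.0):
    print(f"{budget_us:>5.1f} us RTT budget -> at most ~{max_fiber_path_m(budget_us):,.0f} m of fiber")
```

At a two-microsecond round trip, the entire fiber path between microservices can be only a couple hundred meters, which is why adjacent buildings and tightly interconnected campuses matter so much.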

This is also why location and site selection are evolving. What used to be considered a lower-tier or edge market may now be a tier one market. Companies with latency-sensitive applications need to be in locations close to their customers to ensure that applications operate as efficiently as possible for an optimal user experience.



But… a technology company?


So far, I’ve explained a few reasons why data center providers have reshaped their site selection and data center development processes to meet this shifting demand from hyperscalers and cloud providers. I begrudgingly admit that doesn’t make data center companies technology companies. What has made them technology companies, however, is the work that they’re doing with their customers to ensure facilities are aligned with the applications residing inside of them.

To truly serve these companies today, data center operators need to be much more consultative. They can’t just hand a prospective customer a lease and collect the rent each month; they need to work with customers hand-in-hand to identify and understand their requirements. That means learning more about their applications and what makes them work. That means gaining important insights into the user experience, the connections necessary with other data centers and large cloud providers, and partnering to provide them with a solution that meets their needs.

Today, more than any time in the past, data center providers are working with their customers to understand what will be running in their data centers. For example, application and systems architectures have a direct impact on the way data modules are laid out. Providers may need to increase the size of the data modules, and/or they may need to look at densities associated with specific server types. Floor loads may need to increase in order to support greater rack densities. Application and server architecture can even have an impact on the way power is distributed to the racks.
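
As one illustration of that ripple effect, here is a rough sizing sketch in Python. Every figure is hypothetical, chosen only to show how a customer’s rack-density decision propagates into module power and floor loading; it is not a Vantage specification.

```python
# Hypothetical inputs: a denser server architecture chosen by the customer.
RACKS_PER_MODULE = 200
KW_PER_RACK = 17.0          # high-density compute racks
RACK_WEIGHT_KG = 900.0      # fully loaded cabinet
RACK_FOOTPRINT_M2 = 1.2     # cabinet footprint plus its share of clearance

module_it_load_mw = RACKS_PER_MODULE * KW_PER_RACK / 1000.0
floor_load_kg_per_m2 = RACK_WEIGHT_KG / RACK_FOOTPRINT_M2

print(f"Module IT load: {module_it_load_mw:.1f} MW")          # drives power distribution
print(f"Floor loading:  {floor_load_kg_per_m2:,.0f} kg/m^2")  # drives structural design
```

Double the per-rack density and the module’s power distribution, cooling and structural floor design all have to change with it.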

All the decisions that application developers make ripple directly into the structure of the facility. We know that these applications rely heavily on network connectivity. By gaining a better understanding of how a cloud provider’s networks are structured, we as providers can fine-tune our site selections and network designs. We start to put network density and proximity at the top of the site selection list. We start to change the way we interconnect our buildings. We may even change the conduit infrastructure during the build to handle the large amount of cabling required to meet the demands of the applications.

The game is no longer the same for hyperscale companies and cloud providers. Their end users and customers have shifting requirements, which creates an environment of dynamic application and architecture change. Data center providers that are stuck in the past are missing the boat. Today, application architectures are putting new and unique pressures on every attribute of the data center – power, cooling, security and connectivity. For hyperscale companies and cloud providers to meet their customer requirements, data center operators need to evolve with them. Today, it’s the data center provider that thinks and acts like a technology company – not a landlord – that will best meet the needs of its customers.
