Engineering a Better Digital Infrastructure – Five Data Center Considerations for Network Leaders
This is the third article in a series where we’re exploring the IT and data center considerations that leaders should keep in mind as they grow and expand their businesses. In parts one and two of this series, we explored five important IT and data center considerations for the chief executive officer (CEO) and the chief financial officer (CFO). In part three of this series, we take a closer look at the considerations for network leaders.
Network Leader Considerations
As companies invest in the data center and IT infrastructure necessary to operate both their customer-facing applications and business solutions that keep their organizations running, it’s the job of network leaders to ensure that their digital infrastructure is always connected.
But, as any data center operator will tell you, a lot of work goes into ensuring that data center infrastructure is connected and that those connections are optimized to both maximize resiliency and reduce latency.
How each hyperscaler, cloud provider or large enterprise data center customer manages their network connectivity can differ. Some may want to completely control it themselves. Others will want to work with a large, reputable telecommunications company for their network connectivity.
Regardless of how a company and its network leaders choose to connect their data centers, there are some universal considerations that they should keep in mind when making data center decisions. Here are five important considerations for network leaders as they make digital infrastructure choices to support the growth of their organizations:
How resilient and redundant is the connectivity to the campus?
Connectivity is one of the most important aspects of a data center. Alongside location and power, few attributes are as essential or as impactful to data center selection.
When it comes to choosing data center locations and providers, it’s critical that the provider offer a range of connectivity choices both into and out of the facility or campus environment. It’s equally important that connectivity be assured, and assured connectivity must be resilient and redundant.
When choosing a data center partner, network leaders should look for one with strong carrier relationships that give it influence over the design of the network around the data center facility or campus. In general, data center providers strive for diversity, so having at least two different carriers with service in and around the data center footprint is important. Ideally, the provider and the carriers have extended two or more diverse fiber loop laterals to the campus boundary, ensuring both local and wide-area diversification.
Route diversity into the campus is essential for route redundancy. If all the Zero Points-of-Entry (ZPOEs) lie on the same fiber loop, a customer is exposed to unwanted risk.
Imagine a backhoe on a road or infrastructure project adjacent to a campus accidentally cutting the “redundant” connections that ride a single loop: all connectivity would be lost at once. This is why data center providers, like Vantage Data Centers, work to ensure there is diversity in carriers, and that those carriers deliver diversity in their ZPOEs.
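The value of truly diverse paths can be made concrete with a simple availability estimate. The sketch below assumes each fiber path is up 99.9% of the time and that failures on diverse paths are independent; both figures are illustrative assumptions, not vendor data:

```python
# Rough availability math for redundant network paths.
# Assumption: each path is available 99.9% of the time and diverse
# paths fail independently -- illustrative numbers only.

def combined_availability(path_availabilities):
    """Probability that at least one path is up, assuming independence."""
    p_all_down = 1.0
    for a in path_availabilities:
        p_all_down *= (1.0 - a)  # all paths must fail together
    return 1.0 - p_all_down

single = combined_availability([0.999])
dual_diverse = combined_availability([0.999, 0.999])

print(f"single path: {single:.6f}")        # ~0.999000
print(f"two diverse paths: {dual_diverse:.6f}")  # ~0.999999
```

The catch is the independence assumption: two “redundant” circuits riding the same loop share the same backhoe, so their failures are correlated and the math above no longer applies. That is precisely why diverse ZPOEs matter.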
How resilient and redundant is the connectivity to the building?
Much like how the fiber leading into the campus needs to be resilient and redundant, the fiber within a data center campus also needs to be protected and assured.
To accomplish this, fiber should be brought into the data center campus via discrete ZPOEs using underground pathways separated by a minimum of 500 feet for safety. Within the campus, there should be a robust conduit plan that provides adequate distancing and pathing for network runs that protect against both congestion and route crossings, ensuring additional resiliency and redundancy.
Similarly, redundancy should exist within the data center itself. Each data center should offer multiple Points-of-Entry (POEs) – the demarcation points where fiber enters the data center. Ideally, the building POEs should maintain adequate distance and redundant paths between one another and to all data modules.
Network leaders should expect pathing to be a baseline in building design, but ideally, buildings should be laid out to ensure route separation regardless of the origination and destination points in the building.
Finally, the amount of available fiber can’t be overlooked as it can be a long process to connect fiber to a POE. Network leaders should be aware of the available strand count and know if there is enough capacity to meet their current and future needs.
How does my application work, and what does that mean for my network requirements?
Each application has different requirements. Some, such as the disparate microservices that make up a ride-sharing app or a social network, simply can’t tolerate latency. If that’s the case, then network leaders must think much more carefully about where their applications will reside and how the IT infrastructure is distributed geographically.
Applications can be complicated. They’re often not just one, big application – they can be an amalgamation of many smaller applications or microservices. If an application has multiple parts or microservices that combine to deliver a service to the user, then each of those microservices, depending on its latency requirements, needs to reside in the same data center, the same campus or a relatively close geographic location, or the application may slow to a tortoise-like crawl. Network leaders need to plan for adequate connectivity between data center modules in the same facility, as well as interconnectivity between buildings, to ensure applications work optimally and deliver the best user experience.
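The reason placement matters so much is that per-call network delay multiplies across a chain of microservices. The sketch below uses hypothetical numbers – eight sequential service calls per user request and illustrative round-trip times for three placement options – to show how quickly the network wait adds up:

```python
# Back-of-the-envelope latency budget for a request that chains
# through several microservices. All figures are illustrative
# assumptions, not measurements.

ROUND_TRIPS_PER_REQUEST = 8  # hypothetical: sequential service calls

placements_rtt_ms = {
    "same data hall":       0.1,  # intra-facility round trip
    "same campus":          0.5,  # building-to-building round trip
    "metro away (~100 km)": 2.0,  # metro fiber round trip
}

def network_wait_ms(rtt_ms, round_trips=ROUND_TRIPS_PER_REQUEST):
    """Total time a request spends waiting on the network."""
    return rtt_ms * round_trips

for placement, rtt_ms in placements_rtt_ms.items():
    print(f"{placement:>22}: ~{network_wait_ms(rtt_ms):.1f} ms of network wait")
```

Under these assumptions, moving chained services from one data hall to a metro away turns a sub-millisecond network budget into double-digit milliseconds per request – before any compute time is spent.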
When it comes to reducing latency, there are key drivers located both inside the fence – within the data center campus – and outside of the fence – the larger area around the campus.
Inside the fence, data center providers should be working to minimize hops between infrastructure pods. Each hop that is introduced risks light loss and increases the potential for jitter. Data center providers should ensure that conduit systems inside the fence allow for as many direct connections as possible to eliminate hops.
While it may seem extraneous to worry about what happens outside of the fence line, there are factors outside of the campus that could impact latency. Data center operators may not be able to control what happens outside of their fence line, but when choosing a location, they need to understand the overall latency lay of the land. The operator should work with their carriers to understand fiber distances between their location and other critical network junctions in the area. Wherever possible, the data center operator should work hand-in-hand with carriers to influence designs which limit fiber distances and overall pathing to outside entities.
Large route distances outside of the fence line could create significant latency. And route crossings outside of the data center fence line could have a direct impact on resiliency – making it easier for a single incident or accident to cut multiple fiber lines.
Location, location, location
Location matters. Today’s advanced applications operate better with lower latency and need to be hosted as close to the end-user as possible.
Network leaders need to think about their applications and how location can play into the application’s performance. If their application relies heavily on static content, network leaders will want to cache that content at the edge, close to the user. If content is dynamic, that may not be a possibility, and their applications may need to be housed in a centralized hub.
Large data center consumers should look to select their hub locations based on user density. When deciding where to put that hub, they need to understand the impact of latency on their application and how far away their data centers can be from users before the application degrades. This means understanding the metro area’s connectivity and the long-haul distance to other metros.
For example, if there is a large user base in Philadelphia, and the application will degrade if it’s hosted in New York, either New York shouldn’t be chosen as their primary data center location, or a network node may need to be established in the greater Philadelphia area.
When choosing a data center provider, it’s essential to select one that has data centers with availability in desired hub locations. And, while this is always important to consider, it becomes essential when a company is looking to expand and grow.
What are my international requirements?
A company may have eyes for expansion. But the market where they see the most opportunity for them and their solutions may be an ocean away. Technology and hyperscale companies often see the most greenfield opportunities on the other side of a massive body of water and on a different continent.
For example, oftentimes American companies are looking to grow their revenue and market share by expanding into Europe, Latin America or Asia. Likewise, companies in those geographies may seek to mimic the success of applications like TikTok and build a large user base in the U.S.
International expansion can be tricky in today’s world – data sovereignty and privacy regulations have a direct impact on what data can be stored in which country and what kind of data can leave a country’s borders. To identify what data center strategy will work best for their international expansion, network leaders have to first understand their application, its requirements and how susceptible it is to local data privacy regulations.
Does an application have any features that require compliance with local privacy laws? If yes, then a data center presence may need to be established in country. If the answer is no, then latency may be the primary factor in the decision on location.
If a company’s application can function in a geographically distributed footprint, international expansion could be enabled with strategically placed data centers – ones located as close as possible to the country a company is looking to expand into – using sub-sea cables.
For example, Miami, Virginia Beach or New York City could be ideal locations for expanding access to an application to Europe or other points due east. Los Angeles, Portland or Seattle could be ideal locations for expansion to Asia and other locations due west.
Should the application have usage profiles that simply cannot handle the roundtrip times associated with circumnavigating the globe, then network leaders need to look local. In those instances, the selection requirements previously mentioned come into play, with new attention being paid to local competition, and more importantly, local laws and regulations that may change the dynamics of networking in country.
All things considered . . .
Network leaders have an important job and face many difficult decisions impacting how their applications operate and what kind of user experience they’re delivering.
If applications are going to meet service level agreements (SLAs) and uptime requirements, the networks that connect them need to be resilient and redundant. If they’re going to work quickly and efficiently – delivering a quality user experience – the connections between infrastructure concentrations need to be optimized to reduce latency and increase resiliency. And underpinning all these considerations is the simple fact that location matters – especially for applications that are required to be near a user base.
Working with a data center provider that can partner with network leaders to meet these requirements is critical to the success of the applications running in each environment. Even the best data center is just a building unless it is resiliently and redundantly connected to the outside world.