Published On : October 21, 2024
In this digital world, the market for hosting, storage, and cloud computing was projected to reach USD 163 billion in 2021. As this market witnesses extensive growth, on-demand data centers must deliver efficiency in every aspect and maintain seamless connections 24/7 to stay competitive.
Regardless of the size of your enterprise, and whether you are a cloud service provider or a business looking to keep data on-prem, good design is critical to creating and maintaining an efficient data center.
Although data center expenditure fell 10.3% during the pandemic, Gartner expects growth in this area through 2024, with $200 billion invested in data center infrastructure alone to meet demand.
As hefty investments are being made in building or refurbishing data centers, these six considerations will help you design a fail-proof data center in which your business can thrive safely.
First, plan your data center design, and then plan the costs around it. The following TCO parameters must be taken into account in both the plan and the budget.
CapEx: The investment required for purchasing IT and cooling equipment, location, certifications, and other obvious elements.
OpEx: Maintenance and operating expenses are often neglected. Failing to train and equip the personnel who maintain and operate the equipment creates risks that are otherwise avoidable. Allocate funds both for the people handling the equipment and for the infrastructure that supports maintenance and operation.
Energy costs: Estimate power consumption and the infrastructure costs needed to keep your systems online 24/7.
To build a holistic plan, consider these three parameters while leaving room for growth and expansion (see consideration 6). Estimate your data center needs from the start so you can allocate reasonable funds.
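As a rough illustration of how the three parameters combine, a TCO estimate can be sketched over a planning horizon. All figures below are hypothetical placeholders, not benchmarks:

```python
# Rough TCO sketch: CapEx is paid once, while OpEx and energy
# costs recur for every year of the planning horizon.
def total_cost_of_ownership(capex, annual_opex, annual_energy_cost, years):
    """Return total cost of ownership over the given horizon in years."""
    return capex + years * (annual_opex + annual_energy_cost)

# Hypothetical example: $5M build-out, $400k/yr staff and maintenance,
# $600k/yr energy, over a 10-year horizon.
tco = total_cost_of_ownership(
    capex=5_000_000,
    annual_opex=400_000,
    annual_energy_cost=600_000,
    years=10,
)
print(f"10-year TCO: ${tco:,.0f}")  # $5M + 10 * $1M = $15,000,000
```

Even a back-of-the-envelope model like this makes the point of the section concrete: over a decade, recurring OpEx and energy costs can dwarf the initial CapEx.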
Data centers worldwide consume an estimated 200 to 500 TWh of electricity annually. Power is where the largest investment is made, and it requires mission-critical, strategic planning.
“Data center owners have so many problems right now. Their assets are mission-critical, but they are out of control. Power consumption is costing them a fortune. They can’t cool what they have got and cut the risk of a catastrophic outage. And if they make an investment, by the time it is built, it is already out of date” – Stanford Group
In Tier 3 data centers, multiple paths for power and cooling, such as UPS systems and backup generators, keep systems running during maintenance without taking them offline. Tier 4 data centers, in contrast, are enterprise-class and fully fault-tolerant, applying redundancy to every component across the infrastructure. Both classes keep systems online either way. A decade ago, Tier 4 data centers were in demand for their redundancy, but today Tier 3 is sufficient for most companies.
Calculating power consumption based on your data center's requirements in the design phase itself can save a lot of time and money later.
These estimations will help determine several other factors like location, energy costs, adoption of green energy, expansion facility, etc., which we will discuss further.
You can also consider going eco-friendly by adopting green initiatives. Most companies focus on "speed and feed," concurrent maintainability, Power Usage Effectiveness (PUE), and Leadership in Energy & Environmental Design (LEED) certification while planning power infrastructure, which is good practice.
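PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean every watt goes to IT. A short sketch (IT load, PUE, and electricity rate are assumed example figures) shows how PUE translates a planned IT load into an annual energy bill:

```python
HOURS_PER_YEAR = 8760  # 24/7 operation

def annual_energy_cost(it_load_kw, pue, rate_per_kwh):
    """Estimate yearly energy cost for a constant IT load at a given PUE.

    PUE = total facility power / IT equipment power, so the facility
    draw (IT plus cooling, power distribution losses, lighting, etc.)
    is the IT load multiplied by the PUE.
    """
    total_kw = it_load_kw * pue
    return total_kw * HOURS_PER_YEAR * rate_per_kwh

# Hypothetical example: 500 kW IT load, PUE of 1.5, $0.10 per kWh.
cost = annual_energy_cost(it_load_kw=500, pue=1.5, rate_per_kwh=0.10)
print(f"Estimated annual energy cost: ${cost:,.0f}")  # about $657,000
```

The same function also shows why PUE matters: under these assumptions, lowering PUE from 1.5 to 1.2 cuts the annual bill by roughly a fifth with no change to the IT load.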
When systems run 24/7, they generate substantial heat. Data centers need to adopt cooling techniques, ranging from air conditioning to sophisticated liquid cooling, to protect IT equipment from heat damage and other hazards.
The cooling technique you adopt, as mentioned earlier, influences the power consumption, energy costs, location, and other finer details of the data center.
Traditional air conditioning units: Industrial air conditioners consume more energy but produce and circulate chilled air to keep data centers at optimum temperatures.
Water cooling units: Wet cooling methods are more efficient than most air-based methods. Data centers adopting this technique are often built near large bodies of water.
Outdoor Air Circulation: Regions with considerably lower outdoor temperatures can circulate outside air into the site to moderate indoor temperatures.
Localized cooling units: Cooling units are placed directly in the "warm rows." When planned efficiently, air need not be transported through ducts, and the IT equipment can be cooled with precision.
A smart airflow plan can reduce cooling expenses by 30%. Before determining the cooling technique for your data center, study the existing conditions in and around the site with CFD Analysis.
Mechartes, by giving you insights from CFD analysis, can help you determine the best cooling technique for your needs and save energy. Using CFD analysis, they have studied the airflow and heat generated by outdoor environments around high-rise buildings and found that it impacts the indoor conditions of data centers. Their 3D models study heat transfer patterns and airflow within and outside data centers to help predict air speed and temperature ranges under diverse operating conditions.
By using similar studies, the type of cooling technique can be determined, and concepts like hot/cold aisle, raised floors, filler panels, and physical barriers between aisles can be implemented in appropriate conditions.
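CFD gives the detailed picture, but a first-order airflow estimate follows from the standard heat-balance relation: the volume of air needed to carry away a heat load Q with a temperature rise dT is Q / (rho * cp * dT). A minimal sketch, with the rack load and aisle temperature rise as assumed example figures:

```python
# First-order airflow estimate from a heat balance:
#   volume flow (m^3/s) = Q / (rho * cp * dT)
# using typical properties of air near room temperature.
AIR_DENSITY = 1.2   # rho, kg/m^3
AIR_CP = 1005.0     # cp, specific heat of air, J/(kg*K)

def required_airflow_m3s(heat_load_w, delta_t_k):
    """Airflow needed to remove heat_load_w with a delta_t_k air temperature rise."""
    return heat_load_w / (AIR_DENSITY * AIR_CP * delta_t_k)

# Hypothetical example: a 10 kW rack with a 12 K rise
# between the cold aisle and the hot aisle.
flow = required_airflow_m3s(heat_load_w=10_000, delta_t_k=12.0)
print(f"Required airflow: {flow:.2f} m^3/s per rack")  # about 0.69 m^3/s
```

Note the trade-off this formula exposes: a larger allowed temperature rise between aisles (better hot/cold aisle separation) directly reduces the airflow, and therefore the fan energy, needed per rack.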
Poor cabling can affect data transfer, airflow, and cooling efficiency; it also takes up more space and becomes a problem when expanding or adding servers.
While average data centers opt for copper or fiber optic cabling, consider going the extra mile when designing or improving cable infrastructure. To stay competitive, focus on minute details like these, which can ultimately add to your fail-proof data center design.
Security in a data center facility is vital. Breaches can lead to unwanted dust accumulation that strains maintenance, unnecessary downtime caused by the careless mistakes of workers or unauthorized personnel, or, worse, data theft.
CCTV monitoring, fencing, multi-factor authorization, and biometric scanning are some of the security features you can consider for your data centers, whether or not they hold confidential data. These features are mainly meant to keep track of who has access to the facility.
But a facility serving clients with sensitive data, such as the healthcare sector, requires security personnel manning the entry and exit points and the spaces between floors to restrict access to authorized personnel only.
In colocation data centers, intelligent monitoring software is a client favorite worldwide. It gives clients seamless visibility into, and control over, the power and security of the facility. Using RFID technology, they can track the power consumption, bandwidth spikes, and other metrics of their assets.
According to Gartner, the average data center is nine years old, the age at which it maxes out on capacity, and most data center sites become obsolete after seven years. Often the only fix for a failing data center is to build a new one, which does not come cheap. If you don't want to burn a hole in your pocket, adopt a data center design flexible enough to allow for expansion.
Typical plans for building a data center are based on:
Watts per sq. feet
Cost to build per sq. feet
Tier level
These factors leave no room for growth, lead to poor use of capital, and increase operational expenses. Such plans are misaligned with business goals and risk profiles.
Without overbuilding your data center, you can plan for expansion by predicting your needs and devising a mission-critical design, starting from TCO parameters to finer details like cabling.
Switch, founded in 2000, is a colocation data center company that ranked ahead of Google and Apple by going green in 2016. Using solar and wind energy supported by geothermal plants to power its data centers, Switch consumes 850 MW at 4.9 cents per kilowatt-hour.
The locations of four of its main campuses were carefully chosen to withstand natural disasters and to provide a secure facility for clients to host and store their data. Its 17.4 million sq. ft campus includes a 1.3 million sq. ft warehouse called "The Citadel." A 500-mile network of fiber-optic cable called the "superloop" links Los Angeles, Las Vegas, and the Citadel, providing high speed and low latency between all sites and their surroundings.
In addition, the facility is protected by high concrete walls and a double roof called the "Switch Shield," built to withstand the strong winds of the Nevada desert.
Guarded by ex-military or similarly qualified security personnel, and with an in-house fire service able to tackle any outbreak within minutes, the Citadel has become an ideal colocation solution and stands as one of the most trusted data centers globally, with 100% uptime and mission-critical security at reasonable prices.
Tahoe Reno 1 has already reached 80% capacity and is now focusing on expansion to host uninterrupted internet service for users globally, as demand never seems to decrease.
Switch's data center design follows all six considerations discussed in this article, and the company has clearly topped the charts.
Mechartes analyses conditions and environments within and outside data centers to design sustainable cooling and power infrastructure solutions. Learn more about Mechartes’ data center design validation services that can help you save on energy and capital.