Open access peer-reviewed chapter

Data Centre Infrastructure: Design and Performance

Written By

Yaseein Soubhi Hussein, Maen Alrashd, Ahmed Saeed Alabed and Saleh Alomar

Submitted: 05 December 2022 Reviewed: 13 January 2023 Published: 12 February 2023

DOI: 10.5772/intechopen.109998

From the Edited Volume

Latest Advances and New Visions of Ontology in Information Science

Edited by Morteza SaberiKamarposhti and Mahdi Sahlabadi


Abstract

The tremendous growth of e-commerce requires an increase in data centre capacity and reliability to maintain an appropriate quality of service. Optimising data centre design is considered a green technology that shows great promise for decreasing CO2 emissions. However, a large data centre consumes a great deal of power, because high-capacity racks demand more powerful cooling systems, power supply, protection and security. These factors make a data centre costly and can render its services infeasible. In this chapter, we provide a tier 4 data centre design to be located at an optimal location in Malaysia, Cyberjaya. The main purpose of this design is to provide e-commerce services, especially food delivery, with high quality of service and feasibility. All data centre components have been designed to provide various services, including top-level security, a colocation system, reliable data management and IT infrastructure management. Moreover, recommendations and justifications are provided to show that the proposed design outperforms other data centres in terms of reliability, power efficiency and storage capacity. In conclusion, each component of the proposed data centre is analysed, synthesised and evaluated.

Keywords

  • data centre
  • storage infrastructure
  • data centre infrastructure management (DCIM)
  • security
  • scalability

1. Introduction

Meza is a home-grown data centre company that provides various services, including top-level security, reliable data management and IT infrastructure management. Meza is expected to build several data centres across Malaysia. A Malaysian food delivery application company has more than 5 million users of its services, and the number of users is increasing day by day. The current infrastructure is insufficient to handle the vast amount of data processing, and users may face a poor experience due to long server response times and slow processing. Therefore, the company has appointed Meza to construct a data centre that caters to its continued growth. The data centre will be required to process online food ordering and online payments, as well as customer relationship management to consolidate communication in one inbox and engage with clients.

This chapter proposes a data centre design with the essential components for this food delivery application company and analyses, synthesises and evaluates each component of the proposed data centre. Other data centre aspects, such as power usage effectiveness and efficiency, cooling systems and protection, are discussed in the chapter Data Centre Infrastructure Power Efficiency and Protection (Figure 1).

Figure 1.

APU data centre.

2. Analysis

2.1 Customer requirements

The first basic requirement that comes to mind when building a data centre is the customer requirements. Customers are an essential entity. This data centre has been designed to accept enormous traffic loads, which means that many customers can order at the same time without the frustration that occurs when a system crashes under overload. Furthermore, a seamless online chat function has been proposed that operates from the time an order is placed until it has been delivered. This feature gives customers more confidence in using the delivery system, as they can raise any issues regarding payment, orders and so on. This is made possible by the newly proposed network infrastructure.

Moreover, thanks to the newly improved network infrastructure, orders can be grouped more efficiently and delivered more quickly. Lastly, the data centre is designed to be transparent, allowing both customers and delivery riders to key in ratings; this encourages individuals to maintain a good reputation in order to earn perks and makes the system more trustworthy.

2.2 Data centre requirements

There are a few requirements for a data centre to be robust, such as:

  • Availability/tier selection: to achieve high availability, Meza has critically analysed the different data centre tiers. There are four tier classifications. After comparing them, the company has decided on tier 4, since the data centre serves a food delivery application with a huge number of customers. As stated by [1], a tier 4 data centre has an uptime of 99.995% per year and a ‘2N + 1’ fully redundant infrastructure, which serves the needs of the food delivery application. It has an annual downtime of only 26.3 minutes per year, compared with 1.6 hours per year for tier 3 (a worked downtime comparison is sketched after this list).

    Furthermore, tier 4 has been chosen for Meza because of its failure-tolerant design. As expressed by [2], failure-tolerant design is an essential part of the many benefits offered by a tier 4 data centre. It allows unplanned failures to be sustained without impacting the critical loads of the site’s infrastructure. Additionally, if any distribution or capacity component fails, the computer equipment of a tier 4 data centre shall not be affected; the system responds automatically to avoid further disasters. Moreover, a tier 4 data centre has several distribution paths that can serve the site’s computer equipment at the same time. All IT equipment is dual-powered and has additional backup. Lastly, it is also supported by [3] that fault tolerance is especially crucial for mission-critical applications and systems. This tier offers the most important level of protection, and tier 4 also provides protection against electricity outages for up to 96 hours (Figures 2 and 3).

  • Scalability: The food-ordering application company has more than 5 million active users, and the number is increasing with time. The planned data centre will also be able to offer continuous scalability and colocation facilities. This is one of the most critical aspects of constructing data centres, because the capability to expand and handle additional data or customers is necessary; if scalability is not taken into account, it may impact the data centre architecture in the long run. Any future change to the data centre that requires more space, devices or other technical resources must be managed effectively without affecting the key existing data centre elements.

  • Security: The data centre is required to store client information and process online payments and customer orders. The processes the data centre will handle most are payments and orders, and payments must be made in the most secure way possible. It is also critical that the proposed data centre meets the highest data protection standards. To guard against both internal and external threats, data security must include cyber security measures as well as physical security. A high-security data centre ensures data integrity and maintains customer trust. As a data centre hosts the information, software and facilities that companies use every day, organisations must ensure that they apply adequate data centre protection. Lack of effective data centre protection may lead to privacy abuse when confidential business information is leaked or compromised.

  • Manageability: Manageability in data centres concerns the responsibilities and processes associated with IT infrastructure management. A survey of 300 data centre professionals conducted by Tintri found that 49% of the respondents identified manageability as their biggest concern [6]. As manageability is such a critical part of the data centre, with authors in [7] categorising and evaluating fog data management, Meza has planned the data centre with manageability as one of its focus areas. Modern data centres rely on automation to improve manageability. Meza, with years of experience, has analysed the use case extensively and decided that the data centre will use Data Centre Infrastructure Management (DCIM) software (Figure 4).
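
As a quick illustration of how the availability figures quoted for the tiers translate into downtime, the following sketch converts annual uptime percentages into expected downtime per year. The tier 4 and tier 3 figures match those cited from [1]; the tier I and II values are the commonly quoted ones and are included only for comparison.

```python
# Convert annual availability percentages into expected downtime per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # roughly 525,960 minutes

tiers = {
    "Tier I": 99.671,    # commonly quoted value, for comparison only
    "Tier II": 99.741,   # commonly quoted value, for comparison only
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, uptime_percent in tiers.items():
    downtime_minutes = (1 - uptime_percent / 100) * MINUTES_PER_YEAR
    print(f"{tier}: {uptime_percent}% uptime -> "
          f"{downtime_minutes:.1f} min/year ({downtime_minutes / 60:.1f} h/year)")
```

Running this reproduces the figures quoted above: roughly 26.3 minutes of downtime per year for tier 4 against about 1.6 hours per year for tier 3.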

Figure 2.

Some features of different data centre tiers [4].

Figure 3.

Some features of different data centre tiers [5].

Figure 4.

A commercial DCIM software developed by Intel [8].

According to [9], DCIM covers monitoring, measuring, managing and controlling data centre utilisation and the energy consumption of all IT-related equipment and facility infrastructure components. This equipment includes power distribution units, servers and network switches, to name a few. A typical data centre carries a large workload, and that workload grows with the size of the data centre. For a food delivery company with millions of users, managing the data centre manually would be unrealistic and it could not be run effectively. DCIM performs tasks that would otherwise have to be carried out by data centre personnel. An important feature of DCIM is a real-time [10] central dashboard that displays information about critical systems gathered from sensors and equipment. Data centre personnel are thus better informed about operations and more likely to predict the next outage and avoid it. In addition, DCIM can handle non-day-to-day tasks such as change management. Therefore, DCIM is a critical piece of software for improving data centre manageability. The use of DCIM by Meza in this data centre will bring substantial benefits, resulting in less downtime and more robust manageability (Figure 5).
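
To illustrate the kind of real-time monitoring a DCIM dashboard performs, the short sketch below checks hypothetical rack sensor readings against configured thresholds and raises alerts. The sensor names, thresholds and readings are illustrative assumptions, not values taken from any particular DCIM product.

```python
# Minimal sketch of DCIM-style threshold monitoring (illustrative values only).

# Hypothetical thresholds for rack-level sensors.
THRESHOLDS = {
    "inlet_temp_c": 27.0,   # assumed inlet temperature ceiling
    "power_kw": 8.0,        # assumed per-rack power budget
    "humidity_pct": 60.0,   # assumed upper humidity bound
}

# Hypothetical readings as they might arrive from rack sensors.
readings = [
    {"rack": "A01", "inlet_temp_c": 24.5, "power_kw": 6.2, "humidity_pct": 48.0},
    {"rack": "B07", "inlet_temp_c": 29.1, "power_kw": 7.9, "humidity_pct": 51.0},
    {"rack": "C03", "inlet_temp_c": 25.0, "power_kw": 8.6, "humidity_pct": 45.0},
]

def check_rack(reading):
    """Return a list of (metric, value, limit) violations for one rack."""
    return [(metric, reading[metric], limit)
            for metric, limit in THRESHOLDS.items()
            if reading[metric] > limit]

for reading in readings:
    for metric, value, limit in check_rack(reading):
        print(f"ALERT rack {reading['rack']}: {metric} = {value} exceeds {limit}")
```

In a real deployment these checks would be driven by live sensor feeds and surfaced on the central dashboard, which is what allows personnel to anticipate and avoid outages.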

Figure 5.

Cabling contributes to large number of system outages [11].

Since cabling performance is a major factor in system outages, Meza will use cabling from providers whose cables can sustain high performance. Meza expects that using a high-quality data centre fabric will reduce system outages and increase the overall manageability of the data centre.

  • Cost: All business organisations strive to deliver the best performance at the lowest possible cost. It is in the interest of both Meza and the food delivery company to bring costs down while meeting business requirements. Total Cost of Ownership (TCO) is an estimate that covers both building and operating the data centre. For the food delivery company, it is necessary that the TCO of building and operating a data centre is lower than hosting the application on a public cloud such as Amazon Web Services (AWS). According to [12], the largest driver of cost is the unnecessary, unabsorbed cost resulting from oversizing the infrastructure. Meza has therefore decided to deploy an adaptable physical infrastructure system, which substantially reduces the waste due to oversizing and, as a result, the total cost of ownership (Figure 6).

Figure 6.

Charts showing waste due to oversizing between non-adaptable (a) and adaptable (b) approaches [13].

As shown in Figure 6, the non-adaptable room capacity design (a) is sized for the final load from the outset, whereas the adaptable physical infrastructure system (b) grows as the load increases, reducing the waste due to oversizing.

In addition to this, Meza plans to reduce operating costs through the use of Data Centre Infrastructure Management (DCIM) software. One of the fundamental features of DCIM software is automation across the board. Automation reduces manual labour while providing situational awareness. For example, resources such as energy can be scaled up automatically during peak hours instead of running at maximum performance all day regardless of the load. Moreover, DCIM software also allows data centre personnel to predict the life cycle of physical infrastructure equipment, so they can replace equipment before it becomes faulty and avoid damaging additional equipment through failures.
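
A rough way to see why oversizing drives TCO, in line with the adaptable approach of Figure 6, is to compare the unused (and therefore wasted) capacity of a facility built to its final size on day one with one that grows in modular steps as the load rises. The load profile, final capacity and module size below are illustrative assumptions, not figures from the proposal.

```python
# Illustrative comparison of wasted capacity: fixed full-size build vs. modular growth.

final_capacity_kw = 1000          # assumed ultimate design capacity
module_size_kw = 250              # assumed size of each modular expansion step
# Assumed IT load ramping up over a 10-year life (kW at the end of each year).
load_profile_kw = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

def modular_capacity(load):
    """Smallest number of whole modules that covers the current load."""
    modules = -(-load // module_size_kw)   # ceiling division
    return modules * module_size_kw

fixed_waste = sum(final_capacity_kw - load for load in load_profile_kw)
modular_waste = sum(modular_capacity(load) - load for load in load_profile_kw)

print(f"Unused capacity, fixed build:   {fixed_waste} kW-years")
print(f"Unused capacity, modular build: {modular_waste} kW-years")
```

Under these assumptions the fixed build carries several times more idle capacity over its life than the modular build, which is the waste that an adaptable physical infrastructure system avoids.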

2.3 Environment

The data centre will be located in Cyberjaya, a specialised information technology district near Kuala Lumpur, Malaysia. The location is around 30 minutes away from the Kuala Lumpur city centre as well as the Kuala Lumpur International Airport.

Geographically, Malaysia is a well-known, stable region where the risk of natural disasters such as tsunamis and earthquakes is extremely low. Cyberjaya in particular sits above sea level throughout the year, so there is almost no chance of massive flooding. Cyberjaya is the core of the Multimedia Super Corridor (MSC Malaysia); MSC status guarantees world-class infrastructure for the IT industry and 99.9% guaranteed reliability in advanced telecommunication technologies. The rental rate starts from MYR 2.50 per square foot [14].

The environment is highly secure. It has a state-of-the-art CCTV system integrated with Malaysia’s Emergency Response System, with police personnel monitoring the CCTV footage at all times and a quick response time for any emergency. These measures create a secure environment for the community (Figures 7-9).

Figure 7.

(NTT, n.d.).

Figure 8.

The environment of Cyberjaya [15].

Figure 9.

Illustration of the proposed data centre floor plan.

3. Data centre design

3.1 Data centre floor plan

3.1.1 Floor plan justification

The data centre floor plan above consists of 8 unique components that are necessary for a data centre, including a surveillance room for monitoring physical security and creating daily reports and analytics. The components that have been included are:

  • Electrical supply: it is used to provide electricity for the entire data centre.

  • Cooling system: it has been placed to provide cooling for the server racks and to avoid overheating when the computing resources are in use.

  • Computing resources: they provide all the processing power for the database and support communication between the clients and the customers.

  • Server racks: they provide the allocated space for mounting the computing resources.

  • Network infrastructure: it is responsible for smooth communication between individuals as well as for faster and more secure payment [16].

  • Storage infrastructure: this is where all the information is stored, such as restaurant menu details, customer credentials and so on.

  • Fire detection system: in case of overheating caused by the computing resources or an electrical overload, the fire detection system will detect it early so that the necessary steps can be taken.

  • Fire suppression system: in the event of any component catching fire, this system will be responsible for suppressing it.

Furthermore, the data centre has been equipped with an Uninterruptible Power Supply (UPS) backup battery and a diesel generator in case of a power failure, in order to achieve higher availability for the system. Lastly, state-of-the-art closed-circuit television (CCTV) has also been installed in the data centre cabinets to be monitored remotely by senior staff (Figure 10).

Figure 10.

Racks inside a data centre [17].

4. Data centre components

4.1 Racks

In a data centre, racks can be considered the building blocks. Traditionally, racks were mostly used for stacking IT equipment and saving floor space. However, racks in today’s data centres play a vital role in mounting heavy IT equipment and providing an organised environment for power distribution, airflow distribution for better cooling performance and cable management, among other features [18]. Data centres demand a rack infrastructure that can mount a variety of equipment such as servers and switches. Therefore, it is important that the rack infrastructure can meet these requirements while offering sustainable performance.

4.1.1 Equipment in racks

The major equipment inside the racks will be the compute servers, storage servers and networking equipment such as switches. Different racks will have different compositions of this equipment.

  • Compute servers

The main compute resources in a data centre are the servers. Most of the racks will be utilised for mounting rack servers for compute purposes. These servers are used for compute-intensive tasks such as processing and database hosting. These servers will be using enterprise-level processors such as Intel Xeon or AMD EPYC which have multiple physical cores providing high-level performance.

  • Storage servers

Similar to compute rack servers, storage servers are mounted in the racks. Storage servers have a high density of storage capacity, such as hard disks and SSDs. They place less emphasis on processing power than compute servers, so they typically use less RAM and lower-performance processors. Storage infrastructure is discussed in more detail later in this proposal.

  • Switches

Switches act like a hub which connects different equipment such as servers in the rack with other servers or racks in the data centre. They are an integral part of the networking infrastructure.

4.1.2 Rack enclosures

Selecting a rack for a data centre requires consideration of criteria such as dimensions, design, capacity and material. According to [19], racks are available in three major types: open-frame racks, rack enclosures or cabinets, and wall-mount racks (Figure 11).

Figure 11.

42 U rack enclosure or cabinet [20].

Rack enclosures, or cabinets, are racks with four posts, doors and panels on the sides. Depending on the design and manufacturer, the side panels can be removed to offer maximum flexibility. Among the most distinctive features of rack enclosures are airflow management, security, cable management and power distribution. These racks are ideal for use cases with heavier equipment, hotter equipment and higher wattages per rack [19]. The doors on the front and back of the rack are ventilated for better airflow. Additionally, doors provide some level of security; most rack enclosures come with lockable doors, which add a rack-level layer of protection. Rack enclosures also provide a means of installing dedicated power distribution units (PDUs) for the rack. The PDUs in rack enclosures are installed at the back or on the side, so they provide power without congesting the space inside the rack.

The size of the rack depends on many attributes. Some of these include:

  • Width and depth of equipment used in rack

  • Total weight of the IT and non-IT equipment (load rating)

  • Number of cables entering the rack

  • Rack units (RU) occupied

Most equipment used in racks is standardised to a width of 482.6 mm, or 19 inches. This 19-inch standard was established by the Electronic Industries Alliance (EIA) [18]. In racks, the usable vertical space is measured in rack units, where one rack unit equals 1.75 inches in height. Although deeper equipment and higher cable densities drive the need for a bigger rack, the most widely used rack dimension is 42 U tall, 600 mm wide and 1070 mm deep.
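
Since rack capacity is specified in rack units, the small sketch below converts a 42 U rack into its equivalent mounting height and estimates how many 2 U servers it can hold once a few units are reserved for other equipment. The number of reserved units is an assumption for illustration only.

```python
# Rack unit arithmetic for a standard 42 U enclosure.

RACK_UNIT_INCHES = 1.75      # 1 U = 1.75 inches of vertical mounting space
rack_units = 42
server_size_u = 2            # 2 U rack servers, as proposed for this design
reserved_u = 4               # assumed space kept for a switch, PDUs and cable managers

usable_height_in = rack_units * RACK_UNIT_INCHES
servers_per_rack = (rack_units - reserved_u) // server_size_u

print(f"42 U of mounting space is {usable_height_in:.1f} inches "
      f"({usable_height_in * 2.54:.0f} cm) tall")
print(f"With {reserved_u} U reserved, the rack holds up to "
      f"{servers_per_rack} x {server_size_u} U servers")
```

Under these assumptions a 42 U enclosure offers about 73.5 inches of mounting height and room for roughly nineteen 2 U servers per rack.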

Depending on the equipment mounted inside the rack, the rack can be considered a server rack or a networking rack. In comparison with server racks, network racks are much wider as they need additional room for cabling.

4.1.3 Justification

Based on the three types of racks, rack enclosures or cabinets will be used across the data centre. Since the data centre is going to be newly built, wall-mount racks can be avoided because there is enough floor space for the planned capacity. Compared with a wall-mount rack, the other two types allow more equipment to be racked in a given floor space. While open-frame racks offer many features at a much lower cost than rack enclosures, features such as better airflow control and better security are too important to be overlooked. Open-frame racks offer very little control over airflow. In addition, the side panels of rack enclosures prevent unrestricted hot air from flowing inside the rack and heating up the equipment unnecessarily. According to [21], between 30 and 55% of a data centre’s energy consumption goes into powering its cooling and ventilation systems, so it is important that the racks chosen can lower the overall cost of cooling as much as possible. In general, low-cost racks such as open-frame racks significantly increase the time it takes to complete rack-based work, due to inefficiencies in areas such as cable management and mounting [18]. In [22], a decision support model has been proposed for using liquid-based cooling to measure and assess the waste heat resource available from retrofits within the High-Performance Computing (HPC) and data centre (DC) industry (Figure 12).

Figure 12.

Top-of-rack vs. end-of-row architecture [23].

As the data centre will be using top-of-rack switching, the use of dedicated networking racks will be limited. Top-of-rack switching architecture has been chosen for this data centre because it provides better cabling, future-proofing with emerging standards and better support for multi-core servers by offering more bandwidth with low latency [24, 25]. Top-of-rack architecture considerably reduces the number of cables that must leave each rack. Therefore, the size of racks in the data centre will be consistent.
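
The cabling benefit of top-of-rack switching can be seen with a back-of-the-envelope count: with top-of-rack, server cables stay inside the rack and only a handful of switch uplinks leave it, whereas with end-of-row every server cable runs to a shared switch at the end of the row. The rack, server and uplink counts below are assumptions for illustration.

```python
# Rough cable-run comparison: top-of-rack (ToR) vs. end-of-row (EoR).

racks_per_row = 10          # assumed racks in one row
servers_per_rack = 19       # assumed 2 U servers per 42 U rack
uplinks_per_rack = 2        # assumed redundant uplinks from each ToR switch

# ToR: server cables stay inside the rack; only switch uplinks leave the rack.
tor_inter_rack_cables = racks_per_row * uplinks_per_rack

# EoR: every server cable runs out of its rack to the end-of-row switch.
eor_inter_rack_cables = racks_per_row * servers_per_rack

print(f"Cables leaving the racks per row (ToR): {tor_inter_rack_cables}")
print(f"Cables leaving the racks per row (EoR): {eor_inter_rack_cables}")
```

Under these assumptions the top-of-rack design reduces the inter-rack cable runs per row by roughly an order of magnitude, which is the cabling advantage referred to above.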

Based on the attributes considered, the data centre will use standard racks that are 42 U tall, 600 mm wide and 1070 mm deep. Most servers mounted in the racks will be 2 U. A 2 U server offers more advantages than a smaller 1 U server or an oversized 5 U server: due to their limited physical size, 1 U servers are prone to heating issues, while 5 U servers are more powerful but more expensive and less cost-effective. Therefore, 2 U servers offer a compromise between performance and cooling [26]. When standard equipment is used, oversizing the data centre is not necessary. 42 U tall racks also provide several additional benefits [18]:

  • Cheaper than taller racks

  • No need for a ladder to reach all positions of the rack

  • Less likely to interfere with overhead equipment such as fire suppression sprinklers

In conclusion, 42 U rack enclosures provide better features and are more suitable to be used in this data centre.

4.2 Storage infrastructure

In modern data centres, storage is becoming a highly complex component, with ever-increasing demands to store more data. Storage infrastructure for a data centre includes architectures and hardware equipment such as hard disks, SSDs and so on. Storage infrastructure in a data centre is tightly coupled with networking for accessibility and delivery. Today, there are two main challenges for high-performance storage systems: capacity and performance [27].

Capacity: Usage of computers, Internet of things (IoT) devices, mobile phones and other digital equipment has created a high demand for data storage. Data storage is increasing at a rapid pace every day. With the advancements in technologies such as image quality, average file sizes have risen considerably. As a data centre, the facility needs to have a storage infrastructure that has the capacity to meet these demands while offering the best performance possible.

Performance: Data centres need to focus on storage performance regardless of the capacity requirements. It is vital for the storage infrastructure to be scalable and highly available. When storing hundreds of terabytes of data, unoptimised and poorly designed infrastructure can lower the performance of the overall data centre, since the stored data is consumed by other areas such as compute. The storage infrastructure in the data centre must be able to handle these requirements while overcoming the challenges above. Traditionally, data centres use three popular storage solutions: direct-attached storage (DAS), network-attached storage (NAS) and storage area networks (SAN) [27] (Figure 13).
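
To give a feel for the capacity side of this challenge, the sketch below estimates raw storage demand for the food delivery platform from per-user data and growth assumptions. All figures (data per user, growth rate, replication factor) are illustrative assumptions, not requirements stated by the company.

```python
# Back-of-the-envelope storage capacity estimate (all inputs are assumptions).

active_users = 5_000_000          # current active user base
data_per_user_gb = 0.05           # assumed ~50 MB per user (profile, orders, chat history)
annual_user_growth = 0.30         # assumed 30% user growth per year
replication_factor = 3            # assumed copies kept for protection and recovery
years = 3

for year in range(years + 1):
    users = active_users * (1 + annual_user_growth) ** year
    raw_tb = users * data_per_user_gb / 1000
    provisioned_tb = raw_tb * replication_factor
    print(f"Year {year}: {users / 1e6:.1f}M users -> "
          f"{raw_tb:.0f} TB raw, {provisioned_tb:.0f} TB provisioned")
```

Even with these modest assumptions, the provisioned capacity runs into hundreds of terabytes within a few years, which is why the chosen storage architecture must scale without degrading performance.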

Figure 13.

Storage area network [28].

4.2.1 Storage area network

A Storage Area Network (SAN) is a dedicated network consisting of multiple storage devices; it presents a pool of block-level storage resources. A SAN provides a higher level of management through multiple servers that handle data access and storage management [29]. Additionally, a SAN uses high-speed cabling and dedicated networking equipment such as switches. Modern SANs are based on Fibre Channel, which can deliver high bandwidth and throughput with data speeds of up to 16 Gb per second. With the reduction in the cost of Solid State Drives (SSDs), a SAN can consist of SSD arrays, which offer far more I/O performance than Hard Disk Drives (HDDs). Although a SAN is complex to deploy and manage, it is highly scalable and available. Since a SAN runs on its own dedicated network, it does not suffer from the shared-bandwidth and network-congestion issues of network-attached storage (NAS) solutions (Figure 14).

Figure 14.

SAN component layers [30].

A SAN consists of various components which can be grouped into 3 main categories [30]. These categories are Host components, Fabric components and Storage components.

  • Host components

    These components are located in the compute servers or any other type of server accessing the SAN. Compute servers (hosts) use a host bus adapter (HBA), which has a fabric port that enables communication between the server (host) and the SAN switches.

  • Fabric components

    Fabric components include the switches, cables and communication protocols [30]. According to the SAN topology, the switches used will be Fibre Channel (FC) switches, which provide 64-128 ports per switch and have built-in fault tolerance. Since this SAN uses FC, the majority of the cables will be fibre-optic cables, which provide higher bandwidth and data speeds. In addition, the fabric components define the communication protocol. For this SAN, FC is used as the protocol, and based on that, a switched-fabric topology is used.

  • Storage components

    The fundamental parts of any SAN are the storage components, that is, the storage arrays. Storage arrays contain storage processors that communicate with the disk arrays. In this proposed data centre’s storage infrastructure, the SAN will use SSD disk arrays; SSDs are among the fastest storage media available today (Figure 15).

Figure 15.

Core-edge SAN topology [31].

The SAN will use a Core-Edge topology based on switched Fibre Channel. The two most important traits of the Core-Edge topology are the resiliency and performance it provides. In this topology, two or more core switches interconnect two or more edge switches; edge switches connect the servers or disk arrays to the core switches. In addition, the use of this topology in the SAN encourages a balance between usable ports and dedicated inter-switch communication [31].

4.2.2 Justification

Based on the comparisons made above, Meza’s new data centre will use a Storage Area Network (SAN). The food delivery company’s active user base is growing, and it requires a scalable storage solution. Therefore, direct-attached storage (DAS), which offers no scalability, cannot be chosen. While NAS is cheaper and easier to maintain, SAN offers better performance; for a large organisation and data centre, SAN is ideal. Another key factor is that SAN works well with virtualisation [32], a popular technology that is heavily used in data centres today. Other benefits of SAN include improved storage utilisation, better data protection and recovery and the elimination of network bottlenecks [33].

A key difference in how data is stored is that SAN uses block-level storage, while NAS uses file-level storage. The biggest advantage of block-level storage is that it offers better access and control privileges. This is critically important, since the food delivery company already has 5 million users and easier management of users’ data is a key business requirement.

Figure 16.

SSDs have higher read and write speeds over HDDs [34].

As for the SAN technology, Meza will choose Fibre Channel (FC). The key factor in this decision is that FC provides significantly better performance and reliability. For a growing base of more than 5 million active users, performance and reliability are crucial, and it is possible to build a storage network of thousands of nodes without affecting throughput and latency. In addition, the SAN will use arrays of SSDs instead of HDDs (Figure 16). SSDs provide a significant increase in speed, and the price difference between the two has narrowed over the past few years [34].

The topology used for the SAN infrastructure is Core-Edge. According to [35], SAN designs should always use two isolated fabrics for high availability. Since this data centre is a tier 4 facility, high availability and resiliency are crucial. One reason Core-Edge FC is selected is that point-to-point or FC-AL (arbitrated loop) topologies do not provide high availability: if one link fails, the entire storage network becomes unavailable. Finally, Core-Edge supports millions of nodes, offering a high level of scalability [31]. Scalability in storage is imperative, as capacity needs grow continuously.
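
The high-availability argument for two isolated fabrics can be made concrete with a simple probability sketch: if each fabric is independently available some fraction of the time, the chance that both are down simultaneously is the product of their individual failure probabilities. The per-fabric availability below is an assumed figure, and the calculation assumes fabric failures are independent.

```python
# Availability of a dual-fabric SAN vs. a single fabric (assumed per-fabric availability).

fabric_availability = 0.999                  # assumed availability of one isolated fabric
single_unavailable = 1 - fabric_availability
dual_unavailable = single_unavailable ** 2   # both isolated fabrics down at once

print(f"Single fabric availability: {fabric_availability:.4%}")
print(f"Dual isolated fabrics:      {1 - dual_unavailable:.6%}")
```

Under these assumptions, two isolated fabrics raise storage-path availability from three nines to roughly six nines, which is why dual fabrics are recommended for a tier 4 design.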

5. Conclusion

We can conclude by saying that the world of IT is constantly growing, and the demand for innovative and better solutions will never stop. The solutions and equipment chosen for this task have been selected with future-proofing in mind. From security to smart execution, the planned data centre has been carefully considered. Scalability, CO2 reduction, system resilience, sustainability, and the application of machine learning and other emerging technologies are important considerations for the data centre design. Moreover, the colocation system allows clients to locate their data by renting space in the data centre and choosing the equipment. If carried out with a strong focus, the proposed design can ensure that every requirement of the food-ordering system is met.

Conflict of interest

The authors declare no conflict of interest.

References

  1. Colocation America. Data Center Standards (Tiers I-IV). 2015. Available from: https://www.colocationamerica.com/data-center/tier-standards-overview.htm
  2. CtrlS. Significance of Tier 4 Data Center. 2014. Available from: https://www.ctrls.in/blog/significance-tier-4-data-center/
  3. Greengard S. Data Center Tiers: Formulating a Strategy. 2019. Available from: https://www.datamation.com/data-center/data-center-tiers.html
  4. Impact. Tier IV Data Centers. 2009. Available from: https://www.impactmybiz.com/blog/blog-why-you-need-a-tier-iv-4-data-center/
  5. WHOA.com. Tier IV Data Centers. 2017. Available from: https://www.whoa.com/data-centers/
  6. DCNewsAsia. Manageability Top Concern for Data Center Professionals. 2016. Available from: https://datacenternews.asia/story/manageability-top-concern-data-center-professionals
  7. Sadri AA, Rahmani AM, Saberikamarposhti M, Hosseinzadeh M. Fog data management: A vision, challenges, and future directions. Journal of Network and Computer Applications. 2021;174:1-24
  8. Intel. Intel® Data Center Manager. 2020. Available from: https://www.intel.com/content/www/us/en/software/intel-dcm-product-detail.html
  9. Gartner. Data Center Infrastructure Management (DCIM). 2020. Available from: https://www.gartner.com/en/information-technology/glossary/data-center-infrastructure-management-dcim
  10. Javadzadeh G, Rahmani AM, Kamarposhti MS. Mathematical model for the scheduling of real-time applications in IoT using dew computing. The Journal of Supercomputing. 2022;78:7464-7488
  11. CXtec. Just How Manageable is Your Data Center?. 2020. Available from: https://www.cxtec.com/resources/blog/just-how-manageable-is-your-data-center/
  12. Rasmussen N. Determining Total Cost of Ownership for Data Center and Network Room Infrastructure. 2015. Available from: https://download.schneider-electric.com/files?p_File_Name=CMRP-5T9PQG_R4_EN.pdf
  13. Rasmussen N. Avoiding Costs from Oversizing Data Center and Network Room Infrastructure. 2015. Available from: https://download.schneider-electric.com/files?p_File_Name=SADE-5TNNEP_R7_EN.pdf
  14. Cyberjaya Malaysia. Available from: https://www.cyberjayamalaysia.com.my/community/overview
  15. Richard. Essential Information about Cyberjaya - Malaysia’s Technology and Innovation Hub. 2019
  16. Hussein Y, Alrashdan M. Secure payment with QR technology on university campus. Journal of Computer Science & Computational Mathematics. 2022;12:31-34
  17. Facebook. Opening our Newest Data Center in Los Lunas, New Mexico. 2019. Available from: https://engineering.fb.com/data-center-engineering/los-lunas-data-center/
  18. Pearl H, Wei Z. How to Choose an IT Rack. 2015. Available from: https://download.schneider-electric.com/files?p_Doc_Ref=SPD_VAVR-9G4MYQ_EN
  19. Tripp Lite. Rack Basics: Everything You Need to Know Before You Equip Your Data Center. 2018. Available from: https://www.anixter.com/content/dam/Suppliers/Tripp%20Lite/White%20Papers/Rack-Basics-White-Paper-EN.pdf
  20. Tripp Lite. 42U SmartRack Standard-Depth Rack Enclosure Cabinet with Doors, Side Panels & Shock Pallet Shipping. 2020. Available from: https://www.tripplite.com/42u-smartrack-standard-depth-rack-enclosure-cabinet-doors-side-panels-shock-pallet-shipping~SR42UBSP1
  21. DataSpan. Data Center Cooling Costs. 2019. Available from: https://www.dataspan.com/blog/data-center-cooling-costs/
  22. Ljungdahl V, Jradi M, Veje C. A decision support model for waste heat recovery systems design in data center and high-performance computing clusters utilizing liquid cooling and phase change materials. Applied Thermal Engineering. 2022;201:1-10
  23. Parés C. Top of the Rack vs End of The Row. 2019. Available from: https://blogs.salleurl.edu/en/top-rack-vs-end-row
  24. Juniper Networks. Next Steps Toward 10 Gigabit Ethernet Top-of-Rack Networking. 2016. Available from: https://www.juniper.net/us/en/local/pdf/whitepapers/2000508-en.pdf
  25. Hussein YS. Impact of applying channel estimation with different levels of DC-bias on the performance of visible light communication. Journal of Optoelectronics Laser. 2021;40
  26. Thinkmate. 2U Rack Server. 2017. Available from: https://www.thinkmate.com/inside/articles/2u-rack-server
  27. Scala Storage. Scala Storage Scale-Out Clustered Storage White Paper. 2018. Available from: http://www.scalastorage.com/pdf/White_Paper.pdf
  28. Lee G. Storage Network. 2014. Available from: https://www.sciencedirect.com/topics/computer-science/storage-network
  29. RedHat. What is Network-Attached Storage?. 2020. Available from: https://www.redhat.com/en/topics/data-storage/network-attached-storage
  30. VMware. SAN Conceptual and Design Basics. 2016. Available from: https://www.vmware.com/pdf/esx_san_cfg_technote.pdf
  31. Gençay E. Configuration Checking and Design Optimization of Storage Area Networks. 2009. Available from: https://www.researchgate.net/publication/314245428_Configuration_Checking_and_Design_Optimization_of_Storage_Area_Networks
  32. Bauer R. What’s the Diff: NAS vs SAN. 2018. Available from: https://www.backblaze.com/blog/whats-the-diff-nas-vs-san/
  33. Robb D. Storage Area Networks in the Enterprise. 2018. Available from: https://www.enterprisestorageforum.com/storage-networking/storage-area-networks-in-the-enterprise.html
  34. Rubens P. SSD vs. HDD Speed. 2019. Available from: https://www.enterprisestorageforum.com/storage-hardware/ssd-vs-hdd-speed.html
  35. Singh S. Core-Edge and Collapse-Core SAN Topologies. 2017. Available from: https://community.cisco.com/t5/data-center-documents/core-edge-and-collapse-core-san-topologies/ta-p/3149001
