Chapter

Data Centre Infrastructure:


Design and Performance
Yaseein Soubhi Hussein, Maen Alrashd, Ahmed Saeed Alabed
and Saleh Alomar

Abstract

The tremendous growth of e-commerce requires an increase in data centre capacity and reliability to deliver an appropriate quality of service. Optimisation of data centre design is regarded as a green technology that shows great promise for decreasing CO2 emissions. However, a large data centre requires enormous power consumption, because higher-capacity racks demand more powerful cooling systems, power supplies, protection and security; these factors can make a data centre costly and infeasible to operate. In this chapter, we present a tier 4 data centre design located in Cyberjaya, an optimal location in Malaysia. The main purpose of this design is to provide e-commerce services, especially food delivery, with high quality of service and feasibility. All data centre components have been carefully designed to provide various services, including top-level security, a colocation system, reliable data management and IT infrastructure management. Moreover, recommendations and justifications have been provided to ensure that the proposed design outperforms other data centres in terms of reliability, power efficiency and storage capacity. In conclusion, the analysis, synthesis and evaluation of each component of the proposed data centre are summarised.

Keywords: data centre, storage infrastructure, data centre infrastructure management (DCIM), security, scalability

1. Introduction

Meza is a home-grown data centre company that provides various services, including top-level security, reliable data management and IT infrastructure management. Meza is expected to build several data centres across Malaysia. A Malaysian food delivery application company has more than 5 million users, and the number of users is increasing day by day. The current infrastructure is insufficient to handle the vast amount of data processing, and users might face a poor experience due to longer response times from the server and slow processing. Therefore, the company has appointed Meza to construct a data centre to cater to its continued growth. The data centre will be required to process online food ordering and online payments, and to support customer relationship management that consolidates communication in one inbox and engages with clients.


This chapter proposes a data centre design with the essential components for this food delivery application company and analyses, synthesises and evaluates each component of the proposed data centre. Other data centre components, such as power usage effectiveness and efficiency, the cooling system and protection, are discussed in the chapter Data Centre Infrastructure Power Efficiency and Protection (Figure 1).

2. Analysis

2.1 Customer requirements

The first basic requirement that comes to mind when building a data centre is the set of customer requirements; the customers are an essential entity. This data centre proposal has been designed to accept enormous traffic loads, which means that many customers can order at the same time without facing the frustration that occurs when a system crashes due to overload. Furthermore, a seamless online chat function has been proposed, which operates from the time an order is placed until the order has been delivered. This feature will make customers more confident in using the delivery system, as they can raise any issues regarding payment, orders and so on. This is possible due to the newly proposed network infrastructure.
Moreover, because of the improved network infrastructure, orders can be grouped more efficiently and delivered more quickly. Lastly, the data centre is designed to support a transparent rating system that allows both customers and delivery riders to key in ratings, which encourages individuals to maintain a good reputation in order to earn perks and hence makes the system more trustworthy.

Figure 1.
APU data centre.


2.2 Data centre requirements

Moving on, there are a few requirements for a data centre to be robust such as:

• Availability/tier selection: To achieve high availability, Meza critically analysed the different data centre tiers. There are four tier levels. After comparing them, the company decided on tier 4, since the data centre serves a food delivery application with a huge number of customers. As stated by [1], a tier 4 data centre has an uptime of 99.995% per year and a completely redundant '2N+1' infrastructure, which serves the purposes of the food delivery application. It has an annual downtime of only 26.3 minutes, compared to 1.6 hours per year for tier 3.
Furthermore, tier 4 has been chosen for Meza because of its fault-tolerant design. As expressed by [2], fault tolerance is an essential part of the many benefits offered by a tier 4 data centre: it allows the site to sustain unplanned failures that would otherwise affect critical loads in the site's infrastructure. Additionally, if any distribution or capacity component fails, the computer equipment of a tier 4 data centre is not affected; the system responds automatically to avoid further disruption. Moreover, a tier 4 data centre has several distribution paths that can serve the site's computer equipment at the same time. The IT equipment is dual-powered and has additional backup. Lastly, it is also supported by [3] that fault tolerance is especially crucial for mission-critical applications and systems. This tier provides the highest level of protection, and tier 4 also provides protection against electricity outages for 96 hours (Figures 2 and 3).
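As a quick check on the availability figures quoted above, the annual downtime implied by an uptime percentage can be computed directly. The short sketch below is illustrative only; the tier 4 figure of 99.995% comes from the text, while the 99.982% value used for tier 3 is the commonly quoted figure and is an assumption here.

```python
# Convert an uptime percentage into expected annual downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def annual_downtime_minutes(uptime_percent: float) -> float:
    """Return the expected downtime per year, in minutes."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

# Tier 3 uptime (99.982%) is the commonly quoted value, assumed here.
for tier, uptime in [("Tier 3", 99.982), ("Tier 4", 99.995)]:
    downtime = annual_downtime_minutes(uptime)
    print(f"{tier}: {uptime}% uptime -> {downtime:.1f} min/year "
          f"({downtime / 60:.2f} h/year)")

# Tier 4: ~26.3 minutes per year, matching the figure cited above.
# Tier 3: ~94.6 minutes, roughly the 1.6 hours per year cited above.
```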

• Scalability: The food-ordering application company has more than 5 million active users, and the number is increasing with time. The planned data centre will offer continuous scalability and colocation facilities. This is the most critical aspect in constructing data centres, because the capability to expand and handle additional data or customers is necessary, and the architecture of the data centre may be compromised in the long run if scalability is not taken into account. Any future change to the data centre that requires more space, devices or other technical resources must be managed effectively without affecting the key existing data centre elements.

Figure 2.
Some features of different data centre tiers [4].

Figure 3.
Some features of different data centre tiers [5].

• Security: The data centre is required to store client information and process online payments and customer orders. The processes the data centre will handle most are payment and ordering, and payments must be processed in the most secure way. It is also critical that the proposed data centre meets the highest data protection standards. To guard against both internal and external threats, data security must include cyber security measures as well as physical security. A high-security data centre ensures data integrity and maintains customer trust. As a data centre hosts the information, software and facilities that companies use every day, organisations must ensure that they apply adequate data centre protection. A lack of effective data centre protection may lead to privacy breaches in which confidential business information is leaked or compromised.

• Manageability: Manageability in data centres concerns the responsibilities and processes associated with IT infrastructure management. A survey of 300 data centre professionals conducted by Tintri found that 49% of respondents identified manageability as their biggest concern [6]. As manageability is such a critical part of the data centre, with the authors in [7] categorising and evaluating fog data management, Meza has planned the data centre with manageability as one of its focus areas. Modern data centres rely on automation to improve manageability. Meza, with years of experience, has analysed the use case extensively and decided that the data centre will use Data Centre Infrastructure Management (DCIM) software (Figure 4).

According to [9], DCIM covers monitoring, measuring, managing and controlling data centre utilisation and the energy consumption of all IT-related equipment and facility infrastructure components. This equipment includes power distribution units, servers and network switches, to name a few.

Figure 4.
A commercial DCIM software developed by Intel [8].

A typical data centre carries a heavy workload, and that workload grows with the size of the facility. For the food delivery company with millions of users, the data centre would have a huge workload if managed manually, making it unrealistic to run effectively. DCIM performs tasks that would otherwise fall to data centre personnel. An important feature of DCIM is its real-time central dashboard [10], which displays information about critical systems gathered from sensors and equipment; data centre personnel are better informed about operations and more likely to predict and avoid the next outage. In addition, DCIM can handle tasks beyond day-to-day operations, such as change management. DCIM is therefore a critical piece of software for improving data centre manageability. Its use by Meza in this data centre will bring substantial benefits, resulting in less downtime and more robust manageability (Figure 5).
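To make the dashboard idea concrete, the sketch below shows a minimal, hypothetical DCIM-style check: it evaluates simulated rack sensor readings and raises alerts when temperature or power draw crosses a threshold. The rack names, thresholds and readings are illustrative assumptions, not values from any specific DCIM product.

```python
from dataclasses import dataclass

@dataclass
class RackReading:
    rack_id: str
    inlet_temp_c: float   # inlet air temperature reported by the rack sensor
    power_kw: float       # instantaneous power draw of the rack

# Illustrative thresholds; real values would come from the DCIM policy configuration.
MAX_INLET_TEMP_C = 27.0   # assumed recommended upper bound for inlet temperature
MAX_POWER_KW = 10.0       # assumed per-rack power budget

def check_rack(reading: RackReading) -> list[str]:
    """Return a list of alert messages for one rack reading."""
    alerts = []
    if reading.inlet_temp_c > MAX_INLET_TEMP_C:
        alerts.append(f"{reading.rack_id}: inlet temp {reading.inlet_temp_c:.1f} C "
                      f"exceeds {MAX_INLET_TEMP_C} C")
    if reading.power_kw > MAX_POWER_KW:
        alerts.append(f"{reading.rack_id}: power draw {reading.power_kw:.1f} kW "
                      f"exceeds {MAX_POWER_KW} kW budget")
    return alerts

# Simulated readings standing in for a real sensor feed.
readings = [
    RackReading("R01", 24.5, 8.2),
    RackReading("R02", 28.3, 9.1),   # too hot -> alert
    RackReading("R03", 25.0, 11.4),  # over power budget -> alert
]

for r in readings:
    for alert in check_rack(r):
        print("ALERT:", alert)
```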
Since cabling performance is a major factor in system outages, Meza will be using
cabling from providers that ensure their cables can sustain higher performance.
Meza hopes that using high-quality data centre fabric can reduce system outages and
increase the overall manageability of the data centre.

• Cost: All business organisations strive to achieve the best performance at the lowest possible cost. It is in the interest of both Meza and the food delivery company to bring costs down while meeting business requirements. Total Cost of Ownership (TCO) is an estimate that includes both building the data centre and operating it. For the food delivery company, it is necessary that the TCO of building and operating a data centre is lower than that of hosting their application on a public cloud such as Amazon Web Services (AWS). According to [12], the largest driver of cost is the unnecessary unabsorbed cost resulting from the oversizing of the infrastructure. Meza has therefore decided to deploy an adaptable physical infrastructure system, which reduces waste due to oversizing substantially; as a result, the total cost of ownership is reduced too (Figure 6).

As shown in Figure 6, a non-adaptable room capacity design (a) is sized for the full load from the beginning, whereas an adaptable physical infrastructure system (b) grows as the load increases.
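The following sketch illustrates the oversizing argument with purely hypothetical numbers: a non-adaptable build provisions the full design capacity on day one, while an adaptable build adds capacity in modules as the load grows, so far less installed capacity sits idle. The capacity, module size and growth profile are illustrative assumptions, not figures from the cited papers.

```python
# Hypothetical comparison of installed capacity vs. actual load over 10 years.
DESIGN_CAPACITY_KW = 1000          # assumed final design capacity
MODULE_KW = 250                    # adaptable build grows in 250 kW modules (assumption)
YEARS = 10

def load_kw(year: int) -> float:
    """Assumed linear load growth from 100 kW to the design capacity."""
    return 100 + (DESIGN_CAPACITY_KW - 100) * year / (YEARS - 1)

non_adaptable_idle = 0.0
adaptable_idle = 0.0
for year in range(YEARS):
    load = load_kw(year)
    # Non-adaptable: full capacity installed from day one.
    non_adaptable_idle += DESIGN_CAPACITY_KW - load
    # Adaptable: install just enough whole modules to cover the current load.
    modules = -(-load // MODULE_KW)          # ceiling division
    adaptable_idle += modules * MODULE_KW - load

print(f"Idle capacity-years, non-adaptable: {non_adaptable_idle:.0f} kW-years")
print(f"Idle capacity-years, adaptable:     {adaptable_idle:.0f} kW-years")
```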
In addition to this, Meza plans that with the use of Data Centre Infrastructure
Management (DCIM) software, the operating costs can be reduced. One of the
fundamental features of DCIM software is the use of automation across the board.
Automation reduces manual labour with situational awareness. For example,
resources such as energy can be increased during peak hours automatically instead of
having maximum performance all day long regardless of the load. Moreover, use of
DCIM software also allows data centre personnel to predict the life cycle of physical infrastructure equipment, so they can replace equipment before it becomes faulty without compromising additional equipment due to failure.

Figure 5.
Cabling contributes to a large number of system outages [11].

Figure 6.
Charts showing waste due to oversizing between non-adaptable (a) and adaptable (b) approaches [13].

2.3 Environment

The data centre will be located in Cyberjaya, a specialised information technology district near Kuala Lumpur, Malaysia. The location is around 30 minutes away from the Kuala Lumpur city centre as well as the Kuala Lumpur International Airport. Geographically, Malaysia is a stable region where the risk of natural disasters such as tsunamis and earthquakes is extremely low. Cyberjaya in particular sits above sea level throughout the year, so there is almost no chance of massive flooding. Cyberjaya is the core of the Multimedia Super Corridor (MSC Malaysia); MSC status guarantees world-class infrastructure for the IT industry and 99.9% guaranteed reliability in advanced telecommunication technologies. The rental rate starts from MYR 2.50 per square foot [14].
The environment is highly secure: it has a state-of-the-art CCTV system integrated with Malaysia's emergency response system, and police personnel monitor the CCTV footage around the clock with a quick response time for any emergency. These measures create a secure environment for the community (Figures 7-9).

Figure 7.
(NTT, n.d.).

Figure 8.
The environment of Cyberjaya [15].


Figure 9.
The proposed data centre floor plan.

3. Data centre design

3.1 Data centre floor plan

3.2 Floor plan justification

The data centre floor plan design (Figure 9) consists of eight unique components that are necessary for a data centre, including a surveillance room for monitoring physical security and creating daily reports and analytics. The components that have been included are:

• Electrical supply: it is used to provide electricity for the entire data centre.

• Cooling system: it has been placed to provide cooling for the server racks and to
avoid overheating when the computing resources are in use.

• Computing resources: they are responsible for handling all the processing power for the database and for supporting communication between the clients and the customers.

• Server racks: they provide the space allocation for the computing resources.


Figure 10.
Racks inside a data centre [17].

• Network infrastructure: it is responsible for smooth communication between individuals as well as for faster and more secure payments [16].

• Storage infrastructure: this is where all the information is stored, such as restaurant menu details, customer credentials and so on.

• Fire detection system: in case of any overheating caused by the computing resources or an electrical overload, the fire detection system will detect it early so that the necessary steps can be taken.

• Fire suppression system: in the event of any component catching fire, this system will be responsible for suppressing it.

Furthermore, the data centre has been equipped with an Uninterruptible Power Supply (UPS) backup battery and a diesel generator in case of a power failure, in order to achieve higher availability for the system. Lastly, state-of-the-art closed-circuit television (CCTV) has also been placed in the data centre cabinets to be monitored remotely by senior officials (Figure 10).

4. Data centre components

4.1 Racks

In a data centre, racks can be considered as the building blocks. Traditionally,


racks were mostly used for stacking IT equipment and saving floor space. However, racks in data centres today play a vital role in mounting heavy IT equipment and providing an organised environment for power distribution, airflow distribution for better cooling performance and cable management, among other features [18]. Data centres demand a rack infrastructure that can mount a variety of equipment, such as servers and switches. Therefore, it is important that the rack infrastructure can meet these requirements while offering sustainable performance.

4.1.1 Equipment in racks

The major equipment inside the rack will be the compute servers, storage serv-
ers and networking equipment such as switches. Different racks will have different
compositions of this equipment.

• Compute servers

The main compute resources in a data centre are the servers. Most of the racks will
be utilised for mounting rack servers for compute purposes. These servers are used
for compute-intensive tasks such as processing and database hosting. These servers
will be using enterprise-level processors such as Intel Xeon or AMD EPYC which have
multiple physical cores providing high-level performance.

• Storage servers

Similar to compute rack servers, storage servers are mounted in the racks. Storage servers have a high density of storage capacity, such as hard disks and SSDs. Storage servers place less emphasis on processing power than compute servers; therefore, they typically use much less RAM and less powerful processors. Storage infrastructure is discussed in more detail later in this chapter.

• Switches

Switches act as a central connection point, linking equipment such as servers in the rack with other servers or racks in the data centre. They are an integral part of the networking infrastructure.

4.1.2 Rack enclosures

Selecting a rack for a data centre requires consideration of criteria such as dimensions, design, capacity and material. According to [19], racks are available in three major types: open-frame racks, rack enclosures or cabinets, and wall-mount racks (Figure 11).
Rack enclosures or cabinets are four-post racks with doors and panels on the sides. Depending on the design and manufacturer, the side panels can be removed to offer maximum flexibility. Among the most distinctive features of rack enclosures are airflow management, security, cable management and power distribution. These racks are ideal for use cases that involve heavier equipment, hotter equipment and higher wattages per rack [19]. The doors on the front and back of the rack are ventilated for better airflow. Additionally, the doors provide a level of security: most rack enclosures come with lockable doors, which add an extra, rack-level layer of security. Rack enclosures also have a means of
providing dedicated power distribution units (PDUs) for the rack. The PDUs in rack enclosures are installed at the back or on the side, so they provide power without congesting the space inside the rack.

Figure 11.
42 U rack enclosure or cabinet [20].
The size of the rack depends on many attributes. Some of these include:

• Width and depth of equipment used in rack

• Total weight of the IT and non-IT equipment (load rating)

• Number of cables entering the rack

• Rack units (RU) occupied

Most equipment used in racks is standardised with a width of 482.6 mm, or 19 inches. This 19-inch standard was established by the Electronic Industries Alliance (EIA) [18]. In racks, the usable vertical space is measured in rack units; a rack unit is equal to 1.75 inches in height. Although deeper equipment and higher cable densities drive the need for a bigger rack size, the most widely used rack dimension is 42 U tall, 600 mm wide and 1070 mm deep.
Depending on the equipment mounted inside the rack, the rack can be considered a server rack or a networking rack. In comparison with server racks, network racks are much wider as they need additional room for cabling.
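As a worked example of these dimensions (using the standard figures above and the 2 U servers chosen later in this chapter), the sketch below converts rack units to physical height and estimates how many 2 U servers a 42 U rack could hold if a few units are reserved for switches and patch panels; the reserved-unit figure is an illustrative assumption.

```python
RACK_UNIT_INCHES = 1.75          # 1 U = 1.75 inches (EIA standard)
RACK_HEIGHT_U = 42               # standard rack height used in this design
SERVER_HEIGHT_U = 2              # 2 U rack servers
RESERVED_U = 4                   # assumed space kept for a ToR switch and patch panels

usable_height_in = RACK_HEIGHT_U * RACK_UNIT_INCHES
print(f"42 U of usable space is {usable_height_in:.1f} in "
      f"(~{usable_height_in * 2.54:.0f} cm) of vertical mounting height.")

servers_per_rack = (RACK_HEIGHT_U - RESERVED_U) // SERVER_HEIGHT_U
print(f"With {RESERVED_U} U reserved, one rack holds up to "
      f"{servers_per_rack} x {SERVER_HEIGHT_U} U servers.")
```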

4.1.3 Justification

Based on the three types of racks, rack enclosures or cabinets will be used across
the data centre. Since the data centre is going to be newly built, wall-mount racks can be avoided, because there is enough floor space inside the data centre for the planned capacity.

Figure 12.
Top-of-rack vs. end-of-row architecture [23].

Compared to a wall-mount rack, the other two rack types provide more
racking of equipment for a given floor space. While open-frame racks offer many features at a much lower cost than rack enclosures, features such as better airflow control and better security are too important to be overlooked. Open-frame racks offer very little control over airflow. In addition, the side panels of rack enclosures prevent unrestricted hot air from flowing inside the rack and heating up the equipment unnecessarily. According to [21], between 30 and 55% of a data centre's energy consumption goes into powering its cooling and ventilation systems. It is important that the racks chosen for the data centre lower the overall cost of cooling as much as possible. In general, low-cost racks such as open-frame racks have a significant effect on how much time it takes to complete rack-based work, due to inefficiencies in areas such as cable management or mounting [18]. In [22], a decision support model has been proposed for the use of liquid-based cooling to measure and assess the waste heat resource accessible from retrofits within the High Performance Computing (HPC) and data centre (DC) industry (Figure 12).
As the data centre will use top-of-rack switching, the use of dedicated networking racks will be limited. Top-of-rack switching architecture is chosen for this data centre because it provides better cabling, future-proofing with emerging standards and better support for multi-core servers by offering more bandwidth with low latency [24, 25]. Top-of-rack architecture considerably reduces the number of cables running back to a central networking rack. Therefore, the size of racks in the data centre will be consistent.
Based on the consideration of these attributes, the data centre will use standard racks of 42 U tall, 600 mm wide and 1070 mm deep. Most servers mounted in the racks will be 2 U. A 2 U server offers more advantages than a smaller 1 U server or an oversized 5 U server: due to its limited physical size, a 1 U server is prone to heating issues, while 5 U servers, although more powerful, are more expensive and less cost-effective. Therefore, 2 U servers offer a good compromise between performance and cooling [26]. When using standard equipment, oversizing of the data centre is not necessary. 42 U tall racks also provide several additional benefits [18]:


• Cheaper than taller racks

• No need for a ladder to reach all positions of the rack

• Less likely to interfere with overhead equipment such as fire suppression


sprinklers

In conclusion, 42 U rack enclosures provide better features and are more suitable
to be used in this data centre.

4.2 Storage infrastructure

In modern data centres, storage is becoming a highly complex component with increasing demands to store more and more data. Storage infrastructure for a data centre includes architectures and hardware equipment such as hard disks, SSDs and so on. Storage infrastructure in a data centre is tightly coupled with the networking for accessibility and delivery. In today's world, there are two challenges for high-performance storage systems: capacity and performance [27].
Capacity: The use of computers, Internet of Things (IoT) devices, mobile phones and other digital equipment has created a high demand for data storage, and the amount of stored data is increasing at a rapid pace every day. With advancements in technologies such as image quality, average file sizes have risen considerably. As a data centre, the facility needs a storage infrastructure with the capacity to meet these demands while offering the best performance possible.
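To give a feel for the capacity side, the sketch below estimates raw storage needs from the user base described earlier; the per-user footprint, growth rate and redundancy overhead are purely illustrative assumptions, not figures from the chapter.

```python
# Rough, illustrative capacity estimate for the food delivery platform.
USERS = 5_000_000               # active users mentioned in the chapter
DATA_PER_USER_GB = 0.05         # assumed ~50 MB of orders, chats and profile data per user
ANNUAL_GROWTH = 0.30            # assumed 30% yearly growth in users and data
REPLICATION_FACTOR = 3          # assumed 3x copies for protection and recovery
YEARS = 5

capacity_tb = USERS * DATA_PER_USER_GB / 1024
for year in range(1, YEARS + 1):
    capacity_tb *= 1 + ANNUAL_GROWTH
    raw_tb = capacity_tb * REPLICATION_FACTOR
    print(f"Year {year}: ~{capacity_tb:,.0f} TB logical, "
          f"~{raw_tb:,.0f} TB raw with replication")
```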
Performance: Data centres need to focus on the storage performance regardless of
the capacity requirements. It is vital for the storage infrastructure to be scalable and
highly available. While storing hundreds of terabytes of data, unoptimised and poorly designed infrastructure could lower the performance of the overall data centre, as that data is consumed by other areas such as compute. The storage infrastructure in the data centre must be able to handle these requirements while overcoming the challenges faced.
Traditionally, data centres use three popular storage solutions [27] (Figure 13).

Figure 13.
Storage area network [28].

4.2.1 Storage area network

A Storage Area Network (SAN) is a dedicated network consisting of multiple storage devices; it is a pool of block-level storage resources. A SAN provides a higher level of management, with multiple servers managing data access and storage management [29]. Additionally, a SAN uses high-speed cabling and dedicated networking equipment such as switches. Modern SANs are based on Fibre Channel, which can deliver high bandwidth and throughput with data speeds of up to 16 Gbps. With reductions in the price of Solid State Drives (SSDs), a SAN can consist of SSD arrays, which offer much higher I/O performance than Hard Disk Drives (HDDs). Although a SAN is complex to deploy and manage, it is highly scalable and available. Since a SAN runs on its own dedicated network, it does not suffer from the shared-bandwidth and network-congestion issues of network-attached storage (NAS) solutions (Figure 14).
A SAN consists of various components, which can be grouped into three main categories [30]: host components, fabric components and storage components.

• Host components

These components are located in the compute servers or any other type of server accessing the SAN. Compute servers (hosts) use a host bus adapter (HBA), which has a fabric port that enables communication between the server (host) and the SAN switches.

Figure 14.
SAN component layers [30].


• Fabric components

Fabric components include the switches, cables and communication protocols [30]. Given the chosen SAN topology, the switches used in the SAN will be Fibre Channel (FC) switches. These switches provide 64-128 ports per switch and have built-in fault tolerance. Since this SAN uses FC, the majority of the cables used in the SAN will be fibre optic cables, which provide higher bandwidth and data speeds. In addition, the fabric components define the communication protocol. For this SAN, FC is used as the protocol, and based on that, a switched fabric topology is used.

• Storage components

The fundamental parts of any SAN are the storage components, which are the storage arrays. Storage arrays contain storage processors that communicate with the disk arrays. In this proposed data centre's storage infrastructure, the SAN will use SSD disk arrays; SSDs are one of the fastest storage media available today (Figure 15).
The SAN will use a Core-Edge topology, which is based on switched fabric Fibre Channel. The two most important traits of the Core-Edge topology are the resiliency and performance it provides. In this topology, two or more core switches interconnect two or more edge switches; the edge switches connect servers or disk arrays to the core switches. In addition, the use of this topology in the SAN encourages a balance between usable ports and dedicated inter-switch communication [31].
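As a rough illustration of that port trade-off, the sketch below counts how many host- and storage-facing ports remain in a small core-edge fabric once some edge-switch ports are reserved for inter-switch links (ISLs) to the cores. The switch and link counts are illustrative assumptions within the 64-128 ports-per-switch range quoted above.

```python
# Illustrative core-edge port budget (all counts are assumptions).
CORE_SWITCHES = 2            # two cores, in line with the dual-fabric resiliency goal
EDGE_SWITCHES = 4
PORTS_PER_SWITCH = 64        # low end of the 64-128 ports quoted for FC switches
ISLS_PER_EDGE_PER_CORE = 2   # assumed ISLs from each edge switch to each core

isl_ports_per_edge = CORE_SWITCHES * ISLS_PER_EDGE_PER_CORE
usable_ports_per_edge = PORTS_PER_SWITCH - isl_ports_per_edge
total_usable_edge_ports = EDGE_SWITCHES * usable_ports_per_edge

# Core ports are consumed by the same ISLs arriving from the edge layer.
core_ports_used = EDGE_SWITCHES * ISLS_PER_EDGE_PER_CORE  # per core switch
print(f"Each edge switch: {usable_ports_per_edge} usable ports "
      f"({isl_ports_per_edge} reserved for ISLs)")
print(f"Fabric total: {total_usable_edge_ports} ports for hosts and storage arrays")
print(f"Each core switch uses {core_ports_used} of {PORTS_PER_SWITCH} ports for ISLs")
```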

Figure 15.
Core-edge SAN topology [31].


4.2.2 Justification

Based on the comparisons made above, Meza's new data centre will use a Storage Area Network (SAN). The food delivery company's active user base is growing, and it requires a scalable storage solution; therefore, DAS, which offers no scalability, cannot be chosen. While NAS is cheaper and easier to maintain, SAN offers better performance, and for a large organisation and data centre, SAN is ideal. Another key factor is that SAN works well with virtualisation [32], a technology that is heavily used in data centres today. Other benefits of SAN include improved storage utilisation, better data protection and recovery, and the elimination of network bottlenecks [33].
A key difference in how data is stored is that SAN uses block-level storage, while NAS uses file-level storage. The biggest advantage of block-level storage is that it offers better access and control privileges. This is critically important since the food delivery company already has 5 million users, and easier management of users' files is a key business requirement.
As for the SAN technology, Meza will choose Fibre Channel (FC). The key factor in this decision is that FC provides significantly better performance and reliability. For a growing base of 5 million active users, performance and reliability are crucial, and it is possible to build a storage network of thousands of nodes without affecting throughput and latency. In addition, the SAN will use arrays of SSDs instead of HDDs (Figure 16). SSDs provide a significant increase in speed, and the price difference between the two has narrowed over the past few years [34].
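To illustrate the speed argument, the sketch below compares how long a full sequential read of a hypothetical 10 TB dataset would take at assumed throughputs for a single HDD and a single SSD; the throughput figures are rough, illustrative values rather than numbers from [34].

```python
# Illustrative sequential-read comparison; dataset size and throughputs are assumptions.
DATASET_TB = 10
HDD_MB_PER_S = 180       # assumed sequential throughput of one enterprise HDD
SSD_MB_PER_S = 3000      # assumed sequential throughput of one NVMe SSD

dataset_mb = DATASET_TB * 1024 * 1024
for name, rate in [("HDD", HDD_MB_PER_S), ("SSD", SSD_MB_PER_S)]:
    hours = dataset_mb / rate / 3600
    print(f"{name}: full scan of {DATASET_TB} TB takes ~{hours:.1f} hours at {rate} MB/s")
```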
The topology used for the SAN infrastructure is Core-Edge. According to [35], SAN designs should always use two isolated fabrics for high availability. Since this data centre is a tier 4 data centre, high availability and resiliency are crucial. One of the reasons why Core-Edge FC is selected is that point-to-point or FC-AL topologies do not offer high availability: if one link fails, the entire storage network becomes unavailable. Finally, Core-Edge supports millions of nodes, offering a high level of scalability [31]. Scalability in storage is imperative, as storage needs grow continuously every day.

Figure 16.
SSDs have higher read and write speeds over HDDs [34].

5. Conclusion

We can conclude that the world of IT is constantly growing, and the demand for innovative and better solutions will never stop; the solutions and equipment chosen for this task must therefore remain viable in the future. From security to smart execution, the planned data centre has been carefully considered. Scalability, CO2 reduction, system resilience, sustainability and the application of machine learning and other emerging technologies are important considerations for the data centre design. Moreover, the colocation system allows clients to house their data by renting space in the data centre and choosing their equipment. If this design is carried out with a strong focus on these requirements, it can ensure that every requirement of the food ordering system is met.

Conflict of interest

The authors declare no conflict of interest.

Author details

Yaseein Soubhi Hussein1*, Maen Alrashd2, Ahmed Saeed Alabed1 and Saleh Alomar2

1 Computer Science and Information Systems Department, Ahmed Bin Mohammed


Military College, Qatar

2 Faculty of Science and Information Technology, Jadara University, Irbid, Jordan

*Address all correspondence to: [email protected]

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.

References

[1] Colocation America. Data Center Standards (Tiers I-IV). 2015. Available from: https://www.colocationamerica.com/data-center/tier-standards-overview.htm
[2] CtrlS. Significance of Tier 4 Data Center. 2014. Available from: https://www.ctrls.in/blog/significance-tier-4-data-center/
[3] Greengard S. Data Center Tiers: Formulating a Strategy. 2019. Available from: https://www.datamation.com/data-center/data-center-tiers.html
[4] Impact. Tier IV Data Centers. 2009. Available from: https://www.impactmybiz.com/blog/blog-why-you-need-a-tier-iv-4-data-center/
[5] WHOA.com. Tier IV Data Centers. 2017. Available from: https://www.whoa.com/data-centers/
[6] DCNewsAsia. Manageability Top Concern for Data Center Professionals. 2016. Available from: https://datacenternews.asia/story/manageability-top-concern-data-center-professionals
[7] Sadri AA, Rahmani AM, Saberikamarposhti M, Hosseinzadeh M. Fog data management: A vision, challenges, and future directions. Journal of Network and Computer Applications. 2021;174:1-24
[8] Intel. Intel® Data Center Manager. 2020. Available from: https://www.intel.com/content/www/us/en/software/intel-dcm-product-detail.html
[9] Gartner. Data Center Infrastructure Management (DCIM). 2020. Available from: https://www.gartner.com/en/information-technology/glossary/data-center-infrastructure-management-dcim
[10] Javadzadeh G, Rahmani AM, Kamarposhti MS. Mathematical model for the scheduling of real-time applications in IoT using dew computing. The Journal of Supercomputing. 2022;78:7464-7488
[11] CXtec. Just How Manageable is Your Data Center?. 2020. Available from: https://www.cxtec.com/resources/blog/just-how-manageable-is-your-data-center/
[12] Rasmussen N. Determining Total Cost of Ownership for Data Center and Network Room Infrastructure. 2015. Available from: https://download.schneider-electric.com/files?p_File_Name=CMRP-5T9PQG_R4_EN.pdf
[13] Rasmussen N. Avoiding Costs from Oversizing Data Center and Network Room Infrastructure. 2015. Available from: https://download.schneider-electric.com/files?p_File_Name=SADE-5TNNEP_R7_EN.pdf
[14] Malaysia C. Available from: https://www.cyberjayamalaysia.com.my/community/overview
[15] Richard. Essential Information about Cyberjaya - Malaysia's Technology and Innovation Hub. 2019
[16] Hussein Y, Alrashdan M. Secure payment with QR technology on university campus. Journal of Computer Science & Computational Mathematics. 2022;12:31-34
[17] Facebook. Opening our Newest Data Center in Los Lunas, New Mexico. 2019. Available from: https://engineering.fb.com/data-center-engineering/los-lunas-data-center/
[18] Pearl H, Wei Z. How to Choose an IT Rack. 2015. Available from: https://download.schneider-electric.com/files?p_Doc_Ref=SPD_VAVR-9G4MYQ_EN
[19] Tripp Lite. Rack Basics: Everything You Need to Know Before You Equip Your Data Center. 2018. Available from: https://www.anixter.com/content/dam/Suppliers/Tripp%20Lite/White%20Papers/Rack-Basics-White-Paper-EN.pdf
[20] Tripp Lite. 42U SmartRack Standard-Depth Rack Enclosure Cabinet with Doors, Side Panels & Shock Pallet Shipping. 2020. Available from: https://www.tripplite.com/42u-smartrack-standard-depth-rack-enclosure-cabinet-doors-side-panels-shock-pallet-shipping~SR42UBSP1
[21] DataSpan. Data Center Cooling Costs. 2019. Available from: https://www.dataspan.com/blog/data-center-cooling-costs/
[22] Ljungdahl V, Jradi M, Veje C. A decision support model for waste heat recovery systems design in data center and high-performance computing clusters utilizing liquid cooling and phase change materials. Applied Thermal Engineering. 2022;201:1-10
[23] Parés C. Top of the Rack vs End of The Row. 2019. Available from: https://blogs.salleurl.edu/en/top-rack-vs-end-row
[24] Juniper Networks. Next Steps Toward 10 Gigabit Ethernet Top-of-Rack Networking. 2016. Available from: https://www.juniper.net/us/en/local/pdf/whitepapers/2000508-en.pdf
[25] Hussein YS. Impact of applying channel estimation with different levels of DC-bias on the performance of visible light communication. Journal of Optoelectronics Laser. 2021;40
[26] Thinkmate. 2U Rack Server. 2017. Available from: https://www.thinkmate.com/inside/articles/2u-rack-server
[27] Scala Storage. Scala Storage Scale-Out Clustered Storage White Paper. 2018. Available from: http://www.scalastorage.com/pdf/White_Paper.pdf
[28] Lee G. Storage Network. 2014. Available from: https://www.sciencedirect.com/topics/computer-science/storage-network
[29] RedHat. What is Network-Attached Storage?. 2020. Available from: https://www.redhat.com/en/topics/data-storage/network-attached-storage
[30] VMware. SAN Conceptual and Design Basics. 2016. Available from: https://www.vmware.com/pdf/esx_san_cfg_technote.pdf
[31] Gençay E. Configuration Checking and Design Optimization of Storage Area Networks. 2009. Available from: https://www.researchgate.net/publication/314245428_Configuration_Checking_and_Design_Optimization_of_Storage_Area_Networks
[32] Bauer R. What's the Diff: NAS vs SAN. 2018. Available from: https://www.backblaze.com/blog/whats-the-diff-nas-vs-san/
[33] Robb D. Storage Area Networks in the Enterprise. 2018. Available from: https://www.enterprisestorageforum.com/storage-networking/storage-area-networks-in-the-enterprise.html
[34] Rubens P. SSD vs. HDD Speed. 2019. Available from: https://www.enterprisestorageforum.com/storage-hardware/ssd-vs-hdd-speed.html
[35] Singh S. Core-Edge and Collapse-Core SAN Topologies. 2017. Available from: https://community.cisco.com/t5/data-center-documents/core-edge-and-collapse-core-san-topologies/ta-p/3149001
