
Réf :

A.U : 2021-2022

UNIVERSITY OF SOUSSE

NATIONAL ENGINEERING SCHOOL OF SOUSSE

Graduation project report


Presented for the purpose of obtaining:

Applied Computer Engineering Degree


Speciality: INTELLIGENT SYSTEMS ENGINEERING

Design and Implementation of a DevSecPerfOps Pipeline for


a Reference Application

Realized by:
Khalil Jedda

Undertaken in:

Presented on 23/06/2022 in the presence of the jury:

Chair : Mr. Tarek Aroui


Examiner : Pr. Med Lassad Lammari
Company supervisor : Mr. Bassem Ben Khalifa
Pedagogical supervisor : Pr. Jamel Bel Hadj Tahar
Signatures of the supervisors

Pr. Jamel Bel Hadj Tahar

Mr. Bassem Ben Khalifa

Dedication

I dedicate this modest work to all those who have contributed to its accomplishment.
To my precious father, who has always supported me throughout my years of study. I hope
this modest work will be a testimony of my gratitude.
To my precious mother, who was always there for me and endured many struggles for my
sake, for all the love and tenderness she has given me and for all the values she taught me.
To my dear sisters and brothers, who have always known how to bring pride and joy to our
whole family.

Acknowledgements

Glorified is Allah and praised is He. Glorified is Allah the Most Great. Life is blissful.
Praise be to Allah, for I had the opportunity to become a better version of myself and found
guidance through life. Praise be to Allah for all the countless blessings at every step of life.

First, I would like to express my warmest thanks and gratitude to all those who gave me
the opportunity to carry out this end-of-studies project. I would like to express my special
gratitude to my supervisor, Mr. Bassem BEN KHALIFA, whose knowledge, guidance and
encouragement helped me to achieve my project. Thank you for your time and patience
during this internship.

Secondly, I was fortunate to work with my teacher and academic supervisor, Professor
Jamel Bel Hadj Tahar, to whom I would like to express my deepest gratitude and appreciation
for his follow-up and his enormous support. His feedback and support helped me overcome
many difficulties.

Finally, I warmly thank the members of the jury who did me the honor of accepting to
evaluate this work, all my teachers at ENISo and all the people who supported me until the
end.

Abstract

This project was carried out as part of the end-of-studies project aimed at obtaining the
engineering degree in applied computer science. In this context, the main idea is to develop a
CI/CD pipeline to automate manual tasks and facilitate the development process, as well as to
deploy a reference application and set up a continuous monitoring system.
Indeed, this pipeline automates the integration, build and testing of the application under
development, then deploys it in a production environment while guaranteeing a well-defined
level of security and performance. In addition, the application is monitored in real time at two
levels (performance and security) for the administrator.
Keywords: DevOps, CI/CD, Performance, Security, Monitoring, Docker, Kubernetes, Gitlab CI,
Keptn, Prometheus, Grafana, Jmeter, Snyk.

Résumé

This project was carried out as part of the end-of-studies project aimed at obtaining the
engineering degree in applied computer science. In this context, the main idea is to develop
a CI/CD pipeline to automate manual tasks and facilitate the development process, as well
as to deploy a reference application and set up a continuous monitoring system.
Indeed, this pipeline automates the integration, build and testing of the application under
development, then deploys it in a production environment while guaranteeing a well-defined
level of security and performance. In addition, the application is monitored in real time at
two levels (performance and security) for the administrator.
Keywords: DevOps, CI/CD, Performance, Security, Monitoring, Docker, Kubernetes,
Gitlab CI, Keptn, Prometheus, Grafana, Jmeter, Snyk

Contents

Abstract iv

Résumé v

Acronym xi

General Introduction 1

1 General project Context 3


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Presentation of the host organization . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Presentation of Altersis Performance . . . . . . . . . . . . . . . . . 3
1.2.2 Sectors of activity . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Subsidiaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Altersis Performance Values . . . . . . . . . . . . . . . . . . . . . 5
1.3 Project presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Project context . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Analysis of the existing solution . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6 Work methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.1 Agile methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6.2 Scrum methodology . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2 Requirements specification 10
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 System’s Actors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Functional requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Non-functional requirements . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 System of Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.6 Needs Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6.1 Configuration subsystem . . . . . . . . . . . . . . . . . . . . . . . 13
2.6.2 Continuous integration subsystem . . . . . . . . . . . . . . . . . . 15

2.6.3 Continuous deployment subsystem . . . . . . . . . . . . . . . . . . 18


2.6.4 Monitoring subsystem . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3 Architecture and Technologies 21


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Global concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.1 DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.2 Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.3 Continuous integration . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.4 Continuous delivery and Continuous deployment . . . . . . . . . . 23
3.2.5 Multi-Stage Delivery . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.6 Continuous monitoring . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Physical architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1 Virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.2 Kubernetes architecture . . . . . . . . . . . . . . . . . . . . . . . 26
3.4 Architecture proposed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.5 Technologies and work environment . . . . . . . . . . . . . . . . . . . . . 30
3.5.1 Choice of source code tool management . . . . . . . . . . . . . . . 30
3.5.2 Choice of model representation language . . . . . . . . . . . . . . 30
3.5.3 Choice of CI tools . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5.4 Choice of Continuous Delivery tool . . . . . . . . . . . . . . . . . 31
3.5.5 Istio tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5.6 Docker tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5.7 Helm tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.5.8 Choice of the orchestration system . . . . . . . . . . . . . . . . . . 34
3.5.9 Kubernetes manage tool . . . . . . . . . . . . . . . . . . . . . . . 35
3.5.10 Choice of security test tool . . . . . . . . . . . . . . . . . . . . . . 35
3.5.11 Choice of monitoring tools . . . . . . . . . . . . . . . . . . . . . . 36
3.5.12 Choice of visualization tool . . . . . . . . . . . . . . . . . . . . . 37
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4 Implementation 38
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2 Develop the reference application . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Setup the Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3.1 Setup the Kubernetes cluster . . . . . . . . . . . . . . . . . . . . . 39
4.3.2 Setup the load balancer . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3.3 Setup the NFS client . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3.4 Tools deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3.5 Install Istio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.4 Building the Continuous integration pipeline . . . . . . . . . . . . . . . 42


4.4.1 Gitlab Configuration . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.4.2 Docker image creation . . . . . . . . . . . . . . . . . . . . . . . . 43
4.4.3 CI pipeline steps . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.5 Prepare the multi-stage delivery workflows . . . . . . . . . . . . . . . . . 45
4.5.1 Create Keptn project . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.5.2 Integrate the performance test script . . . . . . . . . . . . . . . . . 46
4.5.3 Create application services . . . . . . . . . . . . . . . . . . . . . 47
4.5.4 Setup Prometheus monitoring . . . . . . . . . . . . . . . . . . . . 48
4.5.5 Setup the quality gate . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.6 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.6.1 Snyk Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.6.2 Setup Prometheus . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.6.3 Grafana Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

General conclusion 53
List of Figures

1.1 Altersis Performance Logo . . . . . . . . . . . . . . . . . . . . . . . . . . 3


1.2 Global organisation of ALTERSIS Holding [1] . . . . . . . . . . . . . . . 6
1.3 Agile Vs Waterfall [2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Scrum process [3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.1 Global use case diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 13


2.2 Use case diagram "Manage the configuration" . . . . . . . . . . . . . . . . 14
2.3 Use case diagram "Manage CI" . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Use case diagram "Manage CD" . . . . . . . . . . . . . . . . . . . . . . . 18
2.5 Use case diagram "Monitor the system" . . . . . . . . . . . . . . . . . . . 19

3.1 DevOps workflow [4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


3.2 CI/CD concept [5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Multi-Stage delivery [5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 VM architecture [6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.5 Kubernetes architecture [7] . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.6 Solution architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.7 Projects on Gitlab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.8 CI pipeline on Gitlab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.9 Docker [8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.1 Employee manager application . . . . . . . . . . . . . . . . . . . . . . . . 38


4.2 Cluster architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Nodes status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4 LoadBalancer pods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.5 NFS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.6 Keptn pods status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.7 Istio ingress-gateway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.8 Gitlab runner architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.9 Gitlab runner web interface . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.10 Gitlab variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.11 Frontend Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.12 Backend Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44


4.13 CI pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.14 Security tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.15 Keptn project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.16 GUI Jmeter interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.17 E-mail sent by jmeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.18 Application services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.19 Quality-Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.20 Staging tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.21 Snyk vulnerabilities dashboard . . . . . . . . . . . . . . . . . . . . . . . . 50
4.22 Front container vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . 50
4.23 Prometheus metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.24 Grafana dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Acronym

• CD: Continuous Deployment

• CI: Continuous Integration

• CLI: Command Line Interface

• CM: Continuous Monitoring

• ENISo: Ecole Nationale d’ingenieurs de Sousse

• HTTP: HyperText Transfer Protocol

• HTTPS: HyperText Transfer Protocol Secure

• JSON: JavaScript Object Notation

• K8s: Kubernetes

• NFS: Network File System

• SSH: Secure Shell

• SLI: Service Level Indicator

• SLO: Service Level Objective

• URL: Uniform Resource Locator

• VM: Virtual Machine

• YAML: YAML Ain't Markup Language

General Introduction

In the traditional conception of how software development works, separate teams write,
test, and deploy code, then maintain it throughout its lifecycle. Because one team's concerns
differ from those of the others, involving so many teams might lead to conflict. While
developers want to innovate and develop applications, the production team prioritizes
maintaining the stability of the computer system. Furthermore, everyone has their own
procedures and uses tools that rarely communicate with one another.
As a result, the relationships between these teams can lead to conflicts, which cause
delays in delivery and additional costs to the business. Faced with these challenges, companies
must always treat their customers as a core value to be successful. Therefore, they must offer
quality and accessible customer support at all times, retain their customers and attract new
ones.
To meet these goals, DevOps, a new approach that unifies the development and production
processes, has evolved. The goal of DevOps is to break down these boundaries and get
everyone on the same team from the outset. It is a method focused on getting the development
and operations sides to work together.
By aligning all of the information system's teams around a shared goal, it is feasible to
eliminate disputes between these numerous players, avoid communication delays, and improve
delivery times. Combining technology, process, and people is the goal of this alignment. It
encompasses the entire value creation, deployment, operation, and support lifecycle.
It is in this context that my end-of-studies project fits. In particular, we aimed to simplify,
facilitate and automate the integration and deployment of applications. This involves setting up
a DevOps platform including a continuous integration and continuous deployment pipeline,
accompanied by a monitoring panel.
This report is organized into four chapters that detail the progress of the project, as
follows.

• The first chapter, "General project context", is dedicated to presenting the host
organization, identifying the problem statement, studying the existing solution and
revealing the adopted work methodology.

• The second chapter, "Requirements specification", describes in detail the various
functional and non-functional requirements to which our system responds.

• The third chapter, "Architecture and Technologies", is dedicated to explaining the
basic concepts and specifying the physical architecture of the solution as well as its
logical workflow. Moreover, in this chapter we present the adopted technical choices.

• Finally, the fourth chapter, "Implementation", presents the realization of the project.
It exposes the implementation and development of the work, illustrated with screenshots.
The report ends with a general conclusion, which includes a summary of our work and
the envisaged prospects.
Chapter 1

General project Context

1.1 Introduction
This chapter presents an overview of the end-of-study internship, the host organization as
well as the existing solutions and finally the methodology adopted.

1.2 Presentation of the host organization

1.2.1 Presentation of Altersis Performance


ALTERSIS Performance [1] (formerly Scopteam, adhoc International, Valid'IT and Sword
Performance) is the leading IT Performance Consulting company in EMEA (Europe, Middle
East and Africa). All of our consultants are IT Performance specialists and we have conducted
more than 2000 projects for over 200 customers in the areas of Performance Engineering,
New Generation Monitoring, Application Performance Management and Performance Audit
and Optimization. We are part of the ALTERSIS Group and have offices in France, Switzerland
and North Africa (Tunisia and Morocco).

Figure 1.1: Altersis Performance Logo

1.2.2 Sectors of activity


With more than 300 test projects carried out for its customers each year, ALTERSIS
provides valuable skills at all levels: methodology, expertise in the tools used, and understanding
of technical environments and information system architectures.
• Performance Engineering: their mission is to make things work with value-added
technologies, third-party software and the Performance Studio. In fact, they design and
integrate adapted solutions while leveraging existing customer assets, open-source
and value-added technologies for building, testing and managing business-critical
applications.

• Continuous Quality Assurance and DevOps: they understand Quality Assurance as the
practice of proactively assuring functional and non-functional requirements across the
entire application life cycle. This is based on dedicated roles, workflows, tools and
artefacts for architecture QA, code QA and testing.
Continuous Quality Assurance is an optimized implementation of QA which maximizes
coverage and efficiency based on re-usability and automation. It is optimized by the
implementation of continuous integration, code QA, and unit, functional, security and
performance testing, supported by APM technologies and correctly designed workflows
and artifacts covering the whole application life cycle.

• Next Generation APM, Monitoring and Ops: in today's dynamic environments, the
redesign of IT Operations Management around application, cloud and mobile
management, with a key position for APM and Performance Engineering in increasingly
agile projects, requires dedicated leadership, organization and process structures.
Furthermore, IT management and executives face the challenge of leveraging and
rationalizing investments and reducing complexity in APM and related disciplines such
as testing, monitoring and service management.

• Training services: Altersis Performance's education services and courses are specifically
designed to transfer their expertise directly to customers' staff, allowing their IT
department to become best-in-class in performance engineering. With a well-trained
team, the client organization will be able to minimize application downtime and
maximize the IT department's resources.

1.2.3 Subsidiaries
Its aim is to develop, preserve and maintain a sense of belonging and team spirit. Cohesion
creates a strong internal dynamic between employees through social events: after-work
get-togethers, skiing trips, summer excursions, branch meetings, etc. The group has a range of
multi-specialists with three main components, covering the needs of major companies and
meeting its customers' requirements.
• High technology: a wide diversity of engineering sciences is represented within the
group: scientific and electronic calculations, research and development, embedded
software, etc., so many fields where its multi-specialists are ready to assist.

• Information systems: developing turnkey solutions is its core business. From assistance
with writing your specification to the deployment of your final application, it can assist
at each stage of your project, whichever platform you choose (SaaS, web site, mobile
application, thick client, etc.).

• Production IT: maintaining systems in operational condition, business recovery plans,
standardisation, automation and security, setting up procedures, user support, etc., so
many areas in which it can be an effective partner.

1.2.4 Altersis Performance Values


Since its creation, ALTERSIS has seen remarkable growth due to the quality of its services
and its employees, its ethics and its values, but also to the number of engineers who have
wanted to join it. Today, it is still its ambition to carry on growing its human resources by
continuing to recruit. The special nature of the ALTERSIS Group is based on sharing basic
human values, inspired by the moral code of Judo: Respect, Pride, Honesty, Cohesion, and
Commitment. The value placed on people is thus at the heart of its growth.

• RESPECT: This comes from an open dialogue with employees on the choice of
assignments, the geographical area where the business is active and the desired career
orientation, with supporting professional training.
For us, respect also comes from staying close to our employees and following them up:

– Induction programme
– Quarterly assignment monitoring interviews
– Professional and career interviews
– This monitoring never stops developing; its aim is to give our company a continuous
improvement dynamic.

• HONESTY: This is, above all, transparency in the way they operate, from making
contact with applicants through to monitoring our employees, including internal
organisation and projects in tune with the life of our company.

• COMMITMENT: Being committed is reflected in the strength of their desire, both in
their words and in their actions.

• PRIDE: We are proud to provide our employees and job applicants with recognised
human resources management that has a more human face. We are proud of our
employees and of the satisfaction they give customers through their talent and know-how.

The Figure 1.2 summarizes the organisation of the different components of ALTERSIS
Holding.

Figure 1.2: Global organisation of ALTERSIS Holding [1]

1.3 Project presentation

1.3.1 Project context


Our goal is to learn more about how software development teams employ automated
integration and deployment pipelines in the context of DevOps approaches. Automated
pipelines are widely employed in modern software development, and they allow for the faster
completion of complex installation and testing activities by reducing the amount of manual
labor required.

1.3.2 Problem statement


Putting an application into production is usually the last stage of a well-planned process
involving several teams, including the development team and the testing team. As a result,
the three stages of development, testing, and production are regarded as separate. Because
the interests of each team are different, involving so many teams might lead to conflicts.
While developers innovate and build applications, the production team prioritizes maintaining
the reliability of the computer system. Furthermore, each team follows its own procedures and
employs tools that rarely communicate with one another. As a result, the ties between these
groups may be strained. These conflicts result in delivery delays and other costs that the
organization had not budgeted for, and customer satisfaction may suffer as a result. In order
to prevent all of the aforementioned issues, a new approach that unifies the development and
production processes is required.

1.4 Analysis of the existing solution


To manage the life cycle of each project before its deployment, the development team had
a very traditional view of how software development works. Indeed, as soon as the product
owner defines the requirements and the desired functionalities of the application, the developers
start writing the code, then they focus on writing the unit tests. After successfully completing
the unit test execution step, this team is required to perform code quality checks, each time
using dedicated tools. These controls make it possible to provide an executable jar linked
to a well-determined module, which is subsequently used as an input to the following steps.
At this point, the tester's role is to validate application modules through integration tests
applied to each module.
The analysis of the existing solution revealed the following shortcomings:

• Delay in the process: on the one hand, the developer suffers a lot of pressure from
business people, project managers, etc. On the other hand, the tester is in a position
where he has to make sure that the software does not break and that he has enough time
to test it. Thus, they naturally introduce a delay into this process. It may take days or
even a few more weeks for this process to complete, a time that can be drastically
reduced if good practices are applied.

• Lack of deployment strategy: despite the fact that many modules are ready to be
delivered, so far there is no deployment strategy for them.

• Lack of general visibility of the state of the system: currently, there is no means
to ensure that the various systems involved in the development and deployment of
applications are functional.

1.5 Proposed Solution


The overall goal of our project is to create a pipeline that enables and automates the
process of continuous integration and continuous deployment of an application, comprising
operations such as testing, deployment, and dynamic resource control using several DevOps
technologies.
First and foremost, we want everyone to interact in an interdisciplinary fashion. Second,
we believe in consistent and timely delivery. A Continuous Integration (CI) chain, mostly
made up of developer and tester jobs, is established in the first stage of the process. In the
second part of the process, a Continuous Deployment (CD) chain is put in place, allowing the
passage to production.

1.6 Work methodology



1.6.1 Agile methodology


Project management in web development and IT is a particularly important issue because
it influences both the constitution of teams and the way in which the project will be managed.
We consider two methodologies: Waterfall and Agile.
The Waterfall methodology is considered to be traditional. It works, as its name suggests, as
a cascade system, which means that one sequence is developed after another until the final
product is produced and delivered to the customer. On the other hand, the Agile methodology
can be defined as a new approach based on adapted and flexible iterative development, where
requirements and solutions evolve throughout the project.
The Agile methodology uses best engineering practices, allowing the fast delivery of a
high-quality product and reducing the error rate, since the needs of the client are at the center
of the project's priorities [9].
The values of Agile are:

• Accept the changes.

• Collaborate with the client.

• Focus on operational functionalities.

• Focus on the interactions of individuals.

The Figure 1.3 shows the differences between the both methodologies.

Figure 1.3: Agile Vs Waterfall [2]

1.6.2 Scrum methodology


The Scrum method is a sub-category of Agile methodology. This is the most used approach
for software development according to the annual report produced by "VersionOne" which
says 56 per cent of agile teams use Scrum [10]. Indeed, it is used to develop complex software
products based on iterative and incremental processes.

Therefore, we have opted for the Scrum approach in order to ensure rapid delivery of high
quality products and flexibility in changing environments.
The Scrum process defines roles (Scrum Master, Product Owner and development team) and
dictates the reiteration of time-limited production sprints, at the end of which functional
software increments are delivered. It also defines artefacts (Product Backlog, Sprint Backlog)
as well as events (Sprint Planning, Daily Meeting, Sprint Review and Sprint Retrospective).
The Figure 1.4 illustrates the Scrum development flow process.

Figure 1.4: Scrum process [3]

A Scrum method generally brings together three main players:

• Product Owner: represents the customer.

• Scrum Master: the person (or persons) responsible for managing the process and
supporting the team.

• Team: made up of developers, engineers, testers, designers, etc.

1.7 Conclusion
During this first chapter, we presented the general context of the project.
First, we started by presenting the host organization. Second, we exposed the internship
context. Then, we developed a study of the existing solution in order to define the proposed
solution. The last part was devoted to the choice of the methodology for managing our project.
In the next chapter, we will discuss the requirements specification.
Chapter 2

Requirements specification

2.1 Introduction
This chapter is mainly dedicated to analyzing the requirements of our project. In preparing
this chapter, we participated in the various meetings organized within the project team, which
allowed us to capture the functional and non-functional requirements. We also identified the
use cases to develop the overall use case diagram.

2.2 System’s Actors


In this section, we identify the main actors of the system and describe the functional and
non-functional requirements.
An actor specifies a role played by a user or any other system that interacts with the
considered system. As far as our system is concerned, we have identified four actors.

• Developer: a user of Git who can deposit or retrieve code on the repository. He is
the first actor to trigger the pipeline.

• Project manager: the user concerned by the monitoring. He has a monitoring
dashboard which allows him to monitor the state of the application and the cluster.

• Tester: an actor who intervenes in the software testing phase. He is in charge of
writing the performance test script.

• System Admin: he intervenes in the system configuration phase and ensures the
installation of the pipeline.

Our system also includes some components as actors:

• Gitlab: a code version manager that stores the code and exposes it to different
developers.

• Kubernetes: a container orchestration engine.


• GitLab CI: a CI tool integrated with GitLab.

• Keptn: an automation tool for SLO-driven multi-stage delivery.

• Prometheus: a monitoring tool.

2.3 Functional requirements


The functional analysis consists of describing the functionalities offered by the product to
meet user needs. We present the features that impact the core of our system, such as:

• Automate the triggering of a build on Gitlab CI following a code deposit on Git.

• Collect artifacts generated after a successful build.

• Automatically apply a unit test to the generated artifact.

• Automate the triggering of Keptn to deploy the application in the Staging environment
of the cluster.

• Automatically run the performance test script in the Staging environment.

• Jmeter sends an e-mail to the manager and the developer in case of a failed test.

• Prometheus collects metrics in the meantime, while the performance test script is running.

• Keptn compares the results with those of the previous release and decides whether to
deploy to the production environment.

• The tester can check the correct functioning of the application by accessing it via the
URL of the Staging environment.

• The user can use the application by accessing it via the URL of the Production environment.

• The project manager supervises the pipeline:

– The user can check the correct functioning of the application by accessing it via
the URL of the Production environment.
– The system enables the project manager to monitor CPU usage, RAM usage and
in/out pressure on the network by pods.
– The project manager can know the state of the various applications existing on
the Kubernetes cluster (container creation, execution, waiting, etc.).

2.4 Non-functional requirements


After identifying the functional requirements, we present the set of constraints under
which our system must operate. These constraints ensure that the product satisfies user
requirements while avoiding system inconsistencies. The non-functional project requirements
may be summarized primarily as follows:

• Velocity: the response time of the system, that is to say the pipeline and the monitoring
panel, must be minimal. When the URL is opened, the monitoring panel must be
displayed almost instantaneously, as well as all the applications and software such as
SonarQube.

• Reliability: the system must ensure operational performance. Indeed, our CI/CD pipeline
must be able to meet the requirements of users in a comprehensive manner while
ensuring the whole orchestration process.

• Maintainability: the source code must be commented and well documented to facilitate
updates.

• Scalability: the system must support the evolution and scalability of its components to
be able to add as many nodes to the Cluster as needed.

• Authentication: access to Gitlab and the cluster must be secured through authentica-
tion.

• Security: security mechanisms are used in the configuration to ensure the security of the
system.

2.5 System of Reference


The general use case diagram provides a semi-formal overview of the key features the
system needs to address. Indeed, we identify four subsystems for our project: the first is the
configuration, the second is the CI, the third is the CD, and the last is the monitoring (see
the Figure 2.1).

Figure 2.1: Global use case diagram

Through these four subsystems:


• The developer has the possibility to act on the system in several ways. Indeed, he can:

– Submit or get code from Git.


– Follow a build on Gitlab CI.
– Receive an e-mail in case of a test or pipeline failure.
– Access the URL of the deployed application (Staging or Production environment).

• The project manager benefits from the features offered to the developer. In addition, he
can:

– Monitor the state of the cluster, namely RAM, CPU and network I/O pressure on
pods.
– Monitor the status of the application, that is to say, list existing pods and containers,
and specify which are functional and which are not.

• Testers can access the application through a URL exposed by the ingress gateway.

• The System Admin can create and set up a Kubernetes cluster as well as the load
balancer.
The functionalities offered by the rest of the actors will be detailed in the following sections.

2.6 Needs Interpretation

2.6.1 Configuration subsystem


The configuration component consists of setting up the CI/CD pipeline. The documentation
of this whole part makes it possible to propose practical use cases when setting up this
pipeline. The Figure 2.2 describes in detail the use case "Manage the configuration".

Figure 2.2: Use case diagram "Manage the configuration"

The Table 2.1 details the mechanism of the "Configure the cluster" use case.

Use case Configure the cluster.


Actor System Admin.
Description This use case describes the needed steps to configure the cluster.
Precondition A VM is created.
Nominal scenario
• The System Admin creates the cluster.

• The System Admin sets up the configuration to access the cluster.

• The System Admin manages cluster resources.

• The System Admin sets up the LoadBalancer.

• The System Admin sets up the NFS server.

Table 2.1: Textual description of the "Configure the cluster" use case

The Table 2.2 explains in detail the use case "Setup CI/CD pipeline".

Use case Setup CI/CD pipeline.


Actor System admin.
Description This use case describes the needed steps leading to the creation of CI/CD
pipeline.
Precondition A Kubernetes cluster is created and configured.
Nominal scenario
• The System Admin installs GitLab Runner with Docker on the local
machine.

• The System Admin configures Gitlab to run the CI pipeline on the
GitLab Runner.

• The System Admin sets up Keptn.

• The System Admin creates the two environments, Staging and
Production.

Table 2.2: Textual description of the "Set up CI/CD pipeline " use case

The Table 2.3 explains in detail the use case "Set up the monitoring system".

Use case Set up Monitoring system


Actor System admin.
Description This use case describes the needed steps leading to the setup of the monitoring
system.
Precondition A Kubernetes cluster is created and configured.
Nominal scenario
• Deploy Prometheus in the Staging environment.

• Deploy the Grafana dashboard.

Table 2.3: Textual description of the "Set up Monitoring system" use case

2.6.2 Continuous integration subsystem


The CI component translates into a pipeline that begins when a developer exposes his
code on the Git version management server and ends when the artifacts of the code are
deployed. The "CI" use case diagram presented in the Figure 2.3 explains the functionality
required in this part of the job.

Figure 2.3: Use case diagram "Manage CI"

The Table 2.4 explains the "Push code" use case.

Use case Push code.


Actor Developer.
Description This use case describes the procedure of depositing the code on the Git
server.
Precondition An HTTP or SSH connection is established between the developer workstation
and the Git server.
Nominal scenario
• The developer performs a "push" in order to deposit the code on
the Git server.

• The code is exposed on the Git server.

Alternative scenario
• The Git server detects a conflict.

• Resolve the conflict manually or automatically.

• Return to point 1 of the nominal scenario.

Exceptional scenario The SSH or HTTP connection between the developer workstation and the
Git server is not established; the code cannot be deposited on the server.

Table 2.4: Textual description of the "Push code" use case


The Table 2.5 describes in detail the "Trigger build" use case.

Use case Trigger build.


Description This use case describes the Gitlab work when a build is triggered.
Precondition Dockerfile created.
Actor Developer, Gitlab CI.
Nominal scenario
• Gitlab CI checks for code changes at the Git level after a commit.

• Gitlab CI detects a change and triggers a build.

• Gitlab CI passes the build stage and creates an artifact.

Exceptional scenario Gitlab CI does not detect any code changes in step 2 of the nominal
scenario. The nominal scenario resumes at point 2 after another commit
defined by the developer.

Table 2.5: Textual description of the "Trigger build" use case

The Table 2.6 describes the "Get code" use case.

Use case Get code.


Description It describes the procedure of getting the code from the Git server.
Precondition An HTTP or SSH connection is established between the developer workstation
and the Git server.
Actor Developer.
Nominal scenario
• The developer presses the "download artifact" button in order to
have the code in the form of an executable jar on his machine.

• The code is retrieved in the form of an executable jar on his


machine.

Alternative scenario The developer launches the "pull" command manually.


Exceptional scenario
• The "build" stage was not successful, the developer must check
the code and then submit it again.

• The ssh or http connection between the post user and the Git server
is not established, the code cannot be retrieved from the server.

Table 2.6: Textual description of the "Get code" use case



2.6.3 Continuous deployment subsystem


The use case diagram of the deployment of the application is explained by the figure 2.4.

Figure 2.4: Use case diagram "Manage CD"

The Table 2.7 presents a description of the "Trigger the application deploy" use case.

Use Case Trigger the application deployment.


Description This use case describes needed steps to deploy an application.
Precondition The build stage is successful and a Docker image is created.
Actor System Admin, Keptn, Prometheus.
Nominal scenario
• The last job in the Gitlab CI pipeline triggers Keptn to start the
deployment process in the Staging environment with Helm.

• Prometheus collects metrics while the performance test, triggered by
the deployment service, is running.

• Keptn evaluates the new release with the collected metrics and gives
the order to start the deployment to production or not.

• After deploying the application to Production, Keptn exposes the
URL of the application in both environments (Staging, Production).

Exceptional scenario The new release is slower than the previous one; the application is not
deployed to production.

Table 2.7: Textual description of the "Trigger the application deploy" use case

2.6.4 Monitoring subsystem


The supervision module consists of two parts: monitoring the cluster state and monitoring
the containers. The Figure 2.5 shows the use case diagram of the "monitoring" module.

Figure 2.5: Use case diagram "Monitor the system"

The Table 2.8 describes in detail the "Monitor the cluster state" use case.

Use Case Monitor the cluster state.


Description This use case describes the needed steps to supervise the cluster.
Actor Project manager.
Precondition A connection to the cluster is established.
Nominal scenario
• The project manager chooses the project he wishes to consult.

• The project manager creates an index pattern to select data concerning
the cluster state.

• The system displays graphic charts indicating the pressure of the
network inputs/outputs, RAM usage and CPU usage by the pods.

Table 2.8: Textual description of the "Monitor cluster state" use case

The Table 2.9 contains a detailed explanation of the "Monitor containers state" use case.

Use Case Monitor containers state.


Description This use case describes the needed steps to supervise containers.
Actor Project manager.
Precondition An application is deployed.
Nominal scenario
• The project manager chooses the project he wishes to consult.

• The project manager creates an index pattern to select data concerning
the container state.

• The system displays graphic charts indicating logs about the
container/application state.

Table 2.9: Textual description of the "Monitor container state" use case

2.7 Conclusion
After the analysis phase, we discussed the functional and non-functional requirements of
our project. Then, we modeled our requirements through a general use case diagram.
Moreover, we presented detailed use case diagrams related to every subsystem.
All this work has been developed to facilitate the transition to the design phase of our project,
which will be discussed in the following chapter.
Chapter 3

Architecture and Technologies

3.1 Introduction
The design stage forms the basis for the next phase of the project, its implementation, as it
describes the architecture of each component. In the following sections, we provide a more
in-depth analysis of the project by giving a more detailed description.
This chapter starts by presenting the architecture of each component of the project, and
then it explains in more detail the development environment used when building this
pipeline.

3.2 Global concepts


In this section, we focus on the concept of DevOps and its principles.

3.2.1 DevOps
DevOps[11] is a set of philosophies, practices, and tools that help an organisation to deliver
better products faster by facilitating the integration of the development and operations
functions. The word DevOps is a combination of the terms development and operations,
meant to represent a collaborative or shared approach to the tasks performed by a company's
application development and IT operations teams.
The concept of DevOps emerged out of a discussion between Andrew Clay Shafer and
Patrick Debois in 2008 [12]. They were concerned about the drawbacks of Agile and wanted
to come up with a solution. DevOps is associated with a series of principles.
The Figure 3.1 lists the DevOps principles.


Figure 3.1: DevOps workflow [4]

DevOps[13] requires a delivery cycle that comprises planning, development, testing,
deployment, release, and monitoring, with active cooperation between the different members
of a team.

3.2.2 Automation
The main idea of DevOps is to automate tasks as much as possible. Automation[14]
includes development, testing, configuration and deployment operations. It reduces human
errors and, if implemented correctly, provides gains in reliability. Moreover, there are also
gains in agility and repeatability. Developers and operations teams can concentrate on other
processes, such as adding value and quality to the product.

3.2.3 Continuous integration


Definition

Continuous integration (CI)[14] is defined as a set of practices used to ensure that each
modification in the source code does not produce problems of regression in the application
under development.
It is to say that no defect has been introduced to the part of the system that has not been
changed [15].
The main goal of this approach is to anticipate and quickly identify bugs before the
software goes into production.
This allows for a more complete view of the software, especially on the various weak
points and strong points of the code or of the team. This allows to gain in reactivity to face
the various problems which may be present in the various phases of the project.
To benefit from the advantages of the concept of continuous integration, it is necessary to
follow some rules of good practice, such as maintaining a single versioned repository for the
source code, performing several commits per day and per developer, automating code
compilation, and keeping compilation times short.

Functions

This method is based on the implementation of a software brick that will allow the
automation of several tasks:

• Automatic compilation of the code and its latest modifications.

• Unit and functional tests.

• Product validation based on several criteria.

• Performance tests in order to carry out certain optimizations.

Throughout the evolution of the project, this brick performs a set of tasks and tests. The
results produced can be viewed by the team of developers to understand what is problematic
in the latest code changes. This CI[16] method also makes it possible not to forget certain
elements when putting into production and therefore to improve the quality of the application.
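As an illustration of how such a software brick can be scripted, the listing below is a minimal
sketch of a GitLab CI configuration covering the first two tasks of the list above; the stage
names, job names and Maven image are assumptions for a Java-based service, not the actual
pipeline of this work.

    # Minimal, hypothetical GitLab CI configuration illustrating automated build and unit tests.
    # Stage, job and image names are examples only; the real project pipeline may differ.
    stages:
      - build
      - test

    build-job:
      stage: build
      image: maven:3.8-openjdk-11        # assumed build image
      script:
        - mvn package -DskipTests        # compile the code and its latest modifications
      artifacts:
        paths:
          - target/*.jar                 # keep the generated artifact for the following stages

    unit-test-job:
      stage: test
      image: maven:3.8-openjdk-11
      script:
        - mvn test                       # run the unit tests on every commit

With such a file at the root of the repository, every push triggers the two jobs automatically,
which is exactly the behaviour described above.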

3.2.4 Continuous delivery and Continuous deployment


Definition

Continuous delivery (CD)[14] is a set of practices that transforms the software life cycle. It
may be roughly summarized by the following expression: "all commits must create a new
version". Thus, each modification made by a developer is integrated into a new version of the
software. Continuous deployment (CD)[17] extends this principle to the effective deployment
in production of the new version created. In summary, each commit is directly pushed to
production [18].
The Figure 3.2 illustrates the pipeline of Continuous Integration, Delivery and Deploy-
ment.

Figure 3.2: CI/CD concept [5]



3.2.5 Multi-Stage Delivery


Definition

Multi-stage[19] continuous integration is a software development technique intended
to achieve highly integrated parallel development activity. The Figure 3.3 illustrates the
Multi-Stage delivery workflow.

Figure 3.3: Multi-Stage delivery [5]
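To make this concrete, such a multi-stage workflow can be declared as data rather than as a
scripted pipeline. The listing below is only a hedged sketch of a Keptn shipyard file with two
stages, staging and production; the stage, sequence and task names are illustrative assumptions
and the exact schema depends on the Keptn version used.

    # Hypothetical Keptn shipyard declaring a two-stage delivery workflow.
    apiVersion: "spec.keptn.sh/0.2.2"
    kind: "Shipyard"
    metadata:
      name: "shipyard-sketch"
    spec:
      stages:
        - name: "staging"
          sequences:
            - name: "delivery"
              tasks:
                - name: "deployment"      # deploy the new version to staging
                - name: "test"            # run the performance tests
                - name: "evaluation"      # evaluate the SLOs (quality gate)
        - name: "production"
          sequences:
            - name: "delivery"
              triggeredOn:
                - event: "staging.delivery.finished"   # only after staging succeeds
              tasks:
                - name: "deployment"
                - name: "release"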

3.2.6 Continuous monitoring


Definition

Fundamentally, Continuous Monitoring (CM), sometimes called Continuous Control
Monitoring (CCM), is an automated process by which a DevOps team can observe and detect
compliance issues and security threats during each phase of the DevOps pipeline. Outside
DevOps, the process may be expanded to do the same for any segment of the IT infrastructure
in question. It helps teams or organizations to monitor, detect and study key relevant metrics,
and find ways to resolve issues in real time [20]. CM comes in at the end of the DevOps
pipeline. Once the software is released into production, CM notifies the Dev and QA teams in
the event of specific issues arising in the production environment. It provides feedback on
what is going wrong, which allows the relevant people to work on the necessary fixes as soon
as possible.

Types of CM

These are the main types of Continuous Monitoring:

• Infrastructure Monitoring: monitors and manages the IT infrastructure required to


deliver products and services. This includes data centers, networks, hardware, software,
servers, and storage. Infrastructure Monitoring collates and examines data from the IT
ecosystem to improve product performance as far as possible.

• Application Monitoring: monitors the performance of released software based on


metrics like up-time, transaction time and volume, system responses, API responses,
and general stability of the back-end and front-end.

• Network Monitoring: monitors and tracks network activity, including the status and
functioning of firewalls, routers, switches, servers, Virtual Machines, etc. Network
Monitoring detects possible and present issues and alerts the relevant personnel. Its
primary goal is to prevent network downtime and crashes.

Logs

Logs are a specific type of message generated by the system to inform the user when an
event has happened. A log message contains the log data, and each event in a system will
have a different set of data in its message.

Metrics

Raw data can be acquired from a variety of sources to create metrics. Hardware, sensors,
apps, and websites are examples of these sources. These sources can generate data such as
resource utilization, performance, or user behavior. This can be operating system data or
higher-level data types relating to a specific feature or component operation. Metrics are
typically gathered on a regular basis, such as once per second, once per minute, or any other
time interval, depending on the features of the indicators and the metrics’ tracking purposes.
Metrics can be used to track progress, identify significant events, and predict potential lapses.
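As a small illustration of how metrics can be gathered at a fixed interval, the snippet below
sketches a Prometheus scrape configuration; the job name, scrape interval and target address
are placeholders, not the configuration used in this project.

    # prometheus.yml (excerpt) - hypothetical scrape configuration
    global:
      scrape_interval: 15s            # collect metrics every 15 seconds

    scrape_configs:
      - job_name: "reference-app"     # placeholder job name
        metrics_path: /metrics        # endpoint assumed to be exposed by the application
        static_configs:
          - targets: ["reference-app.staging.svc.cluster.local:8080"]  # assumed service address

Assuming the application exposes a counter such as http_requests_total, a query like
rate(http_requests_total[5m]) would then track request throughput over the chosen time
window.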

3.3 Physical architecture


To meet the requirements previously expressed, we propose a physical architecture based
on a virtual machine on our server on which we will be creating our cluster [21].

3.3.1 Virtual machine


Server virtualization makes it possible to operate and run multiple virtual machines on a
single physical machine as if they were running on distinct physical machines. One of the
most recognized strengths of this category of virtualization is that, in the event of the total
loss of a site or of an exceptional event, there will not be any dramatic impact on the activity
of the machines. The Figure 3.4 shows the architecture of a Virtual Machine (VM).

Figure 3.4: VM architecture [6]

3.3.2 Kubernetes architecture


Kubernetes[22] architectures are based on five main concepts:

• Pods: these are the most basic objects that can be managed in Kubernetes. Each pod
can consist of one or more containers.

• Kubernetes nodes: the machines on which the pods run. They can be physical or
virtual machines.

• The cluster: a set of nodes running an application. It must imperatively have a master.

• The master: it plays the role of administrator of the system; it controls and commands
the machines in the cluster.

• The kubelet: on each node, it checks and ensures that all the containers are running.

The Figure 3.5 shows a global overview of the Kubernetes objects:

Figure 3.5: Kubernetes architecture [7]

Other abstractions exist in Kubernetes related to the creation and management of pods
which are:

• Replica sets and replication controllers: can create and destroy pods dynamically in
order to ensure continuity of service by ensuring that the defined number of pods
(replicas) is running at all times. If some pods fail or are terminated, the replica set or
the replication controller will replace them automatically.

• A service: acts as an endpoint for a set of pods by exposing a stable IP address to the
outside world. This helps to hide the complexity of the scheduling dynamics of pods
within a cluster, since pods can be created and destroyed constantly.

• Deployment: introduced more recently in Kubernetes to manage replica sets. It
involves describing the desired state of the system, and the deployment is responsible
for orchestrating the pods to ensure that the described state matches the deployed state
at all times (see the manifest sketch below).
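The listing below illustrates these last two abstractions with a Deployment and a Service
declared in YAML; the application name, image and ports are assumed values and not the
manifests actually used in the project.

    # Hypothetical manifests illustrating the Deployment and Service abstractions.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: reference-app              # assumed application name
    spec:
      replicas: 2                      # desired number of pod replicas kept running at all times
      selector:
        matchLabels:
          app: reference-app
      template:
        metadata:
          labels:
            app: reference-app
        spec:
          containers:
            - name: reference-app
              image: registry.example.com/reference-app:1.0.0   # placeholder image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: reference-app
    spec:
      selector:
        app: reference-app             # the Service exposes the pods matching this label
      ports:
        - port: 80
          targetPort: 8080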

3.4 Architecture proposed


One of the constraints of the existing architecture is the provision of mechanisms for
CI/CD. As part of our architecture, we have implemented our CI/CD flow as explained in
the Figure 3.6. The flow can be explained through the following steps:

1. The flow begins with a commit of the source code to the version control system.

2. The version control system notifies the CI/CD server to inform it of the changes performed in the code repository.

3. The CI server (Gitlab CI) builds the artifact.

4. Code compilation and testing are performed on this artifact.

5. The application is put through security tests.

6. The CI server (Gitlab CI) builds a new Docker image and pushes it to the Docker registry.

7. Gitlab CI triggers Keptn to start the deployment process.

8. Keptn deploys the application to the staging environment.

9. After the deployment to staging, JMeter performs performance tests against the application.

10. While the tests are running, Prometheus collects metrics such as the request response time and the number of failed requests.

11. Keptn compares these metrics with those of the previous release.

12. Finally, if everything is working well, Keptn deploys the application to the production environment.

Figure 3.6: Solution architecture



3.5 Technologies and work environment

3.5.1 Choice of source code management tool


Software Configuration Management allows better visibility and tracking of the source code by enabling collaboration between developers working on the same project. GitLab is a free and open source code management system written in Ruby and based on Git. This tool offers several features for tracking project-related issues and for code reviews [23].
It includes a wiki and gives its users full control over their projects and repositories.
The Figure 3.7 illustrates projects on Gitlab.

Figure 3.7: Projects on Gitlab

3.5.2 Choice of model representation language


The two most common data structuring languages are YAML and JSON. JSON stands for JavaScript Object Notation and is generally used for client-server communication.
For the development of our scripts, we used the serialization language YAML, originally an acronym for "Yet Another Markup Language" (now "YAML Ain't Markup Language"), because it gives a simpler approach to data representation than JSON. In addition, it is widely used for configuration files thanks to its readability, and it provides powerful configuration parameters without having to learn a more complex type of code like CSS, JavaScript or PHP.
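As a small illustration, the snippet below shows a configuration fragment expressed in YAML; the keys and values are purely illustrative:

# Illustrative configuration fragment; in JSON the same data would require
# braces, quotes and commas, e.g. {"application": {"name": "employee-manager", ...}}
application:
  name: employee-manager
  replicas: 2
  environments:
    - staging
    - production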

3.5.3 Choice of CI tools


A tool that ensures continuous integration and continuous deployment is needed to automate the application life-cycle, which should be quick and short. Such automation tools allow teams to deliver their solutions as quickly as possible. The Table 3.1 presents a comparison between different CI tools.

Criteria             Jenkins    Circle CI    Gitlab CI

Open source          Yes        No           Yes
Pipeline             Groovy     Yaml         Yaml
Monitoring           No         No           Yes
Container Registry   No         No           Yes
Cloud Native         Yes        Yes          Yes

Table 3.1: Summary of CI/CD tools comparison

Following this comparative study, we adopted Gitlab CI, since it provides several features that Jenkins and Circle CI do not. Indeed, Gitlab CI is open source, and its free version is sufficient for our project. Moreover, Gitlab has its own container registry, which solves the internal registry problem when we need to store our Docker images. The Figure 3.8 shows a CI/CD pipeline on our Gitlab account.

Figure 3.8: CI pipeline on Gitlab

3.5.4 Choice of Continuous Delivery tool


Continuous delivery is a key part of cloud-native software development processes, as
it aims to develop, test, and release software with increased speed, frequency, and quality.
Delivery pipelines have been the tool of choice so far because there was no alternative.
However, even simple changes to these pipelines, such as replacing a test tool, can be a
herculean task.
In order to bring the no-pipeline approach to life, Keptn[24] appeared. Keptn is an open-source, event-based control plane for cloud-native applications that allows for continuous delivery and automated operations. It employs a declarative approach that allows developers to declare DevOps automation flows, such as delivery or operations automation, without having to script every aspect. Without the need to create distinct pipelines and scripts, this specification may be shared across any number of microservices. Keptn separates the process specified by SREs from the actual tooling defined by DevOps engineers, as well as from the information about the artifacts.
In contrast to typical pipelines, where everything is saved in a single file, these definitions can be managed independently. This prevents employees from accidentally breaking workflows, and because they are managed in Git, a complete history of changes is available.

3.5.5 Istio tool


Istio[25] is an open source implementation of a service mesh for Kubernetes. Istio integrates a network traffic proxy into each Kubernetes pod by means of a sidecar container, a container running alongside the service container in the same pod. Since they run in the same pod, the two containers share IP, lifecycle, resources, network, and storage.
Istio uses the Envoy proxy as the network proxy inside the sidecar container and configures the pod to send all incoming and outgoing traffic through the Envoy proxy (sidecar container). When using Istio, communication between services is therefore not direct: when service A requests service B using its DNS name, the request is intercepted by service A's proxy container. Then, service A's proxy container forwards the request to service B's proxy container, which eventually invokes the real service B. The reverse path is followed for the response.

3.5.6 Docker tool


Docker is an open source tool designed to make it easier to manage applications by using
containers [26]. It provides an implementation that standardises the use of containers on
different platforms. The main components are:

• Docker Engine: it is the core of the platform. It is a daemon process executed in the background on the host machine. Docker Engine provides access to all the functionalities and services made available by Docker. A Docker container can be moved across different machines (with Docker Engine installed) and it will work in the same way, even if the machines are running two different OSes. This is a great achievement, as it is often the case that something gets broken when an application is moved to a different execution environment.

• Docker Client: it communicates with the Docker daemon (Docker Engine). It is not necessary for the Docker Client and the daemon to run on the same physical machine; the Docker Client can connect to a remote Docker daemon. Communication takes place through a REST API, over UNIX sockets or a network interface.

• Docker Image and Container: a Docker image is an immutable template that contains a set of instructions for creating a Docker container, and a Docker container is a running instance of an image. The image is immutable, so it never changes; this is a strong advantage, since users always know what they are going to run, independently of the environment. Generally, an image is identified by registry/user/imageName:tag, where latest is the default tag. Images are modular: an image can be composed of many read-only layers, which are images too. When a container is created from an image, a new writable layer, called the container layer, is added on top. Even if a container is stopped and restarted, it maintains the changes within its filesystem. Thanks to this underlying modularity, multiple images can share the same N base layers, which is an advantage in terms of memory usage.

• Docker Registry: Docker images are pulled from and pushed to repositories, and repositories live inside registries. Every host has its own local registry, and a user can create their own remote registries. The official remote Docker registry is Docker Hub.

• Dockerfile: it is possible to build a new image from a Dockerfile, a text file that contains the set of instructions used to build the image.

The Figure 3.9 illustrates how docker works.

Figure 3.9: Docker [8]

3.5.7 Helm tool


Helm[27] is an open-source packaging tool that helps to install and manage the Kubernetes application lifecycle. Like Linux package managers such as APT and Yum, Helm is used to manage Kubernetes charts, which are preconfigured packages of Kubernetes resources.
Helm allows us to create a framework for clearly defined microservices. It manages scalability needs and facilitates the addition of nodes and pods to the Kubernetes cluster as needed.

Instead of working with a monolithic image and increasing its resources, we only run the necessary set of images and scale them independently.

3.5.8 Choice of the orchestration system


Applications are generally made up of several components that are put in containers and that must be organized at the network level so that the application can work properly. This process of organizing containers is called container orchestration. There are a few platforms, such as Apache Mesos, Google Kubernetes and Docker Swarm, which each offer their own methodologies for container management [28]. These container orchestration engines allow users to control the launching and stopping of containers, their grouping into clusters, and the coordination of all the processes that make up an application. Orchestration tools also help guide the deployment of containers and automate updates, health monitoring and failover procedures. In the following, we present the three orchestration platforms: Apache Mesos, Google Kubernetes and Docker Swarm.

• Kubernetes (also known as “k8s”): an open source project published by Google. It first appeared in June 2014 and is written in Go. It draws on Google's experience in large-scale container management. A number of other platforms provide Kubernetes support, including Red Hat OpenShift and Microsoft Azure. Kubernetes offers the scheduling of containers on hosts and primarily supports Docker as the container engine. In addition, Kubernetes offers other main features including automatic scaling, load balancing, volume management and management of secrets. Furthermore, a web user interface makes it easier to manage and troubleshoot the cluster. With these features included, Kubernetes often requires less third-party software than Swarm or Mesos.

• Docker Swarm: Docker's native container orchestration engine. Initially released in November 2015, it is also written in Go. Swarm is tightly integrated with the Docker API, which makes it well suited for use with Docker. The same primitives that apply to a single Docker instance are used with Swarm. This can simplify the management of the container infrastructure, as there is no need to configure a separate orchestration engine. Swarm does not yet support native automatic scaling; scaling must be done manually. In addition, Swarm includes ingress load balancing, but external load balancing is done through a third-party load balancer such as AWS ELB. It should also be noted that Swarm has no web interface.

• Mesos: Apache Mesos version 1.0 was released in July 2016, but it had been under development since 2009 by doctoral students at UC Berkeley. Unlike Kubernetes and Docker Swarm, Mesos is written in C++ and differs in its management of data center and cloud resources, which relies on a distributed approach. This means that Mesos takes a more modular approach to container management, allowing users more flexibility in the types of applications they can run.

Based on this study, we chose to use Kubernetes as our orchestration system. Indeed, k8s allows companies to:

• No longer focus on implementation details, but on how applications operate.

• Automate the deployment of containerized applications.

• Simplify the scaling of containerized applications: it automatically tracks demand and resource usage and manages scaling.

• Deploy applications and new versions on a continuous basis while reducing maintenance time. Indeed, its mechanisms are important for updating containers and for reverting to earlier versions in the event of a problem.

• Support APIs to discover container services, storage and networks.

• Run on multiple types of environments such as public clouds and physical or virtual hardware.

3.5.9 Kubernetes management tool


Helm is the package manager for Kubernetes. It helps to manage Kubernetes applications using packages called charts. The primary role of Helm is to help define, install, and update complex Kubernetes applications. Helm allows us to create a framework for clearly defined microservices. It manages scalability needs and makes it easy to add nodes and pods to the Kubernetes cluster as needed. Instead of working with a monolithic image and increasing its resources, we run only the necessary set of images and scale them independently [29]. Helm is used to:

• Manage Complexity: charts describe even the most complex apps, provide repeatable
application installation, and serve as a single point of authority.

• Easy Updates: take the pain out of updates with in-place upgrades and custom hooks.

• Simple Sharing: charts are easy to version, share, and host on public or private servers.

• Rollbacks: use Helm rollback to roll back to an older version of a release with ease.

3.5.10 Choice of security test tool


Snyk [30] is a developer security platform. Integrating directly into development tools,
workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix
security vulnerabilities in code, dependencies, containers, and infrastructure as code. Sup-
ported by industry-leading application and security intelligence, Snyk puts security expertise
in any developer’s toolkit.

Snyk can be integrated in many stages:



• Integration in the IDE

• Connection to the code repository

• Integration in the CI/CD pipeline

3.5.11 Choice of monitoring tools


To make our business a success, we need an efficient monitoring system that covers all aspects of our company and infrastructure: our servers, databases, services, overall traffic and even collected revenue. There are multiple open source monitoring tools available; we will try to choose the one that best fits our needs.

• Nagios[31] is an industry leader in IT infrastructure monitoring. It offers multiple solutions to meet R&D needs, addressing both business and technical challenges. Nagios facilitates the high availability of applications by providing information about database performance.

• Zabbix[32] is a software for monitoring numerous parameters of networks, servers, applications, virtual machines, and cloud services. It can collect metrics, detect problems, visualize data and send notifications. Zabbix has a web interface providing easy interaction with all statistics, visualizations, and parameter settings. Zabbix does not store data itself, but it can use a broad range of databases. Zabbix's backend is written in C and the web frontend is in PHP.

• Prometheus is a metric collection tool for processing time series data. Many R&D organizations choose Prometheus as their main monitoring data source because it is easily integrated into most software architectures, quickly integrates with most modern technologies, and is easy to set up and maintain. Prometheus comes with a built-in database for collecting time series data, a dedicated query language to take advantage of the multidimensionality of the database, and service discovery capabilities that help monitor new components and services after they are deployed as part of the application stack. Prometheus exporters allow data to be collected from services that Prometheus cannot detect and recognize automatically, and the Prometheus Alertmanager pushes notifications about threshold violations to external collaboration and on-call tools.

In the Table 3.2 we explore the benefits and limits of Prometheus, Nagios and Zabbix. Each of these solutions provides exceptional benefits. The comparison is based on several differentiating criteria, from capabilities to the complexity of setup and maintenance, the visualization and alerting capabilities, and the community behind each solution.

Function               Prometheus                               Nagios                        Zabbix

Strengths              Powerful, easy-to-use monitoring         The standard                  Base metrics out of the box
Alerts                 Yes                                      Yes                           Yes
User Interface         Grafana UI integration                   Dedicated infrastructure UI   The UI is in PHP
Querying flexibility   Beautiful model and query language       Limited                       Limited

Table 3.2: Comparative study on monitoring tools

As can be seen, Prometheus, Zabbix, and Nagios are all great monitoring tools, with Prometheus being the most advanced. Like all products, they have their own sets of advantages and disadvantages. Prometheus offers a number of capabilities that make it a useful tool for tracking data and creating graphs and alerts. As a result, it is the winner of this comparison.

3.5.12 Choice of visualization tool


Grafana[33] is a cross-platform, web-based analytics and interactive visualization software. It generates web-based charts, graphs, and alerts when connected to compatible data sources. The data source in our case will be our Prometheus server.
Installing such a tool is rather straightforward; we use the most recent stable version. After obtaining and installing the appropriate packages, it is time to start using Grafana and create some dashboards. Grafana's user interface can be accessed via its URL in any web browser, and it listens on port 3000 by default. Our information may be searched, viewed, and analyzed. Grafana's elegant and customizable dashboards allow us to design, review, and visualize all of our infrastructure from a single location.

3.6 Conclusion
In this chapter, we have introduced some theoretical concepts that our project relies on.
Then, we exposed the physical and logical architecture of our solution. At the end of this
chapter, we justified our choices of technologies for the realization part.
Chapter 4

Implementation

4.1 Introduction
In this chapter, we describe the steps taken to create the CI/CD pipelines and the monitoring system, starting with the needed configurations. Each part is illustrated by explanatory screenshots.

4.2 Develop the reference application


First of all, we need to prepare the reference application. The Figure 4.1 shows the simple application, built with a 3-tier architecture, that allows the HR department to add, update, and delete an employee of a company.

Figure 4.1: Employee manager application


4.3 Setup the Environment


There are a few configuration steps that need to be done in order to create the cluster and manage its resources.

4.3.1 Setup the Kubernetes cluster


We must establish a cluster in which all tools and application pods will be executed. The cluster is composed of two nodes: one master node and one worker node. The figures 4.2 and 4.3 show the cluster nodes architecture and the list of the cluster nodes, respectively.

Figure 4.2: Cluster architecture

Figure 4.3: Nodes status

4.3.2 Setup the load balancer


MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.
In a cloud environment, the creation and allocation of external IPs is the responsibility of the cloud provider. In a bare metal environment like ours, it is the responsibility of MetalLB. MetalLB assigns IPs from a reserved pool of IP addresses that we allocate via a ConfigMap. Once the external IP is assigned, MetalLB redirects the traffic sent to this external IP towards the cluster.
The Figure 4.4 shows the list of the MetalLB pods in the cluster.

Figure 4.4: LoadBalancer pods
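As an illustration, MetalLB can be configured through a ConfigMap similar to the following sketch (this is the legacy ConfigMap-based configuration used by older MetalLB releases; recent versions rely on custom resources instead, and the address range shown is purely illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250    # illustrative pool of external IPs reserved for services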

4.3.3 Setup the NFS client


Requesting a persistent volume claim will automatically trigger the creation of a persistent volume. For this to happen, an NFS client provisioner must be installed in the Kubernetes cluster, and access must be given to the client using a service account.
The Figure 4.5 shows the architecture of the NFS server and the client provisioner.

Figure 4.5: NFS Architecture
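With the provisioner in place, a persistent volume claim similar to the following sketch is enough to obtain storage; the claim name and the StorageClass name are assumptions that depend on how the NFS client provisioner was installed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                    # illustrative claim name
spec:
  storageClassName: nfs-client      # assumed StorageClass exposed by the NFS client provisioner
  accessModes:
    - ReadWriteMany                 # NFS allows the volume to be mounted by several pods
  resources:
    requests:
      storage: 1Gi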

4.3.4 Tools deployment


4.3.4.1 Keptn CLI

We need to package the Keptn CLI in a Docker image. This Docker image will be used in the last step of the integration pipeline to send the trigger that starts the deployment process.

4.3.4.2 Keptn

Keptn must be installed in the Kubernetes cluster and all of its components must be running. The Figure 4.6 shows the Keptn pods status on the cluster.

Figure 4.6: Keptn pods status

4.3.5 Install Istio


We are using Istio for traffic routing and as an ingress to our cluster. In order to expose our application in the two environments (staging and production), as well as our monitoring dashboards, we need an ingress gateway through which all this traffic is exposed.
The Figure 4.7 shows the Istio ingress-gateway pods running on the cluster.

Figure 4.7: Istio ingress-gateway
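The ingress itself is declared through an Istio Gateway resource; a minimal sketch is shown below (the gateway name and the wildcard host are illustrative, and a real configuration also includes VirtualServices that route traffic to the application and to the dashboards):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway              # illustrative name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway           # binds this Gateway to the default Istio ingress-gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"                       # accept traffic for any host; a real setup would restrict this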



4.4 Building the continuous integration pipeline


There are a few configuration steps that need to be done in order to set up the continuous integration pipeline.

4.4.1 Gitlab Configuration


4.4.1.1 Gitlab-runner

Gitlab runner (figures 4.8 and 4.9) is a build agent used to run jobs over multiple machines and send the results to GitLab; it can be installed on separate user machines, servers and local machines. After installation, it can be registered as shared or specific. In our case, the pipeline runs in a Docker container on a personal PC using a local gitlab-runner. The Figure 4.8 shows the main functionality of the gitlab runner.

Figure 4.8: Gitlab runner architecture

The Figure 4.9 shows the configuration of the gitlab runner in the gitlab interface

Figure 4.9: Gitlab runner web interface



4.4.1.2 Gitlab environment

CI/CD variables are a type of environment variable. They can be used to:

• Control the behavior of jobs and pipelines.

• Store values that will be re-used.

• Avoid hard-coding values in the .gitlab-ci.yml file.

The Figure 4.10 illustrates variables configured on Gitlab project.

Figure 4.10: Gitlab variables

4.4.2 Docker image creation


For each part of the application, a Docker image must be built using a Dockerfile: one for the front-end and another for the back-end. The Figure 4.11 presents the Dockerfile of the Angular application used in the realization part.

Figure 4.11: Frontend Dockerfile

This Dockerfile describes a multi-stage Docker build, which is divided into the following stages:

• Building the Angular source code into a production-ready output.

• Serving the application using an NGINX web server.

The Figure 4.12 presents the Dockerfile for a SpringBoot application

Figure 4.12: Backend Dockerfile

4.4.3 CI pipeline steps


In this section, we present the creation of the .gitlab-ci.yml file at the root of the project repository. This file contains the CI script. The pipeline is basically composed of two necessary elements:

• Jobs, which define what to do.

• Stages, which define when to run the jobs.

Gitlab runs these steps in the following order:

• build the project

• run the unit tests

• run the security tests

• build the Docker image of the project

• push the Docker image to the Docker registry (Docker Hub in our case)

• notify Keptn via the Keptn CLI to start the deployment process (a sketch of such a pipeline is given after the figures below)

The Figure 4.13 presents the different steps of the CI pipeline



Figure 4.13: CI pipeline

The Figure 4.14 presents the Snyk security tests in the pipeline

Figure 4.14: Security tests
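The following sketch gives an idea of what such a .gitlab-ci.yml can look like. It is not the exact pipeline used in the project: the images and variable names (DOCKER_USER, DOCKER_PASSWORD, SNYK_TOKEN, KEPTN_ENDPOINT and KEPTN_API_TOKEN are assumed to be defined as GitLab CI/CD variables) as well as the Keptn CLI flags are assumptions that depend on the tool versions in use:

stages:
  - build
  - test
  - security
  - docker
  - deploy

build-app:
  stage: build
  image: node:16                     # assumed build image for the Angular front-end
  script:
    - npm ci
    - npm run build

unit-tests:
  stage: test
  image: node:16
  script:
    - npm test

snyk-scan:
  stage: security
  image: node:16
  script:
    - npm install -g snyk
    - snyk test                      # Snyk reads the SNYK_TOKEN CI/CD variable for authentication

docker-build-push:
  stage: docker
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USER" --password-stdin
    - docker build -t "$DOCKER_USER/employee-manager-front:$CI_COMMIT_SHORT_SHA" .
    - docker push "$DOCKER_USER/employee-manager-front:$CI_COMMIT_SHORT_SHA"

trigger-keptn:
  stage: deploy
  image: registry.example.com/keptn-cli:latest   # assumed image containing the Keptn CLI
  script:
    - keptn auth --endpoint="$KEPTN_ENDPOINT" --api-token="$KEPTN_API_TOKEN"
    - keptn trigger delivery --project=employee-manager --service=front --image="$DOCKER_USER/employee-manager-front:$CI_COMMIT_SHORT_SHA"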

4.5 Prepare the multi-stage delivery workflows


Keptn allows us to declaratively define multi-stage delivery workflows by specifying what needs to be done. How to achieve this delivery workflow is then left to other components; here as well, Keptn provides deployment services that allow us to set up a multi-stage delivery workflow without a single line of pipeline code.

4.5.1 Create Keptn project


To create a Keptn project, a shipyard file must be provided when the project is created.

In the shipyard.yaml we define two stages called staging and production, each with a single sequence called delivery. The staging stage defines a delivery sequence with a deployment, test, evaluation and release task (along with some other properties), while the production stage only includes a deployment and release task. The production stage also features a triggeredOn property which defines when the stage will be executed (in this case, after the staging stage has finished the delivery sequence). With this, Keptn sets up the environment and makes sure that tests are triggered after each deployment and are then evaluated by the Keptn quality gates. Keptn performs a direct deployment (i.e., a single deployment that receives all the traffic) and triggers a performance test in the staging stage. Once the tests complete successfully, the deployment moves into the production stage using another direct deployment. The Figure 4.15 presents the Keptn project with the different stages.

Figure 4.15: Keptn project
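A shipyard corresponding to this description could look like the following sketch, based on the Keptn 0.2 shipyard specification; the project name and property values are illustrative, not the exact ones used in the project:

apiVersion: "spec.keptn.sh/0.2.2"
kind: "Shipyard"
metadata:
  name: "shipyard-employee-manager"        # illustrative name
spec:
  stages:
    - name: "staging"
      sequences:
        - name: "delivery"
          tasks:
            - name: "deployment"
              properties:
                deploymentstrategy: "direct"
            - name: "test"
              properties:
                teststrategy: "performance"   # triggers the JMeter tests after deployment
            - name: "evaluation"              # evaluated by the Keptn quality gates
            - name: "release"
    - name: "production"
      sequences:
        - name: "delivery"
          triggeredOn:
            - event: "staging.delivery.finished"   # run only after staging has finished
          tasks:
            - name: "deployment"
              properties:
                deploymentstrategy: "direct"
            - name: "release"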

4.5.2 Integrate the performance test script


4.5.2.1 Create the performance script

The performance test script can be created with the JMeter GUI and then exported as a .jmx file.
The Figure 4.16 presents the JMeter GUI interface.

Figure 4.16: GUI Jmeter interface

4.5.2.2 Integrate the script with Keptn

The JMeter script can be integrated into the Keptn project as a service; Keptn supports a JMeter service for this purpose. The performance tests are triggered when the application is successfully deployed to the staging environment. JMeter then transmits the test results to the administrator via email. The Figure 4.17 shows the result e-mail sent by the JMeter script.

Figure 4.17: E-mail sent by jmeter

4.5.3 Create application services


It is necessary to construct a help chart for the target application in order to create a keptn
service. The Figure 4.18 shows the application services on Keptn

Figure 4.18: Application services
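A minimal chart skeleton for one of the services could look like the following sketch; the chart name, image repository and versions are illustrative:

# Chart.yaml
apiVersion: v2
name: employee-manager-front          # illustrative chart name
description: Packages one service of the reference application
type: application
version: 0.1.0                        # version of the chart itself
appVersion: "1.0.0"                   # version of the packaged application

# values.yaml (excerpt)
image:
  repository: registry.example.com/employee-manager-front
  tag: "1.0.0"
replicaCount: 1

The chart is then provided to Keptn (for example with the keptn add-resource command) so that its deployment service can install it with Helm in each stage.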

4.5.4 Setup Prometheus monitoring


After creating the project and its services, we need to set up Prometheus and the Prometheus Alertmanager.

4.5.5 Setup the quality gate


After deploying the application in the staging environment, we perform several tests, such as performance tests with JMeter. While these tests are running, the Prometheus monitoring service collects metrics (like the request response time) and compares them with those of the previous release; if everything is working well, the feature gets a green light for deployment to production.
Keptn requires a performance specification for the quality gate. This specification is described in a file called slo.yaml, which specifies the Service Level Objectives (SLOs) that should be met by a service. The Figure 4.19 shows the quality-gates dashboard on the Keptn bridge.

Figure 4.19: Quality-Gates
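A sketch of such an slo.yaml is given below; the objective, thresholds and scores are illustrative values, not the exact ones used in the project:

spec_version: "1.0"
comparison:
  aggregate_function: "avg"
  compare_with: "single_result"       # compare against the previous evaluation result
  include_result_with_score: "pass"
  number_of_comparison_results: 1
objectives:
  - sli: "response_time_p95"          # SLI collected through the Prometheus integration
    pass:
      - criteria:
          - "<=+10%"                  # no more than 10% slower than the previous release
          - "<600"                    # and below 600 ms in absolute terms
    warning:
      - criteria:
          - "<=800"
    weight: 1
total_score:
  pass: "90%"
  warning: "75%"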

The Figure 4.20 presents the tasks in the staging environment.

Figure 4.20: Staging tasks



4.6 Monitoring

4.6.1 Snyk Dashboard


Snyk is a developer security platform for securing code, dependencies, containers, and infrastructure as code. The Snyk dashboard displays application vulnerabilities discovered in Docker images as well as vulnerabilities discovered using the CLI in the pipeline. The Figure 4.21 shows the vulnerabilities detected by the CLI or on the Docker registry.

Figure 4.21: Snyk vulnerabilities dashboard

The Figure 4.22 shows the vulnerabilities detected in the front-end Docker image, sorted by their severity.

Figure 4.22: front container vulnerabilities



4.6.2 Setup Prometheus


A Prometheus exporter is a piece of software that fetches statistics from another system (databases, hardware, APIs, etc.), turns this data into Prometheus metrics using a client library, and launches a web server exposing a URL that displays the system metrics. In our case, we used two exporters: the Node exporter, which is a Prometheus exporter for hardware and OS metrics, and a database exporter for PostgreSQL server metrics.
Prometheus is installed on the Kubernetes cluster, and its URL is exposed via the gateway.
The Figure 4.23 shows the metrics collected by Prometheus.

Figure 4.23: Prometheus metrics
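For reference, the scrape configuration for the two exporters boils down to entries of the following form; the service names and namespaces are assumptions of this sketch, while ports 9100 and 9187 are the usual defaults of the node exporter and of the PostgreSQL exporter:

scrape_configs:
  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-exporter.monitoring.svc:9100"]      # hardware and OS metrics
  - job_name: "postgres-exporter"
    static_configs:
      - targets: ["postgres-exporter.monitoring.svc:9187"]  # PostgreSQL server metrics

In practice, the Prometheus Helm chart (or the Keptn Prometheus integration) can generate equivalent entries automatically through Kubernetes service discovery.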

After collecting all of the metrics and storing them in our time-series database, it is now time to visualize them. The following section describes our Grafana implementation.

4.6.3 Grafana Dashboard


Grafana is a web-based analytics and interactive visualization program that runs on a variety of platforms. When connected to supported data sources, it produces web-based charts, graphs, and alerts. In our case the data source is our Prometheus server. Grafana is installed on the Kubernetes cluster, and its URL is exposed via the gateway.
The Figure 4.24 shows the Grafana dashboard built from the metrics collected by Prometheus.

Figure 4.24: Grafana dashboard
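Connecting Grafana to Prometheus can be automated with a datasource provisioning file such as the following sketch; the in-cluster URL of the Prometheus service is an assumption of this example:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                      # Grafana queries Prometheus server-side
    url: http://prometheus-server.monitoring.svc.cluster.local:9090   # assumed service URL
    isDefault: true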

4.7 Conclusion
During this last chapter, we implemented our solution, which was developed over four stages: a stage to set up the environment, a stage consisting of the implementation of the CI pipeline, a stage for the implementation of the multi-stage delivery workflows, and a stage for implementing the monitoring system. Afterwards, we presented the various screenshots of the realized work.
General conclusion

We summarize the work done within the company Altersis at the end of this report. Indeed,
our effort is part of a project that aims to simplify, facilitate, and automate the integration,
deployment, and monitoring of applications in order to ensure their correct performance and
security. This report outlines all of the steps that we took to attain the desired outcome.
We began the first chapter of this report by describing the broad environment of our project, presenting the host organization and explaining the challenge. Then we conducted a review of the current solution and identified its flaws. We also presented the methodology we used throughout the creation of our project.
The second chapter was dedicated to determining the project's functional and non-functional requirements. After that, we used use-case diagrams with rich textual descriptions to model this requirements study.
We described concepts of DevOps approach that our project relies on in the third chapter.
The system’s physical and logical architecture were then identified through a conceptual
analysis. The fourth chapter was all about putting our solution into action. We began by
setting out the necessary configuration. Then we demonstrated our solution’s user interfaces
and described its functions.
During this work, we discovered that, despite its performance and relevance, Keptn is an
unstable tool, with which we have encountered a number of issues (there are 290 open issues
in the main Github repository), and that there is a significant difference between the various
versions of Keptn.
As prospects, we may improve our offering by employing paid technologies integrated into GitLab to optimize our pipeline. We also propose that, in the multi-stage delivery workflow, another tool be used instead of Keptn.

Bibliography

[1] “Altersis performance.” Available on the site: https://www.altersis-performance.com/. Accessed on 2022-05-16.

[2] “Waterfall or agile?.” Available on the site: http://ouriken.com/blog/which-one-is-right-for-you-waterfall-or-agile/. Accessed on 2021-06-27.

[3] “Scrum process.” Available on the site: https://medium.com/@realjoselara/agile-scrum-process-in-a-nutshell-6ec32a59efb. Accessed on 2021-06-03.

[4] “Devops lifecycle.” Available on the site: https://www.altexsoft.com/blog/devops-tools. Accessed on 2021-06-09.

[5] “Elements of a ci/cd pipeline.” Available on the site: https://www.redhat.com/fr/topics/devops/what-cicd-pipeline. Accessed on 2021-06-08.

[6] “Virtual machines (vms) and containers.” Available on the site: https://labs.sogeti.com/virtual-machines-vms-and-containers. Accessed on 2021-06-11.

[7] “Kubernetes components.” Available on the site: https://kumargaurav1247.medium.com/components-of-kubernetes-architecture-6feea4d5c712. Accessed on 2021-06-25.

[8] “Docker networking.” Available on the site: https://www.edureka.co/blog/docker-networking/. Accessed on 2021-06-05.

[9] “Agile method.” Available on the site: https://agiliste.fr/introduction-methodes-agiles. Accessed on 2021-05-15.

[10] “Scrum agile method.” Available on the site: https://www.cprime.com/resources/what-is-agile-what-is-scrum/. Accessed on 2021-05-16.

[11] S. Vadapalli, DevOps: continuous delivery, integration, and deployment with DevOps: dive into the core DevOps strategies. Packt Publishing Ltd, 2018.

[12] K. Ambily, “Devops basics and variations,” in Azure DevOps for Web Developers, pp. 1–11, Springer, 2020.

[13] H. Saito, H.-C. C. Lee, and C.-Y. Wu, DevOps with Kubernetes: accelerating software delivery with container orchestrators. Packt Publishing Ltd, 2019.

[14] J. Humble and D. Farley, Continuous delivery: reliable software releases through build, test, and deployment automation. Pearson Education, 2010.

[15] S. Pittet, “Continuous integration vs. continuous delivery vs. continuous deployment.” Atlassian. Available on the site: https://www.atlassian.com/continuous-delivery/ci-vs-ci-vs-cd. Accessed on 2021-06-01.

[16] M. Hering, DevOps for the Modern Enterprise: Winning Practices to Transform Legacy IT Organizations. IT Revolution, 2018.

[17] J. Arundel and J. Domingus, Cloud Native DevOps with Kubernetes: building, deploying, and scaling modern applications in the Cloud. O’Reilly Media, 2019.

[18] Chen, “Continuous delivery: Huge benefits, but challenges too,” IEEE Software, vol. 128, no. 1, pp. 72–86, 2015.

[19] D. Yan, A. A. Von Davier, and C. Lewis, Computerized multistage testing: Theory and applications. CRC Press, 2016.

[20] J.-M. Saponaro, “Monitoring in the kubernetes era.” Available on the site: https://www.datadoghq.com/blog/monitoring-kubernetes-era/. Accessed on 2021-06-19.

[21] “Virtualization.” Available on the site: https://www.ibm.com/cloud/learn/virtualization-a-complete-guide. Accessed on 2021-05-28.

[22] “Kubernetes architecture.” Available on the site: https://kubernetes.io/fr/docs/concepts/architecture. Accessed on 2021-04-17.

[23] “Gitlab.” Available on the site: https://docs.gitlab.com/. Accessed on 2021-06-03.

[24] “Keptn: Cloud-native application life-cycle orchestration.” https://keptn.sh/.

[25] “Istio: The istio service mesh.” https://istio.io/.

[26] “What is docker.” Available on the site: https://opensource.com/resources/what-docker. Accessed on 2021-06-06.

[27] “Helm: The kubernetes package manager.” https://github.com/helm/helm.

[28] “Kubernetes vs mesos vs swarm.” Available on the site: https://www.sumologic.com/insight/kubernetes-vs-mesos-vs-swarm. Accessed on 2021-06-15.

[29] “Helm.” Available on the site: https://helm.sh/docs/topics/charts/. Accessed on 2021-05-18.

[30] “Snyk: The developer security platform.” https://snyk.io/what-is-snyk/.

[31] “Nagios: The industry standard in it infrastructure monitoring.” https://www.nagios.org/.

[32] “Zabbix: The enterprise-class open source network monitoring solution.” https://www.zabbix.com/.

[33] “Grafana: The open observability platform.” https://grafana.com/.
