DevOps Cheat Sheet
2. Version Control
● Git
● GitHub – Cloud-based Git repository hosting
● GitLab – Git repository with built-in CI/CD pipelines
● Bitbucket – Git repository with Jira integration
4. Infrastructure as Code
● Terraform
● CloudFormation (stacks, templates)
5. Cloud Services
● AWS
● Azure
● GCP
6. Configuration Management
9. Networking, Ports & Load Balancing
● Networking Basics
● Ports
● Nginx (reverse proxy & load balancing)
● Apache (reverse proxy, load balancing)
● HAProxy (load balancing)
● Kubernetes Ingress Controller (for managing external traffic)
● Practical examples: Docker for Nginx, Apache, HAProxy, and Kubernetes Ingress
2. NoSQL Databases
1. File Management
pwd # Print the current working directory
top # Live view of processes and resource usage
df -h # Human-readable format
uptime # Show system load averages and uptime
3. Package Management (Ubuntu/Debian)
ping google.com # Check connectivity to a host
curl https://example.com # Fetch a URL
ifconfig # Show network interfaces (deprecated; prefer `ip a`)
6. Process Management
7. Disk Management
8. Text Processing
awk '{print $1}' file.txt # Print the first column of each line
rsync -avz /source /destination # Sync with compression and archive mode
11. Shell Scripting
15. Others
find /var -name "*.log" # Find all .log files under /var
● tee: Read from standard input and write to standard output and files.
echo "new data" | tee file.txt # Write output to file and terminal
env # List all environment variables
2. Shell Scripting
1. Automating Server Provisioning (AWS EC2 Launch)
#!/bin/bash
# Variables
INSTANCE_TYPE="t2.micro"
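# Sketch of the launch call (AMI ID and key pair below are placeholders,
# not from the original):
AMI_ID="ami-xxxxxxxx"
KEY_NAME="my-key"
aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type "$INSTANCE_TYPE" \
  --key-name "$KEY_NAME" \
  --count 1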
2. CPU Usage Monitoring
#!/bin/bash
CPU_THRESHOLD=80
CPU_USAGE=$(top -bn1 | awk '/Cpu\(s\)/ {print int($2 + $4)}')
if [ "$CPU_USAGE" -gt "$CPU_THRESHOLD" ]; then
echo "ALERT: CPU usage is ${CPU_USAGE}%"
fi
3. Backup Automation (MySQL Backup)
#!/bin/bash
# Variables
DB_USER="root"
DB_PASSWORD="password"
DB_NAME="my_database"
BACKUP_DIR="/backup"
DATE=$(date +%F)
mkdir -p $BACKUP_DIR
# Backup command
mysqldump -u $DB_USER -p$DB_PASSWORD $DB_NAME > $BACKUP_DIR/backup_$DATE.sql
gzip $BACKUP_DIR/backup_$DATE.sql
#!/bin/bash
# Variables
LOG_DIR="/var/log/myapp"
ARCHIVE_DIR="/var/log/myapp/archive"
DAYS_TO_KEEP=30
mkdir -p $ARCHIVE_DIR
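# Sketch of the rotation logic: compress day-old logs into the archive,
# then drop archives past the retention window
find "$LOG_DIR" -maxdepth 1 -name "*.log" -mtime +1 | while read -r f; do
  gzip "$f" && mv "$f.gz" "$ARCHIVE_DIR/"
done
find "$ARCHIVE_DIR" -name "*.gz" -mtime +"$DAYS_TO_KEEP" -delete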
#!/bin/bash
# Jenkins details
JENKINS_URL="http://jenkins.example.com"
JOB_NAME="my-pipeline-job"
USER="your-username"
API_TOKEN="your-api-token"
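# Trigger the job via the Jenkins REST API (a CSRF crumb may also be
# required depending on the Jenkins configuration)
curl -X POST "$JENKINS_URL/job/$JOB_NAME/build" --user "$USER:$API_TOKEN"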
#!/bin/bash
# Variables
NAMESPACE="default"
DEPLOYMENT_NAME="my-app"
IMAGE="my-app:v1.0"
# Deploy to Kubernetes
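# Sketch: assumes the container inside the deployment is named after the deployment
kubectl -n "$NAMESPACE" set image deployment/"$DEPLOYMENT_NAME" "$DEPLOYMENT_NAME=$IMAGE"
kubectl -n "$NAMESPACE" rollout status deployment/"$DEPLOYMENT_NAME"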
#!/bin/bash
# Variables
TF_DIR="/path/to/terraform/config"
cd $TF_DIR
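# Initialize and apply the configuration non-interactively
terraform init
terraform apply -auto-approve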
bash
#!/bin/bash
# Variables
DB_USER="postgres"
DB_PASSWORD="password"
DB_NAME="my_database"
MIGRATION_FILE="/path/to/migration.sql"
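# Apply the migration (assumes psql is installed and the database is reachable)
PGPASSWORD="$DB_PASSWORD" psql -U "$DB_USER" -d "$DB_NAME" -f "$MIGRATION_FILE"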
#!/bin/bash
# Variables
USER_NAME="newuser"
GROUP_NAME="devops"
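# Create the group if missing, then the user with that primary group
getent group "$GROUP_NAME" >/dev/null || sudo groupadd "$GROUP_NAME"
sudo useradd -m -g "$GROUP_NAME" "$USER_NAME"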
#!/bin/bash
OPEN_PORTS=$(netstat -tuln)
# Example check (condition assumed): warn when SSH is not listening
if echo "$OPEN_PORTS" | grep -q ":22 "; then
echo "SSH port 22 is open"
else
echo "WARNING: SSH port 22 is not listening"
fi
This script clears memory caches and restarts services to free up system resources.
#!/bin/bash
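# Sketch: flush filesystem caches (requires root) and restart an assumed service
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
sudo systemctl restart myapp.service   # placeholder service name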
#!/bin/bash
# Run the project's test suites (Python and Java)
pytest tests/
mvn test
This script automatically scales EC2 instances in an Auto Scaling group based on
CPU usage.
#!/bin/bash
fi
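# Sketch: raise the desired capacity of an assumed Auto Scaling group
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name my-asg \
  --desired-capacity 3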
#!/bin/bash
# Select configuration based on an assumed ENVIRONMENT variable
if [ "$ENVIRONMENT" = "production" ]; then
export DB_HOST="prod-db.example.com"
export API_KEY="prod-api-key"
elif [ "$ENVIRONMENT" = "staging" ]; then
export DB_HOST="staging-db.example.com"
export API_KEY="staging-api-key"
else
export DB_HOST="dev-db.example.com"
export API_KEY="dev-api-key"
fi
15. Error Handling and Alerts
This script checks logs for errors and sends a Slack notification if an error is found.
#!/bin/bash
fi
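# Minimal sketch (log path and webhook variable are assumptions)
LOG_FILE="/var/log/myapp/app.log"
if grep -q "ERROR" "$LOG_FILE"; then
  curl -X POST -H 'Content-type: application/json' \
    --data "{\"text\":\"Errors found in $LOG_FILE\"}" "$SLACK_WEBHOOK_URL"
fi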
This script installs Docker if it's not already installed on the system.
#!/bin/bash
# Install Docker if missing
if ! command -v docker >/dev/null 2>&1; then
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
fi
17. Configuration Management
This script updates configuration files (like nginx.conf) across multiple servers.
#!/bin/bash
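# Sketch: push an updated nginx.conf to several servers (hostnames assumed)
SERVERS="web1 web2 web3"
for server in $SERVERS; do
  scp nginx.conf "user@$server:/etc/nginx/nginx.conf"
  ssh "user@$server" "sudo nginx -t && sudo systemctl reload nginx"
done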
This script checks the health of multiple web servers by making HTTP requests.
#!/bin/bash
SERVERS="web1.example.com web2.example.com"  # assumed server list
for server in $SERVERS; do
curl -s --head http://$server | head -n 1 | grep "HTTP/1.1 200 OK" > /dev/null
if [ $? -ne 0 ]; then
echo "$server is DOWN"
else
echo "$server is healthy"
fi
done
This script removes files older than 30 days from the /tmp directory to free up disk
space.
#!/bin/bash
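# Delete regular files in /tmp untouched for more than 30 days
find /tmp -type f -mtime +30 -delete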
This script automatically reboots the server during off-hours (between 2 AM and 4
AM).
#!/bin/bash
HOUR=$(date +%H)
if [ "$HOUR" -ge 2 ] && [ "$HOUR" -lt 4 ]; then
sudo reboot
fi
This script renews SSL certificates using certbot and reloads the web server.
#!/bin/bash
certbot renew --quiet
systemctl reload nginx # reload the web server (nginx assumed)
This script checks the CPU usage of a Docker container and scales it based on
usage.
#!/bin/bash
fi
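# Sketch (container name, threshold, and Swarm service below are assumptions)
CPU=$(docker stats --no-stream --format '{{.CPUPerc}}' my-container | tr -d '%')
if (( $(echo "$CPU > 80" | bc -l) )); then
  docker service scale my-service=3
fi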
This script verifies the integrity of backup files and reports any corrupted ones.
#!/bin/bash
for backup in /backup/*.gz; do
if ! gzip -t "$backup" 2>/dev/null; then
echo "Backup $backup is CORRUPTED"
else
echo "Backup $backup is valid"
fi
done
This script removes unused Docker images, containers, and volumes to save disk
space.
#!/bin/bash
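# One command covers stopped containers, unused images, networks, and volumes
docker system prune -af --volumes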
This script pulls the latest changes from a Git repository and creates a release tag.
#!/bin/bash
# Pull latest changes from Git repository and create a release tag
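git pull origin main
# Tag scheme below is illustrative (date-based)
TAG="v$(date +%Y.%m.%d)"
git tag -a "$TAG" -m "Release $TAG"
git push origin "$TAG"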
This script reverts to the previous Docker container image if a deployment fails.
#!/bin/bash
# Health-check the new deployment (endpoint is an assumption)
curl -fs http://localhost/health > /dev/null
if [ $? -ne 0 ]; then
docker-compose down
# Roll back to the previously tagged image (tag name is an assumption)
docker tag my-app:previous my-app:latest
docker-compose up -d
fi
This script collects logs from multiple servers and uploads them to an S3 bucket.
#!/bin/bash
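# Sketch (host list and bucket name below are placeholders)
SERVERS="web1 web2"
mkdir -p /tmp/logs
for server in $SERVERS; do
  rsync -az "user@$server:/var/log/myapp/" "/tmp/logs/$server/"
done
aws s3 sync /tmp/logs/ s3://my-log-bucket/logs/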
This script checks for available security patches and applies them automatically.
#!/bin/bash
#!/bin/bash
fi
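# Debian/Ubuntu flow (assumed); unattended-upgrades can restrict this to
# security-only updates
sudo apt-get update
sudo apt-get upgrade -y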
Introduction: This script automates the process of updating DNS records in AWS
Route 53 when the IP address of a server changes. It ensures that DNS records are
updated dynamically when new servers are provisioned.
#!/bin/bash
# Variables
ZONE_ID="your-hosted-zone-id"
DOMAIN_NAME="your-domain.com"
NEW_IP="your-new-ip-address"
# Upsert the A record in Route 53
aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID \
--change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "'$DOMAIN_NAME'",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{ "Value": "'$NEW_IP'" }]
    }
  }]
}'
Introduction: This script runs ESLint and Prettier to check and automatically
format JavaScript code before deployment. It ensures code quality and consistency.
#!/bin/bash
# Run ESLint
npx eslint . --fix
# Run Prettier
npx prettier --write .
Introduction: This script checks that an API endpoint responds with HTTP 200.
#!/bin/bash
# API URL
API_URL="https://your-api-endpoint.com/endpoint"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$API_URL")
if [ "$STATUS" -eq 200 ]; then
echo "API is up"
else
echo "API check failed (HTTP $STATUS)"
fi
Introduction: This script scans Docker images for known vulnerabilities using
Trivy. It ensures that only secure images are deployed in production.
#!/bin/bash
# Image to scan
IMAGE_NAME="your-docker-image:latest"
trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE_NAME
if [ $? -eq 1 ]; then
echo "Vulnerabilities found in $IMAGE_NAME"
exit 1
else
echo "$IMAGE_NAME passed the scan"
fi
42. Disk Usage Monitoring and Alerts (Email Notification)
Introduction: This script monitors disk usage and sends an alert via email if the
disk usage exceeds a specified threshold. It helps in proactive monitoring of disk
space.
#!/bin/bash
THRESHOLD=80
USAGE=$(df / | awk 'NR==2 {print int($5)}')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
echo "Disk usage is at ${USAGE}%" | mail -s "Disk usage alert" [email protected]  # recipient assumed
fi
Introduction: This script runs load tests using Apache Benchmark (ab) to simulate
traffic on an application. It helps measure the performance and scalability of the
application.
bash
#!/bin/bash
# Target URL
URL="https://your-application-url.com"
ab -n 1000 -c 10 $URL
Introduction: This script generates a server health report using system commands
like top and sends it via email. It helps keep track of server performance and
health.
#!/bin/bash
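# Sketch: assemble a quick report and mail it (recipient and mail(1) are assumptions)
REPORT="/tmp/health_$(date +%F).txt"
{ uptime; free -h; df -h; top -bn1 | head -20; } > "$REPORT"
mail -s "Server health report" [email protected] < "$REPORT"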
Introduction: This script generates HTML documentation from Python code using
pdoc. It helps automate the process of creating up-to-date documentation from the
source code.
#!/bin/bash
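# Sketch: generate HTML docs with pdoc for an assumed package name
pip install pdoc
pdoc my_package -o docs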
crontab -l
crontab -e
EDITOR=nano crontab -e
# * * * * * command_to_execute
# ┬ ┬ ┬ ┬ ┬
# │ │ │ │ └─ day of week (0-7, Sunday = 0 or 7)
# │ │ │ └─── month (1-12)
# │ │ └───── day of month (1-31)
# │ └─────── hour (0-23)
# └───────── minute (0-59)
* * * * * /path/to/script.sh
*/5 * * * * /path/to/script.sh
# Run a script every 10 minutes
*/10 * * * * /path/to/script.sh
# Run a script daily at midnight
0 0 * * * /path/to/script.sh
# Run a script at the start of every hour
0 * * * * /path/to/script.sh
# Run a script every 2 hours
0 */2 * * * /path/to/script.sh
# Run a script at 3 AM every Sunday
0 3 * * 0 /path/to/script.sh
# Run a script at 9 AM on the 1st of every month
0 9 1 * * /path/to/script.sh
# Run a script at 6 PM on weekdays
0 18 * * 1-5 /path/to/script.sh
# Run a script on the first Monday of every month (day-of-week test in the command)
0 9 1-7 * * [ "$(date +\%u)" = "1" ] && /path/to/script.sh
# Run a script on specific dates (e.g., 1st and 15th of the month)
0 12 1,15 * * /path/to/script.sh
# Run a script hourly between 9 AM and 5 PM
0 9-17 * * * /path/to/script.sh
@reboot /path/to/script.sh
@daily /path/to/script.sh
@weekly /path/to/script.sh
@monthly /path/to/script.sh
# Run a script yearly at midnight on January 1st
@yearly /path/to/script.sh
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
0 5 * * * /path/to/script.sh
Python
Python Basics
1. File Operations
● Read a file:
python
with open('file.txt', 'r') as f:
    content = f.read()
● Write to a file:
python
with open('file.txt', 'w') as f:
    f.write('Hello, world')
2. Environment Variables
python
import os
db_user = os.getenv('DB_USER')
print(db_user)
python
import os
os.environ['NEW_VAR'] = 'value'
3. Subprocess Management
python
import subprocess
result = subprocess.run(['ls', '-l'], capture_output=True, text=True)
print(result.stdout)
4. API Requests
python
import requests
response = requests.get('https://api.example.com/data')
print(response.json())
5. JSON Handling
● Parse a JSON string:
python
import json
data = json.loads('{"key": "value"}')
● Serialize to a JSON string:
python
import json
text = json.dumps({'key': 'value'})
6. Logging
● Basic logging setup:
python
import logging
logging.basicConfig(level=logging.INFO)
logging.info('This is an informational message')
python
import sqlite3
conn = sqlite3.connect('example.db')
cursor = conn.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)')
conn.commit()
conn.close()
python
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('hostname', username='user', password='password')
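# Run a command over the session and print its output
stdin, stdout, stderr = ssh.exec_command('uptime')
print(stdout.read().decode())
ssh.close()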
● Try-except block:
python
try:
# code that may raise an exception
risky_code()
except Exception as e:
print(f'Error occurred: {e}')
python
import docker
client = docker.from_env()
containers = client.containers.list()
for container in containers:
print(container.name)
python
import yaml
with open('config.yaml') as f:
    config = yaml.safe_load(f)  # parse YAML
with open('out.yaml', 'w') as f:
    yaml.safe_dump(config, f)  # write YAML
● Using argparse:
python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--num', type=int, default=1)  # example flag (assumed)
args = parser.parse_args()
print(args.num)
python
import psutil
print(psutil.cpu_percent(interval=1))  # CPU usage over one second
python
from flask import Flask, jsonify
app = Flask(__name__)
@app.route('/health', methods=['GET'])
def health_check():
return jsonify({'status': 'healthy'})
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
python
import docker
client = docker.from_env()
container = client.containers.run('ubuntu', 'echo Hello World', detach=True)
print(container.logs())
python
import schedule
import time
def job():
print("Running scheduled job...")
schedule.every(1).minutes.do(job)
while True:
schedule.run_pending()
time.sleep(1)
17. Version Control with Git
python
import git
repo = git.Repo('/path/to/repo')
repo.git.add('file.txt')
repo.index.commit('Added file.txt')
python
import smtplib
from email.mime.text import MIMEText
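# Minimal send sketch (addresses and the local SMTP relay are assumptions)
msg = MIMEText('Deployment finished')
msg['Subject'], msg['From'], msg['To'] = 'Status', '[email protected]', '[email protected]'
with smtplib.SMTP('localhost') as server:
    server.send_message(msg)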
python
import os
import subprocess
# Create virtual environment
subprocess.run(['python3', '-m', 'venv', 'myenv'])
python
import requests
url = 'http://your-jenkins-url/job/your-job-name/build'
response = requests.post(url, auth=('user', 'token'))
print(response.status_code)
python
import unittest
def add(a, b):
return a + b
class TestMathFunctions(unittest.TestCase):
def test_add(self):
self.assertEqual(add(2, 3), 5)
if __name__ == '__main__':
unittest.main()
python
import pandas as pd
df = pd.read_csv('data.csv')
df['new_column'] = df['existing_column'] * 2
df.to_csv('output.csv', index=False)
python
import boto3
ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
print(instance.id, instance.state)
python
import requests
from bs4 import BeautifulSoup
response = requests.get('http://example.com')
soup = BeautifulSoup(response.content, 'html.parser')
print(soup.title.string)
python
import boto3
s3 = boto3.client('s3')
# Upload a file
s3.upload_file('local_file.txt', 'bucket_name', 's3_file.txt')
# Download a file
s3.download_file('bucket_name', 's3_file.txt', 'local_file.txt')
28. Monitoring Application Logs
python
import time
def tail_f(file):
file.seek(0, 2) # Move to the end of the file
while True:
line = file.readline()
if not line:
time.sleep(0.1) # Sleep briefly
continue
print(line)
python
import docker
client = docker.from_env()
container = client.containers.get('container_id')
print(container.attrs['State']['Health']['Status'])
import requests
import time
url = 'https://api.example.com/data'
while True:
response = requests.get(url)
if response.status_code == 200:
print(response.json())
break
elif response.status_code == 429: # Too Many Requests
time.sleep(60) # Wait a minute before retrying
else:
print('Error:', response.status_code)
break
python
import os
import subprocess
# Stop services
subprocess.run(['docker-compose', 'down'])
python
import subprocess
# Initialize Terraform
subprocess.run(['terraform', 'init'])
# Apply configuration
subprocess.run(['terraform', 'apply', '-auto-approve'])
python
import requests
response = requests.get('http://localhost:9090/metrics')
metrics = response.text.splitlines()
python
def test_add():
assert add(2, 3) == 5
python
from flask import Flask, request
app = Flask(__name__)
@app.route('/webhook', methods=['POST'])
def webhook():
data = request.json
print('Received data:', data)
return 'OK', 200
if __name__ == '__main__':
app.run(port=5000)
python
from cryptography.fernet import Fernet
# Generate a key
key = Fernet.generate_key()
cipher_suite = Fernet(key)
# Encrypt
encrypted_text = cipher_suite.encrypt(b'Secret Data')
# Decrypt
decrypted_text = cipher_suite.decrypt(encrypted_text)
print(decrypted_text.decode())
python
import sentry_sdk
sentry_sdk.init('your_sentry_dsn')
try:
    1 / 0  # example failing operation
except ZeroDivisionError as e:
    sentry_sdk.capture_exception(e)
yaml
name: CI
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install dependencies
run: |
pip install -r requirements.txt
- name: Run tests
run: |
pytest
python
from fastapi import FastAPI
app = FastAPI()
@app.get('/items/{item_id}')
async def read_item(item_id: int):
return {'item_id': item_id}
if __name__ == '__main__':
import uvicorn
uvicorn.run(app, host='0.0.0.0', port=8000)
41. Log Aggregation with ELK Stack
python
from elasticsearch import Elasticsearch
es = Elasticsearch(['http://localhost:9200'])
python
import pandas as pd
# Extract
data = pd.read_csv('source.csv')
# Transform
data['new_column'] = data['existing_column'].apply(lambda x: x * 2)
# Load
data.to_csv('destination.csv', index=False)
python
import redis
# Set a key
r.set('foo', 'bar')
# Get a key
print(r.get('foo'))
python
from flask import Flask
from flask_restful import Resource, Api
app = Flask(__name__)
api = Api(app)
class HelloWorld(Resource):
def get(self):
return {'hello': 'world'}
api.add_resource(HelloWorld, '/')
if __name__ == '__main__':
app.run(debug=True)
python
import asyncio
async def main():
    print('Hello from asyncio')  # placeholder coroutine
asyncio.run(main())
python
from scapy.all import sniff
def packet_callback(packet):
print(packet.summary())
sniff(prn=packet_callback, count=10)
import configparser
config = configparser.ConfigParser()
config.read('config.ini')
print(config['DEFAULT']['SomeSetting'])
config['DEFAULT']['NewSetting'] = 'Value'
with open('config.ini', 'w') as configfile:
config.write(configfile)
python
import websocket
def on_message(ws, message):
    print(message)
ws = websocket.WebSocketApp("ws://echo.websocket.org",
on_message=on_message)
ws.run_forever()
python
import docker
client = docker.from_env()
# Dockerfile content
dockerfile_content = """
FROM python:3.9-slim
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
"""
python
import psutil
print(psutil.disk_usage('/').percent)  # disk usage percentage
python
from alembic import command, config
alembic_cfg = config.Config("alembic.ini")
command.upgrade(alembic_cfg, "head")
python
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('hostname', username='user', password='your_password')
python
import boto3
cloudformation = boto3.client('cloudformation')
template_body = open('template.yaml').read()  # assumed local template file
response = cloudformation.create_stack(
StackName='MyStack',
TemplateBody=template_body,
Parameters=[
{
'ParameterKey': 'InstanceType',
'ParameterValue': 't2.micro'
},
],
TimeoutInMinutes=5,
Capabilities=['CAPABILITY_NAMED_IAM'],
)
print(response)
python
import boto3
ec2 = boto3.resource('ec2')
# Start an instance
instance = ec2.Instance('instance_id')
instance.start()
# Stop an instance
instance.stop()
python
import shutil
import os
source_dir = '/path/to/source'
backup_dir = '/path/to/backup'
shutil.copytree(source_dir, backup_dir)
python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
class MyHandler(FileSystemEventHandler):
    def on_modified(self, event):
        print('Modified:', event.src_path)
event_handler = MyHandler()
observer = Observer()
observer.schedule(event_handler, path='path/to/monitor', recursive=False)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
python
from locust import HttpUser, task, between
class MyUser(HttpUser):
wait_time = between(1, 3)
@task
def load_test(self):
self.client.get('/')
python
import requests
url = 'https://api.github.com/repos/user/repo'
response = requests.get(url, headers={'Authorization': 'token YOUR_GITHUB_TOKEN'})
repo_info = response.json()
print(repo_info)
python
import subprocess
# Get pods
subprocess.run(['kubectl', 'get', 'pods'])
# Apply a configuration
subprocess.run(['kubectl', 'apply', '-f', 'deployment.yaml'])
python
# test_example.py
def test_addition():
assert 1 + 1 == 2
python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('integers', type=int, nargs='+')
parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum, default=max)
args = parser.parse_args()
print(args.accumulate(args.integers))
python
import os
from dotenv import load_dotenv
load_dotenv()
database_url = os.getenv('DATABASE_URL')
print(database_url)
65. Implementing Web Scraping with BeautifulSoup
python
import requests
from bs4 import BeautifulSoup
response = requests.get('http://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
python
import yaml
config = yaml.safe_load(open('config.yaml'))  # parse a YAML file
python
import pika
# Sending messages
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello!')
# Receiving messages
def callback(ch, method, properties, body):
    print("Received:", body)
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_consume(queue='hello', on_message_callback=callback,
auto_ack=True)
channel.start_consuming()
python
import sentry_sdk
sentry_sdk.init("YOUR_SENTRY_DSN")
try:
# Your code that may throw an exception
1/0
except Exception as e:
sentry_sdk.capture_exception(e)
python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
engine = create_engine('sqlite:///example.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
# Create
new_user = User(name='Alice')
session.add(new_user)
python
import docker
client = docker.from_env()
containers = client.containers.list()
python
from flask import Flask, jsonify
app = Flask(__name__)
@app.route('/api/data', methods=['GET'])
def get_data():
return jsonify({"message": "Hello, World!"})
if __name__ == '__main__':
app.run(debug=True)
python
import subprocess
# Renew certificates
subprocess.run(['certbot', 'renew'])
python
import numpy as np
print(np.mean([1, 2, 3, 4]))  # simple aggregate example
python
import smtplib
from email.mime.text import MIMEText
sender = '[email protected]'
recipient = '[email protected]'
msg = MIMEText('This is a test email.')
msg['Subject'] = 'Test Email'
msg['From'] = sender
msg['To'] = recipient
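# Send step (assumes a local SMTP relay)
with smtplib.SMTP('localhost') as server:
    server.send_message(msg)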
python
import schedule
import time
def job():
print("Job is running...")
schedule.every(10).minutes.do(job)
while True:
schedule.run_pending()
time.sleep(1)
python
import matplotlib.pyplot as plt
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]
plt.plot(x, y)
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.title('Simple Plot')
plt.show()
markdown
my_package/
├── __init__.py
├── module1.py
└── module2.py
python
from setuptools import setup, find_packages
setup(
name='my_package',
version='0.1',
packages=find_packages(),
install_requires=[
'requests',
'flask'
],
)
79. Using pytest for Unit Testing
python
# test_sample.py
def add(a, b):
return a + b
def test_add():
assert add(1, 2) == 3
python
from requests_oauthlib import OAuth1Session
oauth = OAuth1Session(client_key='YOUR_CLIENT_KEY',
client_secret='YOUR_CLIENT_SECRET')
response = oauth.get('https://api.example.com/user')
print(response.json())
python
import pandas as pd
df = pd.read_csv('data.csv')
print(df.head())
# Filter data
filtered_df = df[df['column_name'] > 10]
print(filtered_df)
python
import requests
# GET request
response = requests.get('https://api.example.com/data')
print(response.json())
# POST request
data = {'key': 'value'}
response = requests.post('https://api.example.com/data', json=data)
print(response.json())
python
from http.server import HTTPServer, SimpleHTTPRequestHandler
PORT = 8000
handler = SimpleHTTPRequestHandler
HTTPServer(('', PORT), handler).serve_forever()
python
from flask import Flask, request
app = Flask(__name__)
@app.route('/webhook', methods=['POST'])
def webhook():
data = request.json
print(data)
return '', 200
if __name__ == '__main__':
app.run(port=5000)
python
import subprocess
subprocess.run(['docker-compose', 'up', '-d'])
python
import boto3
from moto import mock_s3
@mock_s3
def test_s3_upload():
s3 = boto3.client('s3', region_name='us-east-1')
s3.create_bucket(Bucket='my-bucket')
s3.upload_file('file.txt', 'my-bucket', 'file.txt')
# Test logic here
python
import asyncio
async def main():
    print('Hello from asyncio')  # placeholder coroutine
asyncio.run(main())
89. Using flask-cors for Cross-Origin Resource Sharing
python
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
@app.route('/data', methods=['GET'])
def data():
return {"message": "Hello from CORS!"}
if __name__ == '__main__':
app.run()
python
import pytest
@pytest.fixture
def sample_data():
data = {"key": "value"}
yield data # This is the test data
# Teardown code here (if necessary)
def test_sample_data(sample_data):
assert sample_data['key'] == 'value'
python
import http.client
conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
data = response.read()
conn.close()
python
import redis
r = redis.Redis(host='localhost', port=6379)
print(r.ping())  # verify connectivity
python
import json
print(json.dumps({'status': 'ok'}))  # serialize a dict to JSON
python
import xml.etree.ElementTree as ET
tree = ET.parse('data.xml')
root = tree.getroot()
python
import venv
venv.create('myenv', with_pip=True)
python
import psutil
memory = psutil.virtual_memory()
print(f'Total Memory: {memory.total}, Available Memory: {memory.available}')
97. Using sqlite3 for Lightweight Database Management
python
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
conn.close()
python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('integers', type=int, nargs='+')
parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum, default=max)
args = parser.parse_args()
print(args.accumulate(args.integers))
python
from jsonschema import validate, ValidationError
data = {"name": "Alice", "age": 30}  # example payload (illustrative)
schema = {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer", "minimum": 0}
},
"required": ["name", "age"]
}
try:
validate(instance=data, schema=schema)
print("Data is valid")
except ValidationError as e:
print(f"Data is invalid: {e.message}")
2. Version Control
● Git
7. Undoing Changes
git config --global alias.st status # Create alias for status command
git config --global alias.co checkout # Create alias for checkout command
git config --global alias.br branch # Create alias for branch command
git config --global alias.cm commit # Create alias for commit command
git config --list | grep alias # View all configured aliases
2. GitHub
Authentication & Configuration
gh auth logout – Log out of GitHub CLI
gh config set editor <editor> – Set the default editor (e.g., nano, vim)
Repository Management
Webhooks:
3. GitLab
Commands
Webhooks:
○ Go to Settings → Webhooks
○ Select triggers: Push events, Tag push, Merge request, etc.
○ Use GitLab CI/CD with .gitlab-ci.yml
4. Bitbucket
Commands
Repository Management
Branch Management
Pipeline Management
Issue Tracking
Webhooks:
pipeline {
agent any
environment {
APP_ENV = 'production'
}
stages {
stage('Checkout') {
steps {
git 'https://github.com/your-repo.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
stage('Test') {
steps {
sh 'mvn test'
}
}
stage('Deploy') {
steps {
sh 'scp target/*.jar user@server:/deploy/'
}
}
}
}
node {
stage('Checkout') {
git 'https://github.com/your-repo.git'
}
stage('Build') {
sh 'mvn clean package'
}
stage('Test') {
sh 'mvn test'
}
stage('Deploy') {
sh 'scp target/*.jar user@server:/deploy/'
}
}
triggers {
cron('H 4 * * *') // Run at 4 AM every day
}
triggers {
pollSCM('H/5 * * * *') // Check SCM every 5 minutes
}
pipeline {
agent any
stages {
stage('Clone Repository') {
steps {
git 'https://github.com/user/repo.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
stage('Test') {
steps {
sh 'mvn test'
}
}
stage('Deploy') {
steps {
sh 'scp target/app.jar user@server:/deploy/path'
}
}
}
}
3. Kubernetes Deployment
groovy
pipeline {
agent any
stages {
stage('Deploy to Kubernetes') {
steps {
sh 'kubectl apply -f k8s/deployment.yaml'
}
}
}
}
4. Terraform Deployment
groovy
pipeline {
agent any
stages {
stage('Terraform Init') {
steps {
sh 'terraform init'
}
}
stage('Terraform Apply') {
steps {
sh 'terraform apply -auto-approve'
}
}
}
}
pipeline {
agent any
stages {
stage('Scan with Trivy') {
steps {
sh 'trivy image my-app:latest'
}
}
}
}
pipeline {
agent any
environment {
SONAR_TOKEN = credentials('sonar-token')
}
stages {
stage('SonarQube Analysis') {
steps {
withSonarQubeEnv('SonarQube') {
sh 'mvn sonar:sonar -Dsonar.login=$SONAR_TOKEN'
}
}
}
}
}
GitHub Actions allows automation for CI/CD pipelines directly within GitHub
repositories.
Trigger a workflow via the REST API:
curl -X POST \
-H "Authorization: Bearer <YOUR_TOKEN>" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/<owner>/<repo>/actions/workflows/<workflow_file>/dispatches \
-d '{"ref":"main"}'
on:
push:
branches:
- main
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
on:
push:
branches:
- main
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
🔹 Kubernetes Deployment
📌 File: .github/workflows/k8s.yml
name: Deploy to Kubernetes
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
🔹 Terraform Deployment
📌 File: .github/workflows/terraform.yml
name: Terraform Deployment
on:
push:
branches:
- main
jobs:
terraform:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
on:
push:
branches:
- main
jobs:
scan:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
on:
push:
branches:
- main
jobs:
sonar:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
build:
stage: build
script:
- echo "Building application..."
- mvn clean package
artifacts:
paths:
- target/*.jar
test:
stage: test
script:
- echo "Running tests..."
- mvn test
deploy:
stage: deploy
script:
- echo "Deploying application..."
- scp target/*.jar user@server:/deploy/path
only:
- main
stages:
- build
- push
build:
stage: build
script:
- docker build -t $IMAGE_NAME:latest .
only:
- main
push:
stage: push
script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
- docker push $IMAGE_NAME:latest
only:
- main
🔹 Kubernetes Deployment
📌 File: .gitlab-ci.yml
stages:
- deploy
deploy:
stage: deploy
image: bitnami/kubectl
script:
- kubectl apply -f k8s/deployment.yaml
only:
- main
🔹 Terraform Deployment
📌 File: .gitlab-ci.yml
image: hashicorp/terraform:latest
stages:
- terraform
terraform:
stage: terraform
script:
- terraform init
- terraform apply -auto-approve
only:
- main
security_scan:
stage: security_scan
script:
- docker pull registry.gitlab.com/your-namespace/your-repo:latest
- trivy image registry.gitlab.com/your-namespace/your-repo:latest
only:
- main
stages:
- analysis
sonarqube:
stage: analysis
script:
- mvn sonar:sonar -Dsonar.login=$SONAR_TOKEN
only:
- main
🔹 AWS S3 Upload
📌 File: .gitlab-ci.yml
stages:
- deploy
deploy_s3:
stage: deploy
script:
- aws s3 sync . s3://my-bucket-name --delete
only:
- main
environment:
name: production
🔹 Notify on Slack
📌 File: .gitlab-ci.yml
notify:
stage: notify
script:
- curl -X POST -H 'Content-type: application/json' --data '{"text":"Deployment completed successfully!"}' $SLACK_WEBHOOK_URL
only:
- main
🔹 Tekton
🔹 What is Tekton?
Tekton is a Kubernetes-native CI/CD framework that allows you to create and
run pipelines for automating builds, testing, security scans, and deployments. It
provides reusable components such as Tasks, Pipelines, and PipelineRuns,
making it ideal for cloud-native DevOps workflows.
🔹 Tekton Basics
● Tasks: The smallest execution unit in Tekton.
● Pipelines: A sequence of tasks forming a CI/CD process.
● PipelineRuns: Executes a pipeline.
● TaskRuns: Executes a task.
● Workspaces: Used for sharing data between tasks.
● Resources: Defines input/output artifacts (e.g., Git repositories, images).
🔹 Install Tekton on Kubernetes
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
Verify installation:
Commands:
# Install Tekton Pipelines
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
Apply:
Apply:
🔹 Tekton PipelineRun
📌 File: pipelinerun.yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: sample-pipelinerun
spec:
pipelineRef:
name: sample-pipeline
🔹 Notify on Slack
📌 File: task-slack.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: slack-notify
spec:
steps:
- name: send-slack-message
image: curlimages/curl:latest
script: |
#!/bin/sh
curl -X POST -H 'Content-type: application/json' --data '{"text":"Deployment completed successfully!"}' $SLACK_WEBHOOK_URL
Circle CI
Introduction
Installation
● Sign up at CircleCI
● Connect your repository (GitHub, Bitbucket)
● Configure the .circleci/config.yml file in your project
# Define jobs
jobs:
build:
docker:
- image: circleci/python:3.8 # Use a Python 3.8 Docker image
steps:
- checkout # Check out the code
- run:
name: Install dependencies
command: pip install -r requirements.txt
- run:
name: Run tests
command: pytest
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy application
command: ./deploy.sh # Custom deploy script
# Define workflows (Job execution order)
workflows:
version: 2
build_and_deploy:
jobs:
- build
- deploy:
requires:
- build # Ensure deployment happens after build succeeds
jobs:
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy to Production
command: ./deploy_production.sh
workflows:
version: 2
deploy_to_production:
jobs:
- deploy:
filters:
branches:
only: main # Deploy only on the 'main' branch
2. Caching Dependencies to Speed Up Builds
jobs:
build:
docker:
- image: circleci/python:3.8
steps:
- checkout
- restore_cache:
keys:
- v1-dependencies-{{ checksum "requirements.txt" }}
- run:
name: Install dependencies
command: pip install -r requirements.txt
- save_cache:
paths:
- ~/.cache/pip # Save pip cache
key: v1-dependencies-{{ checksum "requirements.txt" }}
jobs:
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy using environment variables
command: ./deploy.sh
environment:
API_KEY: $API_KEY # Use stored API keys
jobs:
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy Application
command: ./deploy.sh
filters:
branches:
only: main
requires:
- build
when:
changes:
- Dockerfile # Only run deploy if the Dockerfile changes
jobs:
test:
docker:
- image: circleci/python:3.8
parallelism: 4 # Run 4 test jobs in parallel
steps:
- checkout
- run:
name: Run tests
command: pytest
jobs:
build:
docker:
- image: circleci/python:3.8
- image: circleci/postgres:13 # Additional container for PostgreSQL
environment:
POSTGRES_USER: circleci
steps:
- checkout
- run:
name: Install dependencies
command: pip install -r requirements.txt
- run:
name: Run database migrations
command: python manage.py migrate
- run:
name: Run tests
command: pytest
jobs:
manual_deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy to Production
command: ./deploy.sh
# Note: CircleCI gates manual deploys with an approval job in the
# workflow (type: approval) rather than a `when: manual` key
ArgoCD (GitOps)
Introduction
Installation
macOS:
brew install argocd
Linux:
curl -sSL -o /usr/local/bin/argocd
https://github.com/argoproj/argo-cd/releases/download/v2.5.4/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
Argo CD Commands
Login to Argo CD via CLI
argocd login <ARGOCD_SERVER> --username admin --password <password>
argocd app refresh <app-name>
Managing Projects
Create a Project
argocd proj create <project-name> \
--description "<description>" \
--dest-namespace <namespace> \
--dest-server <server-url>
List Projects
argocd proj list
List Repositories
argocd repo list
Best Practices
Introduction
Flux CD is a GitOps tool for Kubernetes that automates deployment, updates, and
rollback of applications using Git as the source of truth.
Installation
Install Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash
Verify Installation
flux --version
General Commands
Managing Deployments
flux get sources git # List Git sources
flux get kustomizations # List kustomizations
flux reconcile kustomization <name> # Force sync a kustomization
flux suspend kustomization <name> # Pause updates for a kustomization
flux resume kustomization <name> # Resume updates for a kustomization
Uninstall Flux
flux uninstall --silent
Commands:
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o
/usr/share/keyrings/hashicorp-archive-keyring.gpg
Commands:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o
"awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
3. Kubectl Installation on Ubuntu:
Commands:
curl -LO "https://dl.k8s.io/release/$(curl -L -s
https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = "us-west-2"
}
resource "aws_instance" "app_server" {
ami = "ami-08d70e59c07c61a3a"
instance_type = "t2.micro"
tags = {
Name = var.instance_name
}
}
Example:
hcl
variable "instance_name" {
type = string
default = "ExampleAppServerInstance"
}
Example:
hcl
output "instance_id" {
value = aws_instance.app_server.id
}
output "instance_public_ip" {
value = aws_instance.app_server.public_ip
}
terraform init
1. Provider Configuration:
provider "aws" {
region = "us-west-2"
}
2. Resource Creation:
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
tags = {
Name = "ExampleInstance"
}
}
3. Variable Management:
variable "region" {
default = "us-west-2"
}
provider "aws" {
region = var.region
}
4. State Management:
terraform {
backend "s3" {
bucket = "my-tfstate-bucket"
key = "terraform/state"
region = "us-west-2"
encrypt = true
dynamodb_table = "terraform-locks"
}
}
5. Modules:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.0.0.0/16"
}
1. CloudFormation Concepts
Resources:
MyBucket:
Type: "AWS::S3::Bucket"
MyEC2Instance:
Type: "AWS::EC2::Instance"
Properties:
InstanceType: "t2.micro"
ImageId: "ami-0abcdef1234567890"
Outputs:
InstanceID:
Value: !Ref MyEC2Instance
Stack Operations
Drift Detection
Parameters:
InstanceType:
Type: String
Default: "t2.micro"
Mappings:
RegionMap:
us-east-1:
AMI: "ami-12345678"
us-west-1:
AMI: "ami-87654321"
Resources:
MyDatabase:
Type: "AWS::RDS::DBInstance"
Condition: IsProd
S3BucketName:
Value: !Ref MyBucket
Export:
Name: MyBucketExport
5. CloudFormation Troubleshooting
Basic Commands
Image Commands
Container Commands
Volume Commands
Dockerfile Commands
1. Minimize Layers: Combine RUN, COPY, and ADD commands to reduce
layers and image size.
2. Use Specific Versions: Always specify versions for base images (e.g.,
FROM python:3.9-slim).
3. .dockerignore: Use .dockerignore to exclude unnecessary files (e.g., .git,
node_modules).
4. Multi-Stage Builds: Separate the build process and runtime environment to
optimize image size.
5. Non-root User: Always create and use a non-root user for security.
6. Leverage Docker Cache: Copy dependencies first, so Docker can cache
them for faster builds.
1. Python (Flask/Django)
dockerfile
FROM python:3.9-slim
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
RUN addgroup --system appuser && adduser --system --ingroup appuser appuser
USER appuser
CMD ["python", "app.py"]   # entrypoint assumed
Best Practices:
2. Node.js
dockerfile
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["node", "app.js"]
Best Practices:
3. Java
dockerfile
FROM eclipse-temurin:17-jre   # base image and jar path assumed
WORKDIR /app
COPY target/app.jar app.jar
EXPOSE 8080
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["java", "-jar", "app.jar"]
Best Practices:
4. Ruby on Rails
dockerfile
FROM ruby:3.0-alpine
# Install dependencies
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["rails", "server", "-b", "0.0.0.0"]   # start command assumed
Best Practices:
5. Go
dockerfile
# Build stage (builder image is an assumption)
FROM golang:1.21-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o myapp .
# Runtime stage
FROM alpine:latest
WORKDIR /app
COPY --from=build /src/myapp .
EXPOSE 8080
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["./myapp"]
Best Practices:
6. Angular (Frontend)
dockerfile
# Build stage
FROM node:18-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build
# Serve stage (build output path is an assumption)
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
Best Practices:
7. PHP (Laravel)
dockerfile
FROM php:8.0-fpm
# Install dependencies
RUN apt-get update && apt-get install -y libzip-dev && docker-php-ext-install zip
WORKDIR /var/www/html
# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
COPY . .
RUN composer install --no-dev
EXPOSE 9000
RUN addgroup --system appuser && adduser --system --ingroup appuser appuser
USER appuser
CMD ["php-fpm"]
Best Practices:
● Minimize Image Size: Use smaller base images like alpine or slim, and
multi-stage builds to reduce the final image size.
● Use a Non-root User: Always run applications as a non-root user to enhance
security.
● Pin Versions: Avoid using the latest tag for images. Use specific versions to
ensure predictable builds.
● Leverage Caching: Place frequently changing files (e.g., source code) after
dependencies to take advantage of Docker's build cache.
● Avoid ADD Unless Necessary: Use COPY instead of ADD unless you need
to fetch files from a URL or extract archives.
services:
app:
image: my-app:latest
container_name: my_app
ports:
- "8080:80"
environment:
- NODE_ENV=production
volumes:
- ./app:/usr/src/app
depends_on:
- db
db:
image: postgres:latest
container_name: my_db
restart: always
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_DB: mydatabase
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
Key Directives
version: '3'
services:
service_name:
ports:
environment:
volumes:
- <host-path>:<container-path> # Mount volumes for persistent data
depends_on:
networks:
version: '3'
services:
web:
build: ./app
ports:
- "5000:5000"
environment:
- FLASK_APP=app.py
- FLASK_ENV=development
volumes:
- ./app:/app
networks:
- app_network
redis:
image: "redis:alpine"
networks:
- app_network
networks:
app_network:
driver: bridge
version: '3'
services:
app:
build: ./node-app
ports:
- "3000:3000"
environment:
- MONGO_URI=mongodb://mongo:27017/mydb
depends_on:
- mongo
networks:
- backend
mongo:
image: mongo:latest
volumes:
- mongo_data:/data/db
networks:
- backend
networks:
backend:
driver: bridge
volumes:
mongo_data:
version: '3'
services:
nginx:
image: nginx:alpine
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./html:/usr/share/nginx/html
ports:
- "8080:80"
depends_on:
- php
networks:
- frontend
php:
image: php:8.0-fpm
volumes:
- ./html:/var/www/html
networks:
- frontend
networks:
frontend:
driver: bridge
Best Practices
● Use Versioning: Always specify a version for Docker Compose files (e.g.,
version: '3')
● Define Volumes: Use named volumes for persistent data (e.g., database
storage)
● Environment Variables: Use environment variables for configuration (e.g.,
database connection strings)
● Use depends_on: Ensure proper start order for dependent services
● Custom Networks: Use custom networks for better service communication
management
● Avoid latest Tag: Always use specific version tags for predictable builds
Advanced Options
build:
context: .
args:
NODE_ENV: production
services:
web:
image: my-web-app
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost/"]
  interval: 30s
  retries: 3
● Kubernetes (K8s)
1. Kubernetes Basics
kubectl cluster-info – Display cluster information
kubectl get nodes – List all nodes in the cluster
kubectl get pods – List all pods in the current namespace
kubectl get services – List all services
kubectl get deployments – List all deployments
2. Managing Pods
3. Managing Deployments
4. Managing Services
5. Namespaces
kubectl get ns – List all namespaces
kubectl create namespace dev – Create a new namespace
kubectl delete namespace dev – Delete a namespace
7. Troubleshooting
Autoscaling
Kubernetes Debugging
1. Pod
yaml
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
2. Deployment
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
3. ReplicaSet
yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: my-replicaset
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: nginx
image: nginx:latest
ClusterIP (default)
yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 80
NodePort
yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30080
LoadBalancer
yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
5. ConfigMap
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
key1: value1
key2: value2
6. Secret
yaml
apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64-encoded "password"
yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/data
yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
9. Ingress
yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deployment
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
11. CronJob
yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: my-cronjob
spec:
  schedule: "*/5 * * * *"   # schedule assumed
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cron
            image: busybox
            command: ["echo", "hello"]   # command assumed
          restartPolicy: OnFailure
5. Cloud Services
● AWS
AWS CloudFormation
AWS CodePipeline
AWS CodeBuild
AWS CodeDeploy
Amazon VPC
Amazon Route 53
aws cloudtrail describe-trails # List AWS CloudTrail logs for security auditing
● Azure
2. Azure Storage
4. Azure Functions
6. Azure Networking
Load Balancer
Azure Pipelines
Azure Monitor
● GCP
Cloud Logging
Cloud Monitoring
Firewall Rules
Load Balancers
1. Ansible Basics
Check version:
ansible --version
Check inventory:
ansible-inventory --list -y
Custom inventory:
ansible -i inventory.ini all -m ping
[db]
3. Ad-Hoc Commands
Run as a specific user:
ansible all -m ping -u ubuntu --become
4. Playbook Structure
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Install curl   # second package assumed
      apt:
        name: curl
        state: present
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
      notify: Restart nginx
  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted
- name: Install packages
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - nginx
    - curl
    - git
Conditional execution:
- name: Restart service only if Nginx is installed
  service:
    name: nginx
    state: restarted
  when: "'nginx' in ansible_facts.packages"   # condition assumed (requires package_facts)
Create a role:
ansible-galaxy init my_role
Run a role in a playbook:
- hosts: web
roles:
- my_role
Debug a variable:
- debug:
    var: package_name
Ansible Playbook
1. Playbook Structure
- hosts: all
  become: yes
  tasks:
    - name: Print a message
      debug:
        msg: "Hello from Ansible"   # message assumed
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
      become_user: root
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
● Common Modules
○ command: Run commands (without a shell; use the shell module for pipes/redirects)
○ copy: Copy files
○ service: Manage services
○ user: Manage users
○ file: Set file permissions
4. Using Variables
vars:
  package_name: nginx
tasks:
  - name: Install the package
    apt:
      name: "{{ package_name }}"
      state: present
Load variables from a file:
- include_vars: vars.yml
5. Conditionals
- name: Restart Nginx only on Debian-family systems   # condition assumed
  service:
    name: nginx
    state: restarted
  when: ansible_os_family == "Debian"
6. Loops
- name: Install packages
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - nginx
    - git
    - curl
7. Handlers
tasks:
  - name: Install nginx
    apt:
      name: nginx
      state: present
    notify: Restart nginx
handlers:
  - name: Restart nginx
    service:
      name: nginx
      state: restarted
- debug:
    msg: "Playbook finished"   # message assumed
Check syntax:
ansible-playbook playbook.yml --syntax-check
Dry run:
ansible-playbook playbook.yml --check
Create a role:
ansible-galaxy init my_role
roles:
- my_role
Basic Concepts
Commands
Example Recipe
package 'nginx' do
  action :install
end
service 'nginx' do
  action [:enable, :start]
end
file '/var/www/html/index.html' do
  content '<h1>Hello from Chef</h1>'
end
Basic Concepts
Commands
Example Manifest
puppet
class nginx {
  package { 'nginx': ensure => installed }
  service { 'nginx': ensure => running, enable => true }
  file { '/var/www/html/index.html': content => '<h1>Hello from Puppet</h1>' }
}
include nginx
Basic Concepts
nginx:
pkg.installed: []
service.running:
- enable: true
/var/www/html/index.html:
file.managed:
- source: salt://webserver/index.html
- mode: 644
yaml
scrape_configs:
- job_name: 'node'
static_configs:
- targets: ['localhost:9100']
- job_name: 'kubernetes'
static_configs:
- targets: ['kube-state-metrics:8080']
route:
receiver: 'slack'
receivers:
- name: 'slack'
slack_configs:
- channel: '#alerts'
send_resolved: true
api_url: 'https://hooks.slack.com/services/your_webhook_url'
(1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100 # Memory usage
Index Management
2. Logstash Commands
yaml
input {
  file {
    path => "/var/log/syslog"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }   # pattern assumed
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
3. Kibana Commands
Kibana CLI Commands
● Open /etc/logstash/logstash.conf
● Ensure the output points to Elasticsearch:
yaml
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
● Restart Logstash:
● Go to Kibana → Discover
● Select logstash-* Data View
● Apply Filters & View Logs
tail -f /var/log/logstash/logstash-plain.log
journalctl -u logstash -f
tail -f /var/log/kibana/kibana.log
journalctl -u kibana -f
Datadog
logs_enabled: true
# Metric Queries
-H "Content-Type: application/json" \
api_key: "<YOUR_API_KEY>"
site: "datadoghq.com"
logs_enabled: true
apm_config:
enabled: true
logs_enabled: true
kubectl apply -f https://raw.githubusercontent.com/DataDog/datadog-agent/main/Dockerfiles/manifests/agent.yaml
docker run -d --name datadog-agent \
-e DD_API_KEY=<YOUR_API_KEY> \
-e DD_LOGS_ENABLED=true \
-e DD_CONTAINER_EXCLUDE="name:datadog-agent" \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /proc/:/host/proc/:ro \
-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
datadog/agent
-H "DD-API-KEY: <API_KEY>" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{
"locations": ["aws:us-east-1"],
"tags": ["env:prod"],
"type": "api"
}'
-H "Content-Type: application/json" \
-H "DD-API-KEY: <API_KEY>" \
--data '{
"widgets": [
"definition": {
"type": "timeseries",
"requests": [
{ "q": "avg:system.cpu.user{*}" }
}'
# Create an alert
-H "Content-Type: application/json" \
-H "DD-API-KEY: <API_KEY>" \
--data '{
"tags": ["env:prod"]
}'
-H "Content-Type: application/json" \
-H "DD-API-KEY: <API_KEY>" \
--data '{
"data": {
"type": "incidents",
"attributes": {
"customer_impact_scope": "global",
"customer_impact_duration": 30,
"severity": "critical",
"state": "active",
"commander": "DevOps Team"
}'
New Relic
newrelic install
curl -X POST "https://api.newrelic.com/v2/applications/<APP_ID>/deployments.json" \
-H "X-Api-Key:<API_KEY>" -H "Content-Type: application/json" \
-d '{ "deployment": { "revision": "1.0.1", "description": "New deployment", "user": "DevOps Team" } }' # Record a deployment
license_key: "<YOUR_LICENSE_KEY>"
log_file: /var/log/newrelic-infra.log
custom_attributes:
environment: production
Edit /etc/newrelic-infra.yml:
logs:
enabled: true
include:
- /var/log/syslog
- /var/log/nginx/access.log
# 1. Install SonarQube
# 5. Configure SonarScanner
# Example: sonar.host.url=http://localhost:9000
sonar-scanner -Dsonar.projectKey=<project_key> -Dsonar.sources=<source_directory> -Dsonar.host.url=http://localhost:9000
<build>
<plugins>
<plugin>
<groupId>org.sonarsource.scanner.maven</groupId>
<artifactId>sonar-maven-plugin</artifactId>
<version>3.9.0.1100</version>
</plugin>
</plugins>
</build>
# Install SonarQube Scanner Plugin in Jenkins (Manage Jenkins > Manage Plugins
> Available > SonarQube Scanner for Jenkins)
pipeline {
    agent any
    environment {
        SONAR_TOKEN = credentials('sonar-token') // assumed credential id
    }
    stages {
        stage('Checkout') {
            steps {
                git 'https://github.com/your-repo.git'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                script {
                    withSonarQubeEnv('SonarQubeServer') {
                        sh 'mvn sonar:sonar'
                    }
                }
            }
        }
    }
}
# 10. GitLab CI/CD Integration for SonarQube
stages:
- code_analysis
sonarqube_scan:
stage: code_analysis
image: maven:3.8.7-openjdk-17
script:
  - mvn sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN
variables:
  SONAR_HOST_URL: "http://sonarqube-server:9000"
  SONAR_TOKEN: "your-sonarqube-token"
on:
push:
branches:
- main
jobs:
sonar_scan:
runs-on: ubuntu-latest
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-java@v3
    with:
      distribution: 'temurin'
      java-version: '17'
  - name: Run SonarQube analysis   # step assumed
    run: mvn sonar:sonar
    env:
      SONAR_HOST_URL: "http://sonarqube-server:9000"
      SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
apiVersion: batch/v1
kind: Job
metadata:
name: sonarqube-analysis
annotations:
argocd.argoproj.io/hook: PreSync
spec:
template:
spec:
containers:
- name: sonar-scanner
image: maven:3.8.7-openjdk-17
command: ["mvn", "sonar:sonar"]   # assumed analysis command
env:
- name: SONAR_HOST_URL
value: "http://sonarqube-server:9000"
- name: SONAR_TOKEN
valueFrom:
secretKeyRef:
name: sonar-secret
key: sonar-token
restartPolicy: Never
2. Trivy (Container Vulnerability Scanning)
Basic Commands
Jenkins Integration
groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git 'https://github.com/your-repo.git'
            }
        }
        stage('Trivy Scan') {
            steps {
                sh 'trivy image my-app:latest'
            }
        }
    }
}
yaml
stages:
- security_scan
trivy_scan:
stage: security_scan
image: aquasec/trivy
script:
  - trivy image --format json -o trivy_report.json registry.gitlab.com/your-namespace/your-repo:latest
artifacts:
  paths:
    - trivy_report.json
on:
push:
branches:
- main
jobs:
trivy_scan:
runs-on: ubuntu-latest
steps:
  - uses: actions/checkout@v4
  - name: Run Trivy scan   # step assumed
    run: |
      trivy image --format json -o trivy_report.json my-app:latest
  - uses: actions/upload-artifact@v4
    with:
      name: trivy-report
      path: trivy_report.json
yaml
apiVersion: batch/v1
kind: Job
metadata:
name: trivy-scan
annotations:
argocd.argoproj.io/hook: PreSync
spec:
template:
spec:
containers:
- name: trivy-scanner
  image: aquasec/trivy
  args: ["image", "my-app:latest"]   # assumed target image
restartPolicy: Never
Kubernetes Integration (Admission Controller)
yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: trivy-webhook
webhooks:
- name: trivy-scan.k8s
rules:
- apiGroups: [""]
apiVersions: ["v1"]
operations: ["CREATE"]
resources: ["pods"]
clientConfig:
service:
name: trivy-webhook-service
namespace: security
path: /validate
admissionReviewVersions: ["v1"]
sideEffects: None
Basic Commands
Jenkins Integration
groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git 'https://github.com/your-repo.git'
            }
        }
        stage('Dependency Check') {
            steps {
                sh 'mvn org.owasp:dependency-check-maven:check'
            }
        }
    }
}
yaml
stages:
- security_scan
owasp_dependency_check:
stage: security_scan
image: maven:3.8.7-openjdk-17
script:
- mvn org.owasp:dependency-check-maven:check
artifacts:
paths:
- target/dependency-check-report.html
GitHub Actions Integration
yaml
on:
push:
branches:
- main
jobs:
owasp_dependency_check:
runs-on: ubuntu-latest
steps:
  - uses: actions/checkout@v4
  - name: Run OWASP Dependency-Check   # step assumed
    run: mvn org.owasp:dependency-check-maven:check
  - uses: actions/upload-artifact@v4
    with:
      name: owasp-report
      path: target/dependency-check-report.html
yaml
apiVersion: batch/v1
kind: Job
metadata:
name: owasp-dependency-check
annotations:
argocd.argoproj.io/hook: PreSync
spec:
template:
spec:
containers:
- name: owasp-check
  image: maven:3.8.7-openjdk-17
  command: ["mvn", "org.owasp:dependency-check-maven:check"]   # assumed
restartPolicy: Never
9. Networking, Ports & Load Balancing
Networking Basics
● IP Addressing
○ Private IPs: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
○ Public IPs: Assigned by ISPs
○ CIDR Notation: 192.168.1.0/24 (Subnet Mask: 255.255.255.0)
● Ports
○ HTTP: 80
○ HTTPS: 443
○ SSH: 22
○ DNS: 53
○ FTP: 21
○ MySQL: 3306
○ PostgreSQL: 5432
● Protocols
○ TCP (Reliable, connection-based)
○ UDP (Fast, connectionless)
○ ICMP (Used for ping)
○ HTTP(S), FTP, SSH, DNS
2. Network Commands
Linux Networking
Show network interfaces
ip a # Show IP addresses
Check connectivity
ping google.com
Trace route
traceroute google.com
DNS lookup
nslookup google.com
dig google.com
Test ports
telnet google.com 80
Allow SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
Block an IP
sudo iptables -A INPUT -s 192.168.1.100 -j DROP
Netcat (nc)
Start a simple TCP listener
nc -lvp 8080
3. Kubernetes Networking
List services and their endpoints
kubectl get svc -o wide
4. Docker Networking
List networks
docker network ls
Inspect a network
docker network inspect bridge
AWS
List VPCs
aws ec2 describe-vpcs
List subnets
aws ec2 describe-subnets
Azure
List VNets
az network vnet list -o table
List NSGs
az network nsg list -o table
● Definition: A logically isolated section of the AWS Cloud where you can
launch AWS resources in a virtual network.
● CIDR Block: Define the IP range (e.g., 10.0.0.0/16).
● Components:
○ Subnets: Divide your VPC into public (with internet access) and
private (without direct internet access) segments.
○ Route Tables: Control the traffic routing for subnets.
○ Internet Gateway (IGW): Allows communication between instances
in your VPC and the internet.
○ NAT Gateway/Instance: Enables outbound internet access for
instances in private subnets.
○ VPC Peering: Connects multiple VPCs.
○ VPN Connections & Direct Connect: Securely link your
on-premises network with your VPC.
○ VPC Endpoints: Privately connect your VPC to supported AWS
services.
● Definition: Virtual firewalls that control inbound and outbound traffic for
your EC2 instances.
● Key Characteristics:
○ Stateful: Return traffic is automatically allowed regardless of
inbound/outbound rules.
○ Default Behavior: All outbound traffic is allowed; inbound is denied
until explicitly allowed.
● Rule Components:
○ Protocol: (TCP, UDP, ICMP, etc.)
○ Port Range: Specific ports or a range (e.g., port 80 for HTTP).
○ Source/Destination: IP addresses or CIDR blocks (e.g., 0.0.0.0/0 for
all).
● Usage:
○ Assign one or more security groups to an instance.
○ Modify rules anytime without stopping or restarting the instance.
VPC Operations
Create a VPC:
aws ec2 create-vpc --cidr-block 10.0.0.0/16
Create a Subnet:
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24
Ports
🔹 Databases
● PostgreSQL - 5432 (Relational database)
● MySQL/MariaDB - 3306 (Relational database)
● MongoDB - 27017 (NoSQL database)
● Redis - 6379 (In-memory database)
● Cassandra - 9042 (NoSQL distributed database)
● CockroachDB - 26257 (Distributed SQL database)
● Neo4j - 7474 (Graph database UI), 7687 (Bolt protocol)
● InfluxDB - 8086 (Time-series database)
● Couchbase - 8091 (Web UI), 11210 (Data access)
A Reverse Proxy sits in front of backend servers and helps:
✅ Improve security by hiding backend servers.
✅ Handle traffic and reduce load on backend servers.
✅ Improve performance with caching and compression.
Load Balancing distributes traffic across multiple servers to:
✅ Prevent overloading of a single server.
✅ Ensure high availability (if one server fails, others handle traffic).
✅ Improve speed and performance.
upstream backend_servers {
server server1.example.com; # Backend Server 1
server server2.example.com; # Backend Server 2
}
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://backend_servers; # Send traffic to multiple backend servers
}
}
<VirtualHost *:80>
ServerName example.com
# Forward all requests to an assumed single backend
ProxyPass / http://backend1.example.com:8080/
ProxyPassReverse / http://backend1.example.com:8080/
</VirtualHost>
🔹 Install HAProxy
apt install haproxy # Ubuntu/Debian
yum install haproxy # RHEL/CentOS
frontend http_front
bind *:80
default_backend backend_servers
backend backend_servers
balance roundrobin # Distribute traffic evenly
server server1 server1.example.com:80 check # First server
server server2 server2.example.com:80 check # Second server
🔹 Restart HAProxy
systemctl restart haproxy
systemctl enable haproxy # Enable on startup
from flask import Flask
app = Flask(__name__)
@app.route('/')
def home():
    return "Hello from Server 1"  # response text assumed
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
from flask import Flask
app = Flask(__name__)
@app.route('/')
def home():
    return "Hello from Server 2"  # response text assumed
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
FROM python:3.9
WORKDIR /app
COPY . .
RUN pip install flask
CMD ["python", "app.py"]   # app filename assumed
nginx
events {}
http {
  upstream backend_servers {
    server server1:5000;
    server server2:5000;
  }
  server {
    listen 80;
    server_name localhost;
    location / {
      proxy_pass http://backend_servers;
    }
  }
}
version: '3'
services:
server1:
build: .
container_name: server1
ports:
- "5001:5000"
server2:
build: .
container_name: server2
ports:
- "5002:5000"
nginx:
image: nginx:latest
container_name: nginx_proxy
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
depends_on:
- server1
- server2
Step 5 Run the Containers
docker-compose up --build
curl http://localhost
<VirtualHost *:80>
ServerName localhost
<Proxy "balancer://mycluster">
BalancerMember "http://server1:5000"
BalancerMember "http://server2:5000"
</Proxy>
ProxyPass "/" "balancer://mycluster/"
ProxyPassReverse "/" "balancer://mycluster/"
</VirtualHost>
version: '3'
services:
server1:
build: .
container_name: server1
ports:
- "5001:5000"
server2:
build: .
container_name: server2
ports:
- "5002:5000"
apache:
image: httpd:latest
container_name: apache_proxy
ports:
- "80:80"
volumes:
- ./apache.conf:/usr/local/apache2/conf/httpd.conf
depends_on:
- server1
- server2
docker-compose up --build
frontend http_front
bind *:80
default_backend backend_servers
backend backend_servers
balance roundrobin
server server1 server1:5000 check
server server2 server2:5000 check
version: '3'
services:
server1:
build: .
container_name: server1
ports:
- "5001:5000"
server2:
build: .
container_name: server2
ports:
- "5002:5000"
haproxy:
image: haproxy:latest
container_name: haproxy_loadbalancer
ports:
- "80:80"
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
depends_on:
- server1
- server2
docker-compose up --build
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
Comparison Table
Nginx: Reverse Proxy (forwards requests to backend servers), Load Balancing
(distributes traffic across servers)
Apache: Reverse Proxy (similar to Nginx), Load Balancing (balances traffic using
a balancer)
Database Management
SHOW DATABASES; → List databases
CREATE DATABASE db_name; → Create a database
DROP DATABASE db_name; → Delete a database
USE db_name; → Select a database
User Management
CREATE USER 'devops'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON db_name.* TO 'devops'@'localhost';
2. NoSQL Databases
MongoDB
show dbs; → List databases
use mydb; → Select a database
db.createCollection("users"); → Create a collection
db.users.insertOne({name: "Alice"}); → Insert data
mongodump --out /backup/ → Backup
Redis
SET key "value"; → Store a key
GET key; → Retrieve value
DEL key; → Delete key
Cassandra (CQL)
CREATE KEYSPACE mykeyspace WITH replication = {'class': 'SimpleStrategy',
'replication_factor': 1};
CREATE TABLE users (id UUID PRIMARY KEY, name TEXT);
3. Database Automation
resource "aws_db_instance" "devops_db" {
identifier = "devops-db"
engine = "mysql"
instance_class = "db.t3.micro"
allocated_storage = 20
}
yaml
- hosts: db_servers
  become: yes
  tasks:
    - name: Install MySQL server   # task assumed
      apt:
        name: mysql-server
        state: present
groovy
pipeline {
    agent any
    stages {
        stage('Backup') {
            steps { sh 'mysqldump -u root my_database > backup.sql' } // command assumed
        }
        stage('Restore') {
            steps { sh 'mysql -u root my_database < backup.sql' } // command assumed
        }
    }
}
yaml
version: '3.8'
services:
mongo:
image: mongo
container_name: mongodb
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: DevOps@123
ports:
- "27017:27017"
✔ Block Storage – Used for databases, VMs, containers (e.g., EBS, Cinder)
✔ File Storage – Used for shared access & persistence (e.g., NFS, EFS)
✔ Object Storage – Used for backups, logs, and media (e.g., S3, MinIO)
# AWS S3
# Linux Backup
# AWS Backup
YAML Configurations:
Persistent Volume (PV)
yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data"
Persistent Volume Claim (PVC)
yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
yaml
apiVersion: v1
kind: Pod
metadata:
name: storage-pod
spec:
containers:
- name: app
image: nginx
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: storage-volume
volumes:
- name: storage-volume
persistentVolumeClaim:
claimName: my-pvc
Terraform Configurations:
AWS S3 Bucket
provider "aws" {
region = "us-east-1"
bucket = "devops-backup-bucket"
acl = "private"
output "bucket_name" {
value = aws_s3_bucket.devops_bucket.id
}
provider "azurerm" {
features {}
name = "devopsstorageacc"
resource_group_name = "devops-rg"
account_tier = "Standard"
account_replication_type = "LRS"
# Helm Basics
# Uninstalling an Application
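# Core Helm workflow (repo, chart, and release names below are illustrative)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/nginx
helm list
# Uninstall a release
helm uninstall my-release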