CISO Toolkit 02

The document outlines various Linux logging tools and commands, including log files in /var/log/, logrotate for log management, and the Linux Audit Framework (auditctl and ausearch) for security auditing. It emphasizes the importance of these tools for monitoring system activity, detecting suspicious behavior, and ensuring compliance with regulations. Additionally, it provides terminal usage syntax, examples, and strategic contexts for effective log management and security practices.

🔧 Tool/Command Name: /var/log/* analysis (auth.log, secure, messages, syslog)

🎯 What it is: Linux systems generate a variety of log files in the /var/log/ directory, each serving a
specific purpose. These plain-text files record events from the operating system, applications, and
services, providing a historical record of system activity, errors, and security-related events. Key files
include:

auth.log (Debian/Ubuntu) or secure (RHEL/CentOS): Records authentication attempts, including successful and failed logins, sudo usage, and other security-related events.

messages (RHEL/CentOS) or syslog (Debian/Ubuntu): Contains general system activity logs, including boot messages, kernel events, and messages from various system services.

💻 Terminal Usage (Syntax):


View the end of a log file (real-time): tail -f /var/log/auth.log

View the entire log file: cat /var/log/syslog

Search for specific patterns: grep 'Failed password' /var/log/auth.log

Combine with awk or cut for parsing: grep 'Accepted password' /var/log/auth.log | awk '{print $11}' (to extract the source IP address)

📤 Output (Labeled Example):
```
// Example from /var/log/auth.log
Jul 25 10:30:05 my-server sshd[1234]: Accepted password for admin from 192.168.1.100 port 54321 ssh2
Jul 25 10:31:00 my-server sshd[1235]: Failed password for invalid user guest from 203.0.113.5 port 22 ssh2

// Example from /var/log/syslog
Jul 25 10:35:15 my-server systemd[1]: Started Session 123 of user admin.
Jul 25 10:40:20 my-server kernel: usb 1-1: new high-speed USB device number 3 using xhci_hcd
```

🎯 What to Extract:
Authentication Failures: Repeated failed login attempts, especially from unknown IPs,
indicating brute-force attacks.

Successful Logins: Unexpected logins, logins from unusual locations, or logins by inactive
accounts.

Privilege Escalation: sudo attempts, especially failed ones or those by unauthorized users.

System Errors: Critical errors that might indicate system instability or compromise.

Evidence of Scans: Network scanning tools often trigger numerous connection attempts to
various ports, which can be logged by services like sshd or apache .

📍 Where It’s Located: These log files are typically found in the /var/log/ directory. The exact
filenames may vary slightly between Linux distributions.

🤖 Why It Matters (CISO Lens):


First Line of Defense: These logs are often the first place to detect suspicious activity, providing
immediate visibility into potential security incidents.
Forensic Goldmine: In the aftermath of a breach, these logs are invaluable for reconstructing
the attack timeline, identifying compromised accounts, and understanding the attacker's
methods.

Compliance and Audit Trails: Many regulatory frameworks require detailed logging of
authentication and system events, making these logs essential for compliance.

🧠 Strategic Usage Contexts:


Proactive Monitoring: Implement automated scripts to regularly parse these logs for
anomalies and alert security teams.

Incident Response: During an incident, rapidly analyze these logs to identify the initial
compromise vector and contain the threat.

Threat Intelligence: Use insights from failed attacks (e.g., attacker IP addresses, usernames) to
update firewalls, intrusion detection systems, and threat intelligence feeds.

Security Awareness Training: Use real-world examples from these logs to educate employees
on common attack techniques and the importance of strong passwords.

📘 Deep Teaching Section: While journalctl provides a modern, structured approach to logging,
the traditional /var/log/ files remain critical, especially in environments where systemd is not fully
adopted or for historical analysis. The ability to directly grep and awk these plain-text files makes
them highly versatile for quick, on-the-fly analysis. Understanding the common patterns of malicious
activity within these logs—such as repeated failed SSH attempts from a single source IP, or a sudden
surge of connection attempts to unusual ports—is fundamental for any blue team operation. These
logs are the raw data that feeds into more sophisticated SIEM solutions, and a CISO must ensure that
proper log retention and analysis policies are in place for these critical data sources.
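
As a minimal illustration of the proactive-monitoring idea above, the following sketch counts failed SSH logins per source IP and flags sources above a chosen threshold. The log path, field handling, and threshold are assumptions (a Debian-style auth.log and an arbitrary cutoff of 10); adjust for your distribution and environment.

```bash
#!/bin/bash
# Sketch: flag source IPs with excessive failed SSH logins.
LOG="/var/log/auth.log"   # use /var/log/secure on RHEL/CentOS
THRESHOLD=10              # illustrative cutoff; tune to your environment

# Extract the field after "from", so both "invalid user" and normal failures parse correctly.
grep 'Failed password' "$LOG" \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1)}' \
  | sort | uniq -c | sort -rn \
  | awk -v t="$THRESHOLD" '$1 >= t {printf "ALERT: %s failed logins from %s\n", $1, $2}'
```

A script like this can run from cron and feed its output to a mail alias or SIEM syslog input; the value is in establishing a repeatable baseline check, not in the specific threshold.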

🔧 Tool/Command Name: logrotate

🎯 What it is: logrotate is a utility designed to simplify the administration of log files on systems
that generate a large number of log files. It allows for automatic rotation, compression, removal, and
mailing of log files. This prevents log files from consuming excessive disk space and makes them
easier to manage and analyze.

💻 Terminal Usage (Syntax):


Force log rotation for all configured logs: sudo logrotate -f /etc/logrotate.conf

Run logrotate in debug mode (dry run): sudo logrotate -d /etc/logrotate.conf

Check status of logrotate : cat /var/lib/logrotate/status

📤 Output (Labeled Example):
```
// Example of logrotate status file content
logrotate state -- version 2
"/var/log/syslog"
"/var/log/mail.log"
"/var/log/kern.log"
"/var/log/auth.log" 2025-7-25-10:0:0
"/var/log/daemon.log"
"/var/log/dpkg.log"
"/var/log/alternatives.log"
"/var/log/btmp" 2025-7-1-0:0:0
"/var/log/lastlog"
"/var/log/wtmp" 2025-7-1-0:0:0
```

🎯 What to Extract:
Rotation Schedule: Understand how frequently logs are rotated and how many old logs are
kept.

Compression Status: Verify that old logs are being compressed to save disk space.

Error Messages: Identify any issues with log rotation that might prevent logs from being
properly archived or deleted.

📍 Where It’s Located: The main configuration file for logrotate is typically
/etc/logrotate.conf . Individual application-specific configurations are often found in
/etc/logrotate.d/ . The status file is usually at /var/lib/logrotate/status .

🤖 Why It Matters (CISO Lens):


Disk Space Management: Prevents log files from filling up critical disk space, which could lead
to system instability or denial of service.

Log Retention Policy Enforcement: Ensures that logs are retained for the required period for
compliance and forensic purposes, and then securely deleted.

Performance: Smaller, rotated log files are easier and faster to process by SIEM systems and
forensic tools.

Security of Log Data: Proper configuration ensures that sensitive log data is not left
unmanaged for extended periods.

🧠 Strategic Usage Contexts:


Compliance: Configure logrotate to meet specific log retention requirements mandated by
regulations (e.g., GDPR, HIPAA, PCI DSS).

Capacity Planning: Integrate logrotate settings into overall capacity planning to ensure
sufficient storage for log data.

Incident Response Readiness: Ensure that historical log data is readily available and accessible
for post-incident analysis without overwhelming storage resources.

Automated Security Operations: Automate log management to reduce manual overhead and
ensure consistent application of log retention policies across the enterprise.

📘 Deep Teaching Section: logrotate is a seemingly mundane utility, but its proper configuration
is paramount for effective log management and, by extension, cybersecurity. Without logrotate ,
systems can quickly run out of disk space due to ever-growing log files, leading to system crashes or
denial of service. More critically from a CISO's perspective, logrotate ensures that log data is
available for the required retention period for compliance and forensic investigations. It also helps in
maintaining the integrity of the logging process by preventing logs from being truncated or lost due
to disk space issues. A well-configured logrotate setup is a foundational element of a robust
logging strategy, enabling efficient log analysis and ensuring that critical evidence is preserved.
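
To make the retention discussion concrete, here is a hedged example of a per-application drop-in configuration. The application name, log path, and 90-day retention are illustrative assumptions, not a prescribed policy; map the numbers to your own compliance requirements.

```bash
# /etc/logrotate.d/myapp -- illustrative retention policy (hypothetical app)
/var/log/myapp/*.log {
    daily                 # rotate once per day
    rotate 90             # keep 90 rotated logs (~90 days of history)
    compress              # gzip old logs to save disk space
    delaycompress         # keep the most recent rotation uncompressed for easy reading
    missingok             # do not error if the log file is absent
    notifempty            # skip rotation when the log is empty
    create 0640 root adm  # recreate the log with safe ownership and permissions
}
```

Testing such a file with `sudo logrotate -d /etc/logrotate.conf` (the dry-run mode shown above) before deploying it avoids surprises in production.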

🔧 Tool/Command Name: auditctl & ausearch (Linux Audit Framework)

🎯 What it is: The Linux Audit Framework provides a CAPP (Controlled Access Protection Profile)
compliant auditing system that can record security-relevant information based on pre-configured
rules. auditctl is the command-line utility used to control the kernel's audit system, allowing
administrators to add, delete, or list audit rules. ausearch is used to query the audit daemon logs for
specific events.

💻 Terminal Usage (Syntax):


List all current audit rules: sudo auditctl -l

Add a rule to audit all write access to /etc/passwd: sudo auditctl -w /etc/passwd -p wa -k passwd_changes

Add a rule to audit permission-denied open() calls by a specific user: sudo auditctl -a always,exit -F arch=b64 -S open -F exit=-EACCES -F auid=1000 -k failed_access

Search for events related to a specific key: sudo ausearch -k passwd_changes

Search for failed authentication attempts: sudo ausearch -m USER_AUTH -sv no

Search for specific syscalls: sudo ausearch -sc execve

📤 Output (Labeled Example):
```
// Example output from auditctl -l
-w /etc/passwd -p wa -k passwd_changes
-a always,exit -F arch=b64 -S open -F exit=-EACCES -F auid=1000 -k failed_access

// Example output from ausearch -k passwd_changes
type=PATH msg=audit(1678886400.000:123): item=0 name="/etc/passwd" inode=12345 dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_inh=0 cap_p=0 cap_pe=0 cap_eff=0
type=CWD msg=audit(1678886400.000:123): cwd="/home/user"
type=SYSCALL msg=audit(1678886400.000:123): arch=c000003e syscall=2 success=yes exit=3 a0=7ffc00000000 a1=1 a2=1b6 a3=0 items=1 ppid=1234 pid=5678 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm="vim" exe="/usr/bin/vim" subj=unconfined key="passwd_changes"
```

🎯 What to Extract:
File System Changes: Modifications to critical system files (e.g., /etc/passwd , /etc/shadow ,
configuration files).

Process Execution: Execution of suspicious binaries or scripts.

User Activity: Detailed records of user actions, including command execution, file access, and
privilege escalation attempts.

System Call Failures: Attempts to perform unauthorized operations (e.g., accessing restricted
files, failed system calls).

Evidence of Intrusion: Any activity that deviates from normal behavior, such as attempts to
disable auditing or unusual file access patterns.

📍 Where It’s Located: Audit logs are typically stored in /var/log/audit/audit.log . The auditd
daemon is responsible for collecting and writing these logs.

🤖 Why It Matters (CISO Lens):


Granular Visibility: The Linux Audit Framework provides extremely granular control over what
events are logged, allowing CISOs to tailor auditing to specific security requirements and
compliance mandates.

Tamper Detection: Audit logs are designed to be highly resistant to tampering, providing a
reliable record of system activity even in the event of a compromise.

Attribution: Detailed audit records can help attribute malicious activity to specific users or
processes, which is crucial for incident response and legal proceedings.

Proactive Threat Detection: By monitoring for specific system calls or file access patterns, the
audit system can provide early warnings of potential attacks.

🧠 Strategic Usage Contexts:


Critical System Monitoring: Implement robust audit rules on critical servers (e.g., domain
controllers, database servers) to monitor for unauthorized access or configuration changes.

Insider Threat Detection: Use audit rules to track sensitive file access or command execution
by privileged users.

Compliance and Regulatory Audits: Leverage the audit framework to generate detailed
reports for compliance with standards like PCI DSS, HIPAA, or NIST.

Incident Response: During an incident, use ausearch to quickly identify the scope of
compromise, the methods used by attackers, and any data exfiltration attempts.

Security Hardening: Use audit logs to identify misconfigurations or vulnerabilities that


attackers might exploit, and then implement appropriate hardening measures.

📘 Deep Teaching Section: The Linux Audit Framework operates at the kernel level, making it a
powerful and difficult-to-evade logging mechanism. Unlike application-level logs, audit records
capture events before they are processed by user-space applications, providing a more complete and
trustworthy chain of custody. The key to effective use of the audit system lies in defining precise audit
rules using auditctl . Overly broad rules can generate an overwhelming volume of logs, while overly
narrow rules might miss critical events. ausearch is the indispensable tool for navigating these rich
logs, allowing security teams to quickly pinpoint relevant events. For a CISO, understanding the
capabilities of the Linux Audit Framework is essential for establishing a robust security posture,
ensuring accountability, and providing irrefutable evidence in the face of a security incident.
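
One common workflow, sketched below under the assumption that the passwd_changes key from the examples above exists, is to pipe raw ausearch output into aureport for a quick summary; both utilities ship with the audit package.

```bash
# Summarize today's events for a given audit key (key name taken from the examples above).
sudo ausearch -k passwd_changes --start today --raw | sudo aureport -f -i

# Interpret numeric IDs (UIDs, syscall numbers) into names for easier human review.
sudo ausearch -k passwd_changes -i | less
```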

🔧 Tool/Command Name: auditd config basics

🎯 What it is: auditd is the userspace daemon that collects audit events generated by the Linux
kernel. It is the central component of the Linux Audit Framework, responsible for writing audit
records to disk (typically /var/log/audit/audit.log ) and providing real-time alerts. Configuring
auditd involves setting up rules that dictate which system calls, file accesses, and other events
should be monitored and logged.

💻 Terminal Usage (Syntax):


Main configuration file: /etc/audit/auditd.conf
Rule definition files: /etc/audit/rules.d/*.rules

Restart auditd after config changes: sudo systemctl restart auditd

Reload persistent rules without a restart: sudo augenrules --load (or sudo auditctl -R /etc/audit/rules.d/audit.rules to load a specific rules file)

Example auditd.conf snippets:
```ini
# Maximum number of rotated log files to keep
num_logs = 5

# Action to take when the disk is full (e.g., IGNORE, SYSLOG, SUSPEND, SINGLE, HALT)
disk_full_action = SUSPEND

# Action to take on a disk error
disk_error_action = SYSLOG

# Log format (RAW or ENRICHED)
log_format = RAW
```

Example rules in /etc/audit/rules.d/audit.rules:
```
# Audit all write access to /etc/shadow
-w /etc/shadow -p wa -k shadow_file_change

# Audit attempts to change user/group information
-a always,exit -F arch=b64 -S setuid -S setgid -S setresuid -S setresgid -k user_id_change

# Immutable mode: prevent rules from being loaded or deleted (requires reboot to clear)
-e 2
```

📤 Output (Labeled Example):


Configuration changes are applied internally by the auditd daemon. There is no direct output
from applying configuration files, but you can verify rules with auditctl -l and check auditd
status with sudo systemctl status auditd .

🎯 What to Extract:
Log Retention Policies: How long audit logs are kept and how many log files are rotated.

Disk Space Management: Actions taken when disk space is low, which can impact log integrity.

Rule Set: The specific audit rules defined, indicating what events are being monitored.

Immutable Mode: Whether the audit configuration is set to immutable, preventing runtime
changes and ensuring audit integrity.

📍 Where It’s Located: The primary configuration file is /etc/audit/auditd.conf . Audit rules are
typically defined in files under /etc/audit/rules.d/ (e.g., audit.rules or custom .rules files).

🤖 Why It Matters (CISO Lens):


Policy Enforcement: auditd configuration directly implements the organization's security and
compliance auditing policies.

Log Integrity: Proper configuration ensures that audit logs are collected reliably, protected
from tampering, and retained for the required duration.

Resilience: Defines how the system behaves under adverse conditions (e.g., full disk), ensuring
that critical audit data is not lost.

Security Posture: A well-configured auditd is a cornerstone of a strong security posture,


providing deep visibility into system-level events that other logging mechanisms might miss.

🧠 Strategic Usage Contexts:


Compliance Audits: Demonstrate adherence to regulatory requirements by showing
comprehensive audit logging of critical system activities.
Forensic Readiness: Ensure that auditd is configured to capture the necessary events for
detailed forensic investigations, including immutable mode for critical systems.

Threat Detection Engineering: Develop and deploy custom auditd rules to detect specific
attack techniques or suspicious behaviors identified through threat intelligence.

System Hardening: Integrate auditd configuration into standard server build processes to
ensure consistent and secure auditing across the infrastructure.

📘 Deep Teaching Section: The auditd daemon is the workhorse of the Linux Audit Framework.
While auditctl is used for runtime management of rules, the persistent configuration is managed
through auditd.conf and the rule files in rules.d . For a CISO, understanding these configuration
files is crucial because they dictate the scope and behavior of the entire audit system. Key
considerations include num_logs (how many rotated logs to keep), disk_full_action (what to do if
the disk fills up – HALT is most secure but can cause DoS, SUSPEND is a common compromise), and
log_format (RAW is typically preferred for SIEM integration as it contains all data). The -e 2 rule,
which sets the audit system to immutable mode, is particularly important for high-security
environments, as it prevents anyone, even root, from modifying or disabling audit rules without a
system reboot. This significantly enhances the integrity and trustworthiness of audit logs, making
them a more reliable source of evidence during a security incident.
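
A quick way to verify the points above in practice: auditctl -s reports the daemon's runtime status, and an enabled value of 2 indicates immutable mode. A minimal check might look like this sketch.

```bash
#!/bin/bash
# Sketch: confirm the audit system is running and whether immutable mode (-e 2) is active.
status=$(sudo auditctl -s)
echo "$status"
if echo "$status" | grep -q '^enabled 2'; then
    echo "Audit system is in immutable mode (rules locked until reboot)."
fi
```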

🛡 2. Process, User, and Session Monitoring

🔧 Tool/Command Name: ps aux, top, htop

🎯 What it is: These commands are used to display information about running processes on a Linux
system. They provide insights into what programs are executing, who owns them, how much system
resources they are consuming, and their current status.

ps aux : (Process Status) Provides a snapshot of all running processes.

top : (Table of Processes) Provides a dynamic, real-time view of running processes, sorted by
CPU usage by default.

htop : An interactive, enhanced version of top that offers a more user-friendly interface, better
process management capabilities, and visual meters for CPU, memory, and swap usage.

💻 Terminal Usage (Syntax):


Show all processes: ps aux

Show processes owned by a specific user: ps -u username

Show processes by name: ps aux | grep process_name

Start top : top (press q to quit)

Start htop : htop (press F10 or q to quit)

Sort top by memory usage: While in top , press M

Kill a process in htop : Select process, press F9


📤 Output (Labeled Example):
```
// Example from ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START TIME COMMAND
root         1  0.0  0.0 168000  9000 ?     Ss   Jul22 0:05 /sbin/init
admin     1234  0.1  0.5 123456 54321 pts/0 S+   10:00 0:01 /usr/bin/python3 /home/admin/malicious_script.py
www-data  5678  0.0  0.2  98765 12345 ?     S    Jul22 0:02 /usr/sbin/apache2 -k start

// Example from top (abbreviated)
Tasks: 150 total, 1 running, 149 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem :  7987.8 total,  7000.0 free,   500.0 used,   487.8 buff/cache
MiB Swap:  2048.0 total,  2048.0 free,     0.0 used.  7200.0 avail Mem

  PID USER  PR NI   VIRT   RES   SHR S %CPU %MEM TIME+   COMMAND
 1234 admin 20  0 123456 54321 10000 S  0.1  0.5 0:01.23 python3
```

🎯 What to Extract:
Suspicious Processes: Processes running under unexpected users (e.g., root for a web server),
with unusual names, or from unusual directories.

Resource Hogs: Processes consuming excessive CPU or memory, which could indicate malware
(e.g., cryptocurrency miners) or a denial-of-service attack.

Network Connections: (Indirectly) Processes that are listening on unusual ports or making
outbound connections to suspicious IP addresses (further investigation with netstat / ss is
needed).

Parent-Child Relationships: (With pstree or htop tree view) Identify processes spawned by
other suspicious processes.

📍 Where It’s Located: This information is dynamically retrieved from the kernel and /proc
filesystem. There are no static log files for these commands.

🤖 Why It Matters (CISO Lens):


Real-time Threat Detection: Provides immediate visibility into active threats, such as malware
execution, unauthorized processes, or resource exhaustion attacks.

Incident Response: Crucial for quickly identifying and terminating malicious processes during
an active incident.

System Health Monitoring: Helps in understanding system performance and identifying


potential issues before they escalate into security problems.

Insider Threat Detection: Can reveal unauthorized software being run by employees or
contractors.

🧠 Strategic Usage Contexts:


Active Incident Response: Use top / htop to quickly identify and kill malicious processes
during an ongoing attack.

Compromise Assessment: After a suspected breach, use ps aux to list all running processes
and cross-reference them against a baseline of known good processes.

Performance Monitoring: Regularly review top / htop output for unusual resource
consumption that might indicate a hidden cryptocurrency miner or a DDoS attack.
Automated Anomaly Detection: Integrate process monitoring into security scripts that alert on
new, unknown processes or processes consuming abnormal resources.

📘 Deep Teaching Section: Process monitoring is a fundamental aspect of defensive operations.


While ps aux provides a static snapshot, top and htop offer dynamic, interactive views that are
invaluable for real-time analysis. For a CISO, understanding the normal process baseline for critical
systems is paramount. Any deviation—a process running as an unexpected user, a process with an
unusual name, or one consuming excessive resources—should trigger an immediate investigation.
Attackers often try to hide their processes by naming them similarly to legitimate system processes
or by running them with minimal resource consumption. Therefore, a keen eye for anomalies,
combined with a deep understanding of the system's normal behavior, is essential. These tools are
the first line of defense in identifying active compromises and are often used in conjunction with
network monitoring tools to get a complete picture of suspicious activity.
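
The baseline idea described above can be approximated with a simple snapshot-and-diff sketch. The baseline path is an assumption, and a real deployment would normalize volatile fields (PIDs, timestamps) and handle per-host baselines; this only shows the pattern.

```bash
#!/bin/bash
# Sketch: compare running commands against a saved known-good baseline.
BASELINE="/var/lib/secops/process_baseline.txt"   # hypothetical location

# One-time step on a known-clean host: capture unique user/command pairs.
# ps -eo user,comm | sort -u > "$BASELINE"

# Routine check: report command lines not present in the baseline.
ps -eo user,comm | sort -u | comm -13 "$BASELINE" - > /tmp/new_processes.txt
if [ -s /tmp/new_processes.txt ]; then
    echo "Processes not in baseline (review each):"
    cat /tmp/new_processes.txt
fi
```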

🔧 Tool/Command Name: who, w, last, lastlog, uptime

🎯 What it is: These commands provide information about users currently logged into the system,
their activities, and historical login data, as well as system uptime.

who : Shows who is logged on.

w : Shows who is logged on and what they are doing.

last : Shows a listing of last logged in users.

lastlog : Reports the most recent login of all users or a specified user.

uptime : Tells how long the system has been running, and the number of users.

💻 Terminal Usage (Syntax):


List current logged-in users: who

List current logged-in users and their processes: w

Show all past logins: last

Show last login for all users: lastlog

Show last login for a specific user: lastlog -u username

Show system uptime: uptime

📤 Output (Labeled Example):
```
// Example from who
admin   pts/0  2025-07-25 10:00 (192.168.1.100)
analyst pts/1  2025-07-25 10:15 (10.0.0.5)

// Example from w
 10:55:00 up 3 days, 20:00, 2 users, load average: 0.00, 0.01, 0.05
USER    TTY   FROM          LOGIN@ IDLE  JCPU  PCPU WHAT
admin   pts/0 192.168.1.100 10:00  5min  0.01s 0.01s w
analyst pts/1 10.0.0.5      10:15  10s   0.02s 0.01s bash

// Example from last (abbreviated)
admin   pts/0       192.168.1.100    Fri Jul 25 10:00   still logged in
analyst pts/1       10.0.0.5         Fri Jul 25 10:15 - 10:50 (00:35)
reboot  system boot 5.15.0-105-gener Fri Jul 22 08:00 - 10:55 (3+02:55)

// Example from lastlog (abbreviated)
Username  Port   From           Latest
root      pts/0  192.168.1.100  Fri Jul 25 10:00:00 +0000 2025
admin     pts/0  192.168.1.100  Fri Jul 25 10:00:00 +0000 2025
guest                           Never logged in

// Example from uptime
 10:55:00 up 3 days, 20:00, 2 users, load average: 0.00, 0.01, 0.05
```

🎯 What to Extract:
Unauthorized Logins: Users logged in who shouldn't be, or logins from unexpected IP
addresses.

Unusual Activity: Users logged in at odd hours or performing suspicious commands (w output).

Account Compromise: Multiple failed login attempts for a user, followed by a successful login
from a new IP ( lastlog ).

System Restarts: Unexpected reboots ( last output with reboot entries) which could indicate
system instability or malicious activity.

Long Uptime: While generally good, extremely long uptimes without patching can indicate
unpatched vulnerabilities. Short uptimes can indicate instability or forced reboots.

📍 Where It’s Located: who and w read from /var/run/utmp (or /run/utmp ). last reads from
/var/log/wtmp . lastlog reads from /var/log/lastlog . uptime reads from /proc/uptime and
/var/run/utmp .

🤖 Why It Matters (CISO Lens):


User Behavior Analytics: These tools provide critical data for understanding user login
patterns and identifying anomalies that might indicate account compromise or insider threats.

Session Monitoring: w allows for real-time monitoring of what users are doing, which is vital
during an active incident or for compliance auditing.

Accountability: Historical login data ( last , lastlog ) provides an audit trail for user access,
crucial for forensic investigations and compliance.

System Stability: uptime helps in quickly assessing system availability and identifying
unexpected reboots.

🧠 Strategic Usage Contexts:


Incident Response: Quickly identify currently logged-in attackers or compromised accounts
using who and w . Use last and lastlog to trace the initial access vector.

Proactive Monitoring: Regularly review last and lastlog for unusual login patterns (e.g.,
logins from new geographic locations, logins outside business hours, or logins by dormant
accounts).

Compliance Auditing: Generate reports on user access and session activity to demonstrate
compliance with access control policies.

Threat Hunting: Look for signs of persistent access, such as a user account logging in
repeatedly from different IPs or at unusual times.
📘 Deep Teaching Section: Monitoring user and session activity is a cornerstone of defensive
security. While auth.log provides detailed authentication events, who , w , last , and lastlog
offer a higher-level view of user presence and historical access. The w command is particularly
powerful as it not only shows who is logged in but also what commands they are currently executing,
providing immediate context during an investigation. For a CISO, the ability to quickly ascertain who
is on a system, where they logged in from, and what they are doing is invaluable for both real-time
threat detection and post-incident analysis. These tools, when combined with other logging
mechanisms, paint a comprehensive picture of user activity, enabling the detection of unauthorized
access, account hijacking, and insider threats. Ensuring the integrity of the wtmp and lastlog files is
also critical, as attackers often attempt to clear these records to cover their tracks.
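
As one example of the proactive review described above, the sketch below pulls recent logins from wtmp and flags those outside an assumed 08:00-18:00 business window. The window, the 7-day lookback, and the util-linux last flags (--since and ISO time support) are assumptions; older last versions format output differently.

```bash
#!/bin/bash
# Sketch: flag logins outside an assumed 08:00-18:00 window (util-linux last).
last -s -7days --time-format iso 2>/dev/null \
  | awk '$1 != "reboot" && $1 != "wtmp" && NF > 3 {
      split($4, dt, "T"); h = substr(dt[2], 1, 2) + 0;
      if (h < 8 || h >= 18)
          print "Off-hours login:", $1, "from", $3, "at", $4
    }'
```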

🔧 Tool/Command Name: id, groups, whoami

🎯 What it is: These commands are fundamental for understanding user identity and privileges
within a Linux system. They are crucial for verifying user context, especially during security
investigations or when assessing potential privilege escalation.

id : Displays user and group information for the current user or a specified user.

groups : Shows the groups that a user is a member of.

whoami : Prints the effective username of the current user.

💻 Terminal Usage (Syntax):


Display current user's ID and groups: id

Display ID and groups for a specific user: id username

Show current user's groups: groups

Show groups for a specific user: groups username

Print effective username: whoami

📤 Output (Labeled Example):
```
// Example from id
uid=1000(admin) gid=1000(admin) groups=1000(admin),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),116(lpadmin),126(sambashare)

// Example from groups
admin adm cdrom sudo dip plugdev lpadmin sambashare

// Example from whoami
admin
```

🎯 What to Extract:
User ID (UID) and Group ID (GID): Verify the numerical identifiers for the user and their primary
group.

Group Memberships: Identify all groups a user belongs to, especially privileged groups like
sudo , adm , or root .

Effective User: Confirm the user context under which a command is being executed,
particularly after sudo or su .
Privilege Escalation Indicators: Look for users who are members of unexpected privileged
groups, or processes running under a whoami context that doesn't match the expected user.

📍 Where It’s Located: This information is dynamically retrieved from the system's user and group
databases (e.g., /etc/passwd , /etc/group , /etc/shadow ).

🤖 Why It Matters (CISO Lens):


Access Control Verification: Essential for verifying that users have only the necessary
permissions (least privilege principle).

Privilege Escalation Detection: Helps identify if an attacker has successfully gained higher
privileges by checking the effective user and group memberships.

Accountability: Confirms the identity of the user performing actions, which is critical for audit
trails and incident response.

Configuration Auditing: Used to audit user and group configurations to ensure they align with
security policies.

🧠 Strategic Usage Contexts:


Post-Compromise Analysis: After a system compromise, use id and groups to determine the
level of access an attacker has gained and which accounts might be compromised.

Privilege Audit: Regularly audit user group memberships, especially for critical systems, to
ensure no unauthorized users are part of privileged groups.

Script and Application Security: When reviewing custom scripts or applications, use whoami
to ensure they are running with the intended and minimal necessary privileges.

Insider Threat Investigation: If suspicious activity is detected, use these commands to confirm
the identity and privileges of the user involved.

📘 Deep Teaching Section: Understanding user identity and group memberships is foundational to
Linux security. The id command provides a comprehensive overview, including the real and
effective UIDs and GIDs, and all supplementary groups. The groups command is a simpler way to list
group memberships. whoami is particularly useful in scripts or when dealing with sudo or su to
confirm the actual user context. For a CISO, the principle of least privilege is paramount, and these
commands are the primary tools for verifying its implementation. Attackers frequently aim for
privilege escalation, often by exploiting vulnerabilities to gain membership in privileged groups (like
sudo ) or to run processes as root . Therefore, regularly auditing user group memberships and being
able to quickly check a user's effective privileges are critical defensive measures. Any unexpected
change in a user's id or groups output should be a red flag, triggering an immediate investigation
into potential unauthorized access or privilege escalation attempts.
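
A small sketch of the group-membership audit described above: enumerate members of privileged groups and compare them against an approved list. The group set and the approved-users file path are assumptions; substitute whatever allow-list mechanism your organization uses.

```bash
#!/bin/bash
# Sketch: report members of privileged groups who are not on an approved list.
APPROVED="/etc/security/approved_admins.txt"   # hypothetical allow-list, one username per line

for grp in sudo adm wheel root; do
    # Field 4 of getent output is the comma-separated member list.
    members=$(getent group "$grp" | cut -d: -f4 | tr ',' '\n')
    for user in $members; do
        grep -qxF "$user" "$APPROVED" 2>/dev/null \
            || echo "REVIEW: $user is in group '$grp' but not on the approved list"
    done
done
```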

🔧 Tool/Command Name: passwd -S, faillog, chage

🎯 What it is: These commands are used to inspect and manage user password and account aging
information. They are critical for auditing account security, identifying potential compromises, and
enforcing password policies.
passwd -S : Displays the status of a user's password, including whether it's locked, encrypted,
or if it has no password.

faillog : Displays the contents of the faillog records, which track failed login attempts for
users.

chage : Changes user password expiry information. It can be used to view or modify password
aging parameters.

💻 Terminal Usage (Syntax):


Show password status for a user: passwd -S username

Show failed login attempts for all users: faillog

Show failed login attempts for a specific user: faillog -u username

View password aging information for a user: chage -l username

Set password to expire in 90 days: chage -M 90 username

Disable password expiry: chage -M -1 username

📤 Output (Labeled Example):
```
// Example from passwd -S
admin PS 2025-07-01 0 99999 7 -1 (Password set, MD5 crypt.)
guest L  2025-07-01 0 99999 7 -1 (Password locked.)

// Example from faillog
Login   Failures Maximum Latest
admin   0        0       07/25/25 10:00:00 +0000
guest   5        0       07/25/25 10:05:00 +0000
root    0        0       07/25/25 10:10:00 +0000

// Example from chage -l
Minimum number of days between password change: 0
Maximum number of days between password change: 99999
Number of days of warning before password expires: 7
Password last changed: Jul 25, 2025
Password expires: Never
Password inactive: Never
Account expires: Never
```

🎯 What to Extract:
Password Status: Identify accounts with no password, locked accounts, or accounts with
expired passwords.

Failed Login Attempts: Detect brute-force attacks or attempts to guess passwords by observing
a high number of failures for a specific user.

Password Aging Policy: Verify that password expiry policies are being enforced and that critical
accounts have strong aging requirements.

Account Expiry: Check if accounts are configured to expire, especially for temporary or
contractor accounts.

📍 Where It’s Located: passwd -S reads from /etc/shadow . faillog reads from
/var/log/faillog . chage modifies entries in /etc/shadow .

🤖 Why It Matters (CISO Lens):


Account Security: Directly impacts the strength of user authentication and helps prevent
unauthorized access through weak or compromised passwords.
Compliance: Many regulatory frameworks mandate strict password policies and account
lockout mechanisms, which these tools help enforce and audit.

Early Warning System: A sudden increase in failed login attempts can be an early indicator of a
targeted attack or credential stuffing campaign.

Insider Threat Mitigation: Helps identify dormant accounts or accounts with lax password
policies that could be exploited by insiders.

🧠 Strategic Usage Contexts:


Regular Security Audits: Periodically audit password statuses and aging policies for all users,
especially privileged accounts, to ensure compliance and security best practices.

Incident Response: If an account compromise is suspected, use faillog to investigate the


history of failed login attempts and passwd -S to check the current password status.

Automated Account Management: Integrate chage into automated scripts for setting
password expiry policies for new users or temporary accounts.

Threat Hunting: Proactively search for accounts with a high number of failed login attempts
that might not have triggered an alert in the SIEM.

📘 Deep Teaching Section: Password and account aging management are critical components of an
organization's identity and access management (IAM) strategy. While strong passwords are a first line
of defense, their effectiveness diminishes over time, making password aging policies essential.
faillog provides a crucial historical record of failed login attempts, which can be a goldmine for
detecting brute-force attacks or credential stuffing. For a CISO, ensuring that these tools are regularly
used for auditing and that appropriate policies are enforced is paramount. An account with a high
number of failed logins, especially if followed by a successful login from an unusual IP, is a strong
indicator of compromise. Similarly, accounts with no password expiry or locked accounts that are still
active pose significant security risks. These commands, when used in conjunction with other logging
and monitoring tools, provide a comprehensive view of account security and help in proactive threat
detection and incident response.
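
Tying the three commands together, the following hedged sketch walks the local user database and prints password status and aging information for every account with a login shell. The UID cutoff of 1000 is a common convention for human accounts, not a universal rule; run as root, since passwd -S and chage -l on other users require it.

```bash
#!/bin/bash
# Sketch: audit password status and aging for human accounts (UID >= 1000 assumed).
# Run as root.
awk -F: '$3 >= 1000 && $7 !~ /(nologin|false)$/ {print $1}' /etc/passwd \
| while read -r user; do
    echo "== $user =="
    passwd -S "$user"   # L = locked, NP = no password, P/PS = usable password set
    chage -l "$user" | grep -E 'Password expires|Last password change'
done
```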

🔧 Tool/Command Name: netstat, ss, lsof -i

🎯 What it is: These commands are used to inspect network connections, routing tables, interface
statistics, and open files associated with network activity. They are essential for understanding
network communication on a system, identifying suspicious connections, and troubleshooting
network issues.

netstat : (Network Statistics) A versatile command-line tool for displaying network


connections, routing tables, interface statistics, masquerade connections, and multicast
memberships.

ss : (Socket Statistics) A newer, faster, and more efficient tool than netstat for displaying
socket statistics. It can show more information than netstat and is preferred on modern Linux
systems.
lsof -i : (List Open Files - Internet) Lists all open files and the processes that opened them,
specifically focusing on network connections (sockets).

💻 Terminal Usage (Syntax):


Show all listening ports and established connections (numeric): netstat -tulnp (TCP, UDP,
listening, numeric, programs)

Show all listening ports and established connections (numeric): ss -tulnp

Show all open network connections and the processes using them: lsof -i

Show connections to a specific port: netstat -anp | grep :80

Show connections to a specific IP: ss -tnp | grep 192.168.1.100

Show processes listening on a specific port: lsof -i :22

📤 Output (Labeled Example):
```
// Example from netstat -tulnp
Proto Recv-Q Send-Q Local Address      Foreign Address      State       PID/Program name
tcp   0      0      0.0.0.0:22         0.0.0.0:*            LISTEN      1234/sshd
tcp   0      0      127.0.0.1:631      0.0.0.0:*            LISTEN      5678/cupsd
tcp   0      0      192.168.1.50:22    192.168.1.100:54321  ESTABLISHED 1234/sshd
udp   0      0      0.0.0.0:68         0.0.0.0:*                        9012/dhclient

// Example from ss -tulnp
Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
tcp   LISTEN 0      128    0.0.0.0:22          0.0.0.0:*
tcp   LISTEN 0      128    127.0.0.1:631       0.0.0.0:*
tcp   ESTAB  0      0      192.168.1.50:22     192.168.1.100:54321
udp   UNCONN 0      0      0.0.0.0:68          0.0.0.0:*

// Example from lsof -i
COMMAND  PID  USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
sshd     1234 root  3u  IPv4 12345  0t0      TCP  *:ssh (LISTEN)
sshd     1234 root  4u  IPv6 12346  0t0      TCP  *:ssh (LISTEN)
sshd     1234 root  5u  IPv4 67890  0t0      TCP  192.168.1.50:ssh->192.168.1.100:54321 (ESTABLISHED)
python3  9876 admin 3u  IPv4 54321  0t0      TCP  192.168.1.50:44444->evil.com:8080 (ESTABLISHED)
```

🎯 What to Extract:
Unexpected Listening Ports: Services listening on ports that should not be open, indicating
potential backdoors or misconfigurations.

Suspicious Established Connections: Outbound connections to unknown or malicious IP


addresses, or inbound connections from suspicious sources.

Unusual Protocols: Use of unexpected network protocols.

Associated Processes: Identify the process (PID and program name) responsible for a
suspicious network connection, which is crucial for further investigation.

Local vs. Remote Connections: Differentiate between internal and external connections to
understand the scope of network activity.

📍 Where It’s Located: This information is dynamically retrieved from the kernel and /proc
filesystem. There are no static log files for these commands.

🤖 Why It Matters (CISO Lens):


Network Visibility: Provides immediate insight into the network footprint of a system,
revealing open ports and active connections.
Malware Detection: Essential for detecting command-and-control (C2) communication, data
exfiltration, or unauthorized services running on compromised hosts.

Attack Surface Reduction: Helps identify unnecessary open ports that increase the attack
surface.

Incident Response: Crucial for quickly identifying active network connections associated with
an ongoing attack and blocking them.

🧠 Strategic Usage Contexts:


Compromise Assessment: After a suspected breach, use these commands to identify any
active C2 channels or data exfiltration attempts.

Regular Security Audits: Periodically audit network connections on critical servers to ensure
only authorized services are listening and communicating.

Firewall Rule Validation: Verify that firewall rules are effectively blocking unauthorized
inbound and outbound connections.

Threat Hunting: Look for unusual network connections (e.g., connections to known bad IPs,
connections on non-standard ports) that might indicate a hidden threat.

Application Security: Ensure that applications are only listening on expected interfaces and
ports, and are not making unauthorized outbound connections.

📘 Deep Teaching Section: Network monitoring at the host level is a critical component of a layered
security strategy. While perimeter firewalls and network intrusion detection systems provide macro-
level visibility, netstat , ss , and lsof -i offer granular insight into what's happening on individual
systems. Attackers often establish persistent backdoors or C2 channels, and these tools are
invaluable for uncovering them. ss is generally preferred over netstat on modern systems due to
its speed and efficiency, especially on systems with many connections. lsof -i is particularly
powerful because it directly links network connections to the processes that initiated them,
providing immediate context for investigation. For a CISO, understanding the normal network
communication patterns of critical systems is essential. Any deviation—an unexpected listening port,
an outbound connection to an unknown IP, or a process making connections it shouldn't—should
trigger an immediate security alert and investigation. These tools are often the first step in identifying
active network-based threats that have bypassed other security controls.
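
As a starting point for the host-level sweeps described above, this sketch lists established TCP connections and flags any remote port outside an assumed allow-list. The allowed-port set (22, 80, 443) is purely illustrative; a real check would compare against the host's documented communication matrix.

```bash
#!/bin/bash
# Sketch: flag established TCP connections whose remote port is not on an assumed allow-list.
ss -tn state established \
  | awk 'NR > 1 {
      # Peer address:port is the 4th column when a state filter is used.
      n = split($4, peer, ":"); p = peer[n] + 0;
      if (p != 22 && p != 80 && p != 443)
          print "REVIEW:", $0
    }'
```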

🔧 Tool/Command Name: grep, awk, cut (for parsing logs)

🎯 What it is: These are powerful command-line utilities used for text processing and data extraction,
indispensable for analyzing plain-text log files. They allow security professionals to filter, format, and
extract specific pieces of information from large volumes of log data.

grep : (Global Regular Expression Print) Searches for lines that match a specified pattern in one
or more files.

awk : A powerful pattern-scanning and processing language. It is excellent for extracting and
manipulating columns of data.
cut : Removes sections from each line of files. It can extract bytes, characters, or fields from
lines.

💻 Terminal Usage (Syntax):


Find all lines containing "Failed password" in auth.log: grep "Failed password"
/var/log/auth.log

Find lines and show 3 lines before and after: grep -C 3 "Failed password"
/var/log/auth.log

Count occurrences of a pattern: grep -c "Accepted password" /var/log/auth.log

Extract the source IP (11th field) from accepted SSH logins: grep "Accepted password" /var/log/auth.log | awk '{print $11}'

Extract username and IP from failed SSH attempts: grep "Failed password" /var/log/auth.log | awk '{print $9, $11}'

Extract the first 10 characters of each line: cat /var/log/syslog | cut -c 1-10

Extract fields separated by a delimiter (e.g., colon for /etc/passwd): cut -d: -f1,7
/etc/passwd (username and shell)

📤 Output (Labeled Example):
```
// Example from grep "Failed password" /var/log/auth.log
Jul 25 10:31:00 my-server sshd[1235]: Failed password for invalid user guest from 203.0.113.5 port 22 ssh2
Jul 25 10:31:05 my-server sshd[1236]: Failed password for admin from 203.0.113.10 port 22 ssh2

// Example from grep "Accepted password" /var/log/auth.log | awk '{print $11}'
192.168.1.100
10.0.0.5

// Example from cut -d: -f1,7 /etc/passwd
root:/bin/bash
daemon:/usr/sbin/nologin
admin:/bin/bash
```

🎯 What to Extract:
Specific Events: Filter for keywords like "failed", "accepted", "error", "attack", "malicious" to
pinpoint relevant security events.

Source IPs: Extract IP addresses associated with suspicious activities (e.g., failed logins,
unusual connections).

Usernames: Identify usernames involved in security incidents.

Timestamps: Extract timestamps for correlating events across different log files.

Command Arguments: For audit logs, extract specific command arguments used by processes.

📍 Where It’s Located: These commands operate on any plain-text file, most commonly log files
found in /var/log/ .

🤖 Why It Matters (CISO Lens):


Ad-hoc Analysis: Enables rapid, on-the-fly analysis of log data without requiring complex SIEM
queries.
Incident Triage: Quickly identify key indicators of compromise (IOCs) from raw logs during the
initial stages of an incident.

Data Preparation: Essential for pre-processing log data before ingesting it into SIEM systems or
other analytical tools.

Cost-Effective: Provides powerful log analysis capabilities using built-in Linux tools, reducing
reliance on expensive commercial solutions for basic tasks.

🧠 Strategic Usage Contexts:


Incident Response: During an active incident, use these tools to quickly search for attacker
activity, identify compromised accounts, and trace lateral movement.

Threat Hunting: Proactively search for subtle indicators of compromise that might not trigger
automated alerts.

Log Review: Periodically review critical log files for anomalies or suspicious patterns that could
indicate a security breach.

SIEM Integration: Use these tools to understand the structure of raw logs and develop parsers
for ingesting data into SIEM platforms.

Custom Alerting: Create simple scripts that use grep to monitor for specific patterns and
trigger alerts.

📘 Deep Teaching Section: grep , awk , and cut are the Swiss Army knives of Linux log analysis.
While modern SIEMs offer sophisticated parsing and querying capabilities, these fundamental
command-line tools remain indispensable for quick investigations, especially when direct access to
the system's logs is required. grep is excellent for filtering lines based on patterns, supporting
regular expressions for complex searches. awk excels at processing structured text, allowing you to
treat logs as a database and extract specific fields. cut is simpler, ideal for extracting fixed-width
columns or fields separated by a single delimiter. For a CISO, understanding how to leverage these
tools empowers their security team to perform rapid triage and deep-dive analysis, even in
environments without a fully deployed SIEM. They are also crucial for validating the output of
automated systems and for understanding the raw data before it is normalized and enriched by other
security tools. Mastery of these commands is a hallmark of a proficient blue team analyst.
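
A classic end-to-end example of these tools working together, assuming a Debian-style auth.log where the source IP is the fourth field from the end of each "Failed password" line:

```bash
# Top 10 source IPs by failed SSH login attempts.
grep 'Failed password' /var/log/auth.log \
  | awk '{print $(NF-3)}' \
  | sort | uniq -c | sort -rn | head -10

# Hourly distribution of failures, using cut on the syslog timestamp (cols 1-9 = "Jul 25 10").
grep 'Failed password' /var/log/auth.log | cut -c 1-9 | sort | uniq -c
```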

🔎 3. File Integrity & Sysmon Equivalents


🔧 Tool/Command Name: tripwire or aide (file integrity monitoring)

🎯 What it is: File Integrity Monitoring (FIM) tools like Tripwire and AIDE (Advanced Intrusion
Detection Environment) are essential for detecting unauthorized changes to critical system files and
directories. They work by creating a baseline database of cryptographic hashes (fingerprints) of files.
Periodically, they re-calculate the hashes and compare them against the baseline. Any discrepancies
indicate a change, which could be a sign of tampering, malware infection, or misconfiguration.

AIDE: A free, open-source FIM tool that is highly configurable and supports various hashing
algorithms.
Tripwire: A commercial FIM solution, historically one of the most well-known, offering
enterprise-grade features.

💻 Terminal Usage (Syntax):


AIDE: Initialize database (first run): sudo aide --init (this creates aide.db.new)

AIDE: Move the new database into place: sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db

AIDE: Check for changes: sudo aide --check

AIDE: Update database after legitimate changes: sudo aide --update (this creates aide.db.new; move it to aide.db)

AIDE: Example configuration snippet (/etc/aide/aide.conf):
```
# Define rules for different file types
NORMAL = R+p+i+n+u+g+s+b+acl+xattrs+selinux
DIR = p+i+n+u+g+acl+xattrs+selinux

# Rules for critical system files
/boot NORMAL
/etc NORMAL
/bin NORMAL
/sbin NORMAL
/lib NORMAL
/lib64 NORMAL
/usr/bin NORMAL
/usr/sbin NORMAL

# Ignore volatile locations
!/var/log
!/var/tmp
!/tmp
```

📤 Output (Labeled Example):
```
// Example from aide --check (showing a detected change)
AIDE found differences between database and filesystem.
Looks okay: 12345
Added: 0
Removed: 0
Changed: 1

Changed files:
f = /etc/passwd

Detailed information about changes:
File: /etc/passwd
  Perm:   0644, 0640
  Size:   1234, 1238
  Mtime:  2025-07-20 10:00:00, 2025-07-25 11:00:00
  Ctime:  2025-07-20 10:00:00, 2025-07-25 11:00:00
  MD5:    123abc..., 456def...
  SHA256: 789ghi..., 012jkl...
```

🎯 What to Extract:
Changed Files: Identify which files have been modified, added, or removed.

Type of Change: Determine if permissions, ownership, size, or content (hash) have changed.

Timestamps: Note the modification and change times to correlate with other events.

Hash Mismatches: The most critical indicator of content alteration.

📍 Where It’s Located: AIDE's database is typically stored at /var/lib/aide/aide.db. Configuration files are usually in /etc/aide/aide.conf.

🤖 Why It Matters (CISO Lens):


Tamper Detection: Provides a robust mechanism to detect unauthorized modifications to
critical system files, which is a common tactic for attackers to establish persistence or hide their
presence.

Malware Detection: Can detect the presence of new or modified malware binaries on the
system.

Configuration Drift: Helps identify unintended changes to system configurations that could
introduce vulnerabilities.

Compliance: Many regulatory standards (e.g., PCI DSS, HIPAA) mandate FIM for critical systems
to ensure data integrity and security.

Early Warning System: FIM can provide an early warning of a breach, often before other
security controls are triggered.

🧠 Strategic Usage Contexts:


Critical Asset Protection: Deploy FIM on all critical servers (e.g., web servers, database servers,
domain controllers) to monitor their integrity.

Automated Scanning: Schedule daily or hourly FIM checks via cron jobs and integrate alerts
into the SIEM.

Change Management: Integrate FIM into the change management process. Legitimate changes
should be documented and the FIM database updated accordingly.

Incident Response: If a system is suspected of compromise, an immediate FIM check can


quickly identify altered files, guiding forensic efforts.

Supply Chain Security: Verify the integrity of software installations and updates from third-
party vendors.

📘 Deep Teaching Section: File Integrity Monitoring is a cornerstone of a robust defensive security
strategy, acting as a critical control against advanced persistent threats (APTs) and insider threats.
Attackers, once they gain access, often modify system binaries, configuration files, or inject malicious
code to maintain persistence or elevate privileges. FIM tools create a cryptographic fingerprint of
the system's critical files, making it virtually impossible for an attacker to alter them without
detection. For a CISO, the key is not just deploying FIM, but also managing the alerts effectively. A
high volume of false positives (legitimate changes triggering alerts) can lead to alert fatigue.
Therefore, careful configuration of what to monitor and what to ignore (e.g., temporary files, log files
that are managed by logrotate ) is crucial. Integrating FIM alerts into a SIEM system allows for
correlation with other security events, providing a more comprehensive view of potential incidents.
The immutable mode of the Linux Audit Framework (discussed earlier) can further enhance the
security of the FIM database itself, making it harder for an attacker to disable or tamper with the FIM
system.
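
To illustrate the scheduled-scan pattern above, here is a hedged sketch of a nightly AIDE check that mails any findings. The schedule, script path, and recipient address are assumptions, and it presumes a working local mail command; aide --check exit codes vary slightly by version, so the sketch simply treats any non-zero exit as worth human review.

```bash
#!/bin/bash
# /usr/local/sbin/aide-check.sh (sketch)
# Schedule via cron, e.g. in /etc/cron.d/aide-check:
#   30 2 * * * root /usr/local/sbin/aide-check.sh
REPORT=$(mktemp)
if ! aide --check > "$REPORT" 2>&1; then
    # Non-zero exit: AIDE found differences (or hit an error) -- escalate for review.
    mail -s "AIDE integrity alert on $(hostname)" soc@example.com < "$REPORT"
fi
rm -f "$REPORT"
```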

🔧 Tool/Command Name: inotifywait, auditctl rules for monitoring /etc, /var, /home

🎯 What it is: These tools provide real-time or near real-time monitoring of file system events,
allowing security teams to detect changes to critical directories and files as they happen. This is a
more immediate form of file integrity monitoring compared to periodic FIM scans.

inotifywait : A command-line tool that waits for changes to files or directories using the Linux
inotify API. It can monitor for events like access, modification, attribute changes, creation,
deletion, and moving of files.

auditctl rules: As discussed previously, the Linux Audit Framework can be configured with
rules to monitor file system access and modifications at a very granular level, providing detailed
logs of who, what, and when changes occurred.

💻 Terminal Usage (Syntax):


Monitor /etc/passwd for any changes: inotifywait -m /etc/passwd

Monitor a directory for create, delete, and modify events: inotifywait -m -e create,delete,modify /var/www/html

Audit rule to monitor all write access to /etc: sudo auditctl -w /etc/ -p wa -k etc_changes

Audit rule to monitor all write access to /var/www: sudo auditctl -w /var/www/ -p wa -k web_root_changes

Audit rule to monitor all write access to user home directories: sudo auditctl -w /home/ -p wa -k home_dir_changes

📤 Output (Labeled Example):
```
// Example from inotifywait -m /etc/passwd
Setting up watches.
Watches established.
/etc/passwd MODIFY

// Example from inotifywait -m -e create,delete,modify /var/www/html
/var/www/html/ CREATE new_file.php
/var/www/html/ MODIFY index.html
/var/www/html/ DELETE old_file.bak

// Example from ausearch -k etc_changes (after a change to /etc/passwd)
type=SYSCALL msg=audit(1678886400.000:123): arch=c000003e syscall=2 success=yes exit=3 a0=7ffc00000000 a1=1 a2=1b6 a3=0 items=1 ppid=1234 pid=5678 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm="vim" exe="/usr/bin/vim" subj=unconfined key="etc_changes"
```
🎯 What to Extract:
Event Type: What kind of change occurred (create, modify, delete, access).

File/Directory Path: The exact location of the file or directory that was affected.

Timestamp: When the event occurred.

User/Process (from audit logs): Who made the change and which process was involved.

Context (from audit logs): Detailed information about the system call and execution
environment.

📍 Where It’s Located: inotifywait provides real-time output to stdout. Audit logs are stored in
/var/log/audit/audit.log .

🤖 Why It Matters (CISO Lens):


Real-time Detection: Provides immediate alerts on critical file system changes, allowing for
rapid response to potential compromises.

Attack Chain Interruption: Can detect and potentially interrupt an attack in progress by
alerting on suspicious file modifications (e.g., web shell uploads, configuration changes).

Data Integrity: Helps ensure the integrity of critical system and application files by immediately
flagging unauthorized alterations.

Forensic Detail: Audit rules provide rich, detailed information about file system events, crucial
for post-incident analysis.

🧠 Strategic Usage Contexts:


Web Server Protection: Monitor web root directories ( /var/www/html , /srv/www ) for
unauthorized file uploads (e.g., web shells) or modifications.

Configuration Management: Alert on any changes to critical configuration files in /etc that
are not part of a planned change management process.

User Home Directory Monitoring: Monitor /home directories for suspicious activity, especially
in multi-user environments, to detect data exfiltration or malware staging.

Ransomware Detection: While not a primary defense, rapid detection of mass file
modifications could be an early indicator of ransomware activity.

SIEM Integration: Integrate inotifywait output or auditd logs into a SIEM for centralized
monitoring and correlation with other security events.

📘 Deep Teaching Section: Real-time file system monitoring complements periodic FIM scans by
providing immediate alerts on changes. inotifywait is excellent for simple, script-based
monitoring, often used to trigger immediate actions (e.g., quarantining a file, alerting an analyst).
However, it requires a running process and doesn't provide the same level of detail or tamper
resistance as the Linux Audit Framework. Audit rules, on the other hand, operate at the kernel level
and log comprehensive details about file access, including the user, process, and system call
involved. For a CISO, the strategic use of these tools involves identifying the most critical files and
directories (e.g., /etc for system configuration, web roots for web applications, user home
directories for sensitive data) and implementing appropriate real-time monitoring. This proactive
approach allows for faster detection and response to attacks that involve modifying files, such as
installing backdoors, altering configurations, or deploying malware. It's a key component of a
defense-in-depth strategy, providing an additional layer of visibility beyond traditional log analysis.
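
The trigger-an-action pattern described above can be sketched in a few lines of shell. This is a minimal sketch, assuming inotify-tools is installed; the watched path, the fim-watch logger tag, and the web-shell heuristic are illustrative assumptions, not standard tooling:

```bash
#!/bin/bash
# Watch a web root and raise a syslog alert on suspicious changes.
# Assumes paths contain no spaces; adapt WATCH_DIR to your environment.
WATCH_DIR="/var/www/html"

inotifywait -m -r -e create,modify,delete \
    --format '%T %w%f %e' --timefmt '%FT%T' "$WATCH_DIR" |
while read -r timestamp file event; do
    logger -t fim-watch "$timestamp $event $file"          # record every event
    if [[ "$event" == *CREATE* && "$file" == *.php ]]; then
        logger -p auth.alert -t fim-watch "possible web shell upload: $file"
    fi
done
```

Because the alerts land in syslog, they flow into the same collection pipeline as the rest of the logs discussed in this document.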

🔧 Tool/Command Name: Mapping log artifacts to MITRE techniques


🎯 What it is: The MITRE ATT&CK framework is a globally accessible knowledge base of adversary
tactics and techniques based on real-world observations. Mapping log artifacts (events recorded in
system logs) to specific MITRE ATT&CK techniques helps security teams understand how an attacker
might be operating, prioritize defensive measures, and improve detection capabilities. It provides a
common language for describing adversary behavior.

💻 Terminal Usage (Syntax):


This is not a direct terminal command but a conceptual framework applied to log analysis. The
process involves:
1. Identify a log artifact: e.g., a failed SSH login, a new process creation, a file modification.

2. Analyze the context: Who, what, when, where, why (if possible).

3. Consult MITRE ATT&CK: Browse the framework (e.g., attack.mitre.org) to find techniques
that match the observed behavior.

4. Document the mapping: Record which log events correspond to which ATT&CK
techniques.

Example Mappings:

Log Artifact: Multiple sshd entries in /var/log/auth.log showing Failed password for invalid user from a single source IP.

MITRE ATT&CK Technique: T1110 - Brute Force (specifically, T1110.001 - Password Guessing).

Log Artifact: A new, unexpected executable found in /tmp and executed, as seen in auditd
logs ( SYSCALL comm="malware" exe="/tmp/malware" ).

MITRE ATT&CK Technique: T1059 - Command and Scripting Interpreter (e.g., T1059.004 - Unix
Shell), T1036 - Masquerading (if named deceptively), T1547 - Boot or Logon Autostart Execution
(if configured for persistence).

Log Artifact: dmesg output showing usb 1-1: new high-speed USB device followed by file
access from a removable device.

MITRE ATT&CK Technique: T1091 - Replication Through Removable Media.

📤 Output (Labeled Example):


The output is typically a documented mapping, often in a security operations center (SOC)
playbook, a SIEM correlation rule, or an incident response report.

🎯 What to Extract:
Adversary Tactics: The high-level goals of the attacker (e.g., Initial Access, Execution,
Persistence).
Specific Techniques: The detailed methods used by the attacker (e.g., Brute Force, Process
Injection, File Deletion).

Detection Opportunities: How specific log events can be used to detect these techniques.

Defensive Gaps: Areas where current logging or monitoring might be insufficient to detect
certain techniques.

📍 Where It’s Located: The MITRE ATT&CK framework is publicly available online. The application
of this mapping occurs within security analysis workflows, SIEM rule development, and incident
response playbooks.

🤖 Why It Matters (CISO Lens):


Strategic Defense Planning: Helps CISOs understand the full spectrum of adversary behaviors
and prioritize defensive investments based on the most relevant threats.

Improved Detection Capabilities: By explicitly mapping log sources to ATT&CK techniques, organizations can identify gaps in their detection coverage and develop more effective SIEM rules and alerts.

Enhanced Communication: Provides a common language for security teams, management, and external partners to discuss and understand adversary behavior.

Performance Measurement: Allows for measuring the effectiveness of security controls against
known adversary techniques.

Proactive Threat Hunting: Guides threat hunters in searching for specific adversary techniques
within their log data.

🧠 Strategic Usage Contexts:


Security Operations Center (SOC) Enhancement: Train SOC analysts to map observed events
to ATT&CK techniques, improving their understanding of adversary intent.

SIEM Rule Development: Design and refine SIEM correlation rules based on specific ATT&CK
techniques to improve alert fidelity and reduce false positives.

Red Team/Blue Team Exercises: Use ATT&CK as a common reference point for simulating
attacks (Red Team) and evaluating defensive capabilities (Blue Team).

Risk Assessment: Assess the organization's exposure to various ATT&CK techniques and
prioritize mitigation strategies.

Incident Response Playbooks: Develop incident response playbooks that are organized
around ATT&CK techniques, providing clear steps for detection, analysis, and containment.

📘 Deep Teaching Section: The MITRE ATT&CK framework has revolutionized how organizations
approach cybersecurity defense. It shifts the focus from simply detecting malware signatures to
understanding and detecting adversary behaviors. For a CISO, this means moving towards a more
proactive and intelligence-driven defense. By systematically mapping log artifacts to ATT&CK
techniques, an organization can gain a clear picture of its detection coverage. This process often
reveals that while an organization might be collecting a vast amount of log data, it may not be
effectively using that data to detect sophisticated attacks. For example, simply logging all SSH
attempts is good, but understanding that repeated failed attempts map to
T1110 - Brute Force allows for more targeted detection and response. This framework empowers CISOs to
build a more resilient and adaptable security program that can effectively counter evolving threats by
focusing on the adversary's methods rather than just their tools.
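
One simple way to make such mappings operational before they reach a SIEM is to tag matching log lines with their technique IDs, so ad-hoc review and correlation rules share one vocabulary. A minimal sketch reusing two of the mappings above (the pipe-delimited tagging format is an illustrative assumption):

```bash
#!/bin/bash
# Tag log lines with the ATT&CK technique IDs mapped above.

# T1110.001 - Brute Force: Password Guessing
grep 'Failed password' /var/log/auth.log | sed 's/^/T1110.001|/'

# T1091 - Replication Through Removable Media (USB insertion, kernel ring buffer)
dmesg | grep 'new high-speed USB device' | sed 's/^/T1091|/'
```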

📡 4. Blue Team Recon / SIEM Readiness


This section focuses on practical examples of log analysis for blue team reconnaissance and preparing
these logs for ingestion into Security Information and Event Management (SIEM) platforms like Splunk or
ELK (Elasticsearch, Logstash, Kibana).

🔧 Tool/Command Name: Sample logs from Nmap scan ( journalctl , /var/log/auth.log )

🎯 What it is: Network scanning tools like Nmap are frequently used by attackers for reconnaissance
to discover open ports, services, and operating systems on target systems. While Nmap itself doesn't
directly log its activity on the target, the target system's logs will often show evidence of the scan
attempts. Analyzing these logs helps defenders identify and respond to reconnaissance activities.

💻 Terminal Usage (Syntax):


Simulate an Nmap scan (from an attacker's perspective): nmap -sS -p 1-1000
<target_IP> (SYN scan of common ports)

Check journalctl for connection attempts: journalctl --since "5 minutes ago" | grep
-i "connection from"

Check auth.log for SSH/authentication attempts: grep -i "sshd" /var/log/auth.log | tail -n 20

📤 Output (Labeled Example):
```
// Example from journalctl showing Nmap activity (may vary based on services running)
Jul 25 11:00:01 my-server kernel: [UFW BLOCK] IN=eth0 OUT= MAC=... SRC=192.168.1.200 DST=192.168.1.50 LEN=44 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=TCP SPT=54321 DPT=23 WINDOW=1024 RES=0x00 SYN URGP=0
Jul 25 11:00:02 my-server kernel: [UFW BLOCK] IN=eth0 OUT= MAC=... SRC=192.168.1.200 DST=192.168.1.50 LEN=44 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=TCP SPT=54322 DPT=80 WINDOW=1024 RES=0x00 SYN URGP=0
Jul 25 11:00:03 my-server sshd[9876]: Connection closed by authenticating user admin 192.168.1.200 port 54323 [preauth]
Jul 25 11:00:04 my-server sshd[9877]: Did not receive identification string from 192.168.1.200 port 54324

// Example from /var/log/auth.log showing SSH attempts from Nmap (if SSH service is scanned)
Jul 25 11:00:03 my-server sshd[9876]: Connection closed by authenticating user admin 192.168.1.200 port 54323 [preauth]
Jul 25 11:00:04 my-server sshd[9877]: Did not receive identification string from 192.168.1.200 port 54324
```

🎯 What to Extract:
Source IP Address: The IP address from which the scan originated.
Destination IP/Port: The target system and the ports being scanned.

Protocol: TCP, UDP, etc.

Timestamps: When the scan occurred.

Repeated Attempts: A high volume of connection attempts to various ports from a single
source IP within a short timeframe is a strong indicator of a scan.

Service-Specific Errors: Messages like "Did not receive identification string" for SSH often
indicate automated scanning tools rather than legitimate user attempts.

📍 Where It’s Located: journalctl (for systemd journal) and /var/log/auth.log (for SSH and
authentication-related events). Other service-specific logs (e.g., web server access logs) might also
show evidence.

🤖 Why It Matters (CISO Lens):


Early Warning of Attack: Reconnaissance is often the first phase of an attack. Detecting it early
allows defenders to prepare and strengthen defenses before an actual exploit attempt.

Threat Intelligence: Identifies potential adversaries and their source IPs, which can be used to
update firewalls, blocklists, and threat intelligence feeds.

Vulnerability Assessment: Helps identify which services are exposed and how they respond to
scanning, guiding vulnerability management efforts.

Attack Surface Management: Provides insights into how the organization's external-facing
systems appear to an attacker.

🧠 Strategic Usage Contexts:


Perimeter Defense: Implement firewall rules to rate-limit or block IPs that exhibit scanning
behavior.

SIEM Correlation: Create SIEM rules to correlate multiple connection attempts from a single source to different ports as a single reconnaissance event.

Honeypots: Deploy honeypots to attract and log scanning activity, providing more detailed insights into attacker methods.

Threat Hunting: Proactively search logs for patterns indicative of scanning activity that might not trigger automated alerts.

📘 Deep Teaching Section: Detecting Nmap or other scanning tools is a critical early warning for any
blue team. While a single scan might be benign (e.g., from a security researcher or a legitimate
vulnerability scanner), repeated or aggressive scanning from unknown sources is a strong indicator of
malicious intent. The key is to look for patterns: multiple connection attempts to different ports from
the same source IP within a short timeframe. Firewall logs (like UFW in the journalctl example) are
excellent for this, as they often log blocked connection attempts. SSH logs ( auth.log ) will show
failed authentication attempts or connection attempts that don't complete the SSH handshake,
which are common byproducts of Nmap's service detection. For a CISO, understanding how to detect
and interpret these reconnaissance attempts is vital for proactive defense. It allows for early
intervention, such as blocking the source IP, increasing monitoring on the scanned systems, or
initiating a deeper investigation, potentially disrupting an attack before it escalates.
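
Pattern detection can start with a one-liner before any SIEM rule exists. The sketch below counts distinct destination ports probed per source IP over the last hour, assuming UFW-style [UFW BLOCK] kernel log lines like those shown above:

```bash
# Top talkers by number of distinct destination ports probed in the last hour.
journalctl -k --since "1 hour ago" | grep '\[UFW BLOCK\]' |
  sed -nE 's/.*SRC=([0-9.]+) .*DPT=([0-9]+).*/\1 \2/p' |
  sort -u |                                   # one line per (source IP, port) pair
  awk '{ ports[$1]++ } END { for (ip in ports) print ports[ip], ip }' |
  sort -nr | head
```

A source hitting dozens of distinct ports within minutes is almost certainly scanning and is a strong candidate for an automated block.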
🔧 Tool/Command Name: Failed SSH attempts ( grep 'Failed password' /var/log/auth.log )

🎯 What it is: Failed SSH login attempts are a common occurrence on internet-facing Linux systems,
often indicating brute-force attacks where adversaries try to guess usernames and passwords.
Monitoring these attempts is crucial for detecting active attacks, identifying compromised
credentials, and understanding the threat landscape.

💻 Terminal Usage (Syntax):


View all failed password attempts: grep 'Failed password' /var/log/auth.log

Count failed attempts per IP address: grep 'Failed password' /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -nr (caveat: $11 holds the IP only when the username is valid; lines containing "invalid user" shift the fields right by two — see the pattern-based sketch after this list)

Count failed attempts per username: grep 'Failed password' /var/log/auth.log | awk '{print $9}' | sort | uniq -c | sort -nr (same caveat: on invalid-user lines $9 is the word "invalid" rather than the username)

View failed attempts for a specific user: grep 'Failed password for invalid user admin'
/var/log/auth.log
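
Because those field positions shift, a pattern-based extraction is more robust in practice. A minimal sketch using sed capture groups, assuming the standard OpenSSH message format shown in the output below:

```bash
# Count failed attempts per source IP, tolerating the optional "invalid user" prefix.
grep 'Failed password' /var/log/auth.log |
  sed -nE 's/.*Failed password for (invalid user )?([^ ]+) from ([^ ]+) port.*/\3/p' |
  sort | uniq -c | sort -nr | head
```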

📤 Output (Labeled Example):
```
// Example from grep 'Failed password' /var/log/auth.log
Jul 25 11:05:01 my-server sshd[10001]: Failed password for invalid user support from 203.0.113.100 port 45678 ssh2
Jul 25 11:05:02 my-server sshd[10002]: Failed password for root from 203.0.113.101 port 54321 ssh2
Jul 25 11:05:03 my-server sshd[10003]: Failed password for admin from 203.0.113.100 port 45679 ssh2
Jul 25 11:05:04 my-server sshd[10004]: Failed password for invalid user test from 203.0.113.102 port 33333 ssh2

// Example from counting failed attempts per IP
150 203.0.113.100
120 203.0.113.101
80 203.0.113.102

// Example from counting failed attempts per username
200 root
100 admin
50 support
```

🎯 What to Extract:
Source IP Addresses: Identify the IP addresses from which brute-force attacks are originating.

Targeted Usernames: Determine which usernames are being targeted (e.g., root , admin ,
common default users, or specific employee names).

Frequency and Volume: A high number of failed attempts from a single IP or targeting a single
user within a short period indicates an active attack.

Timestamps: Correlate failed attempts with other security events.

Invalid vs. Valid Users: Differentiate between attempts against invalid usernames
(reconnaissance) and valid usernames (targeted attack).

📍 Where It’s Located: /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (RHEL/CentOS).

🤖 Why It Matters (CISO Lens):


Direct Attack Indicator: Failed SSH attempts are a clear sign of an active attack against the
organization's perimeter.
Credential Compromise Risk: Persistent brute-force attacks increase the risk of credential
compromise, especially if weak passwords are in use.

Threat Intelligence: Provides valuable intelligence on attacker source IPs and targeted
accounts, which can be used to update defensive measures.

Policy Enforcement: Highlights the need for strong password policies, multi-factor
authentication (MFA), and account lockout mechanisms.

🧠 Strategic Usage Contexts:


Automated Blocking: Implement tools like fail2ban to automatically block IP addresses after
a certain number of failed login attempts.

SIEM Alerting: Create SIEM rules to alert security teams when a threshold of failed SSH
attempts is exceeded from a single source or against a single account.

Threat Hunting: Proactively analyze auth.log for patterns of failed attempts that might
indicate a sophisticated or distributed brute-force attack.

Vulnerability Management: Identify and prioritize systems with exposed SSH services that are
frequently targeted.

User Education: Use data on targeted usernames to educate employees about phishing and
social engineering tactics that might lead to credential exposure.

📘 Deep Teaching Section: Failed SSH attempts are a constant background noise on any internet-
facing Linux server. However, distinguishing between benign noise and malicious activity is crucial. A
sudden spike in failed attempts, especially from a single IP or a range of IPs, targeting common
usernames like root or admin , is a strong indicator of a brute-force attack. Attackers often use
dictionaries of common passwords or previously leaked credentials. For a CISO, the strategy is
twofold: prevention and detection. Prevention involves hardening SSH configurations (e.g., disabling
password authentication, using key-based authentication, changing default SSH port, implementing
strong password policies, MFA). Detection involves continuous monitoring of auth.log and
integrating these events into a SIEM for real-time alerting and correlation. The ability to quickly
identify and block these attackers can significantly reduce the risk of unauthorized access and serve
as an early warning for more sophisticated attacks that might follow if initial brute-force attempts are
successful.
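
To make the prevention-plus-detection strategy concrete, the following sketch enables fail2ban's standard sshd jail; the package manager command and thresholds are illustrative assumptions, while the option names are standard fail2ban settings:

```bash
# Install fail2ban and enable an sshd jail (Debian/Ubuntu package name assumed).
sudo apt install fail2ban

# Drop in a local override: ban a source for 1 hour after 5 failures in 10 minutes.
sudo tee /etc/fail2ban/jail.d/sshd.local > /dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd   # verify the jail is active and list banned IPs
```

Since fail2ban watches auth.log itself, it complements rather than replaces the SIEM alerting described above.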

🔧 Tool/Command Name: Suspicious cron jobs ( crontab -l , /var/spool/cron/* )

🎯 What it is: Cron is a time-based job scheduler in Unix-like operating systems. Users can schedule
commands or scripts to run automatically at specified intervals. Attackers often use cron jobs to
establish persistence on a compromised system, ensuring their malicious code runs regularly, even
after reboots or user logouts. Monitoring for suspicious cron jobs is a critical defensive measure.

💻 Terminal Usage (Syntax):


List current user's cron jobs: crontab -l

List root's cron jobs (as root): sudo crontab -l -u root


View system-wide cron jobs: ls -l /etc/cron.* (e.g., /etc/cron.daily , /etc/cron.hourly , /etc/cron.monthly , /etc/cron.weekly )

View individual user cron files: ls -l /var/spool/cron/crontabs/ (Debian/Ubuntu) or /var/spool/cron/ (RHEL/CentOS)

View contents of a specific user's cron file: sudo cat /var/spool/cron/crontabs/username

📤 Output (Labeled Example):
```
// Example from crontab -l
# m h dom mon dow command
0 0 * * * /usr/bin/certbot renew --quiet
@reboot /home/admin/start_my_app.sh
*/5 * * * * /tmp/.systemd-service.sh

// Example from ls -l /var/spool/cron/crontabs/
-rw------- 1 admin crontab 123 Jul 25 10:00 admin
-rw------- 1 root crontab 456 Jul 20 08:00 root
```

🎯 What to Extract:
Unexpected Entries: Any cron jobs that were not intentionally configured by system
administrators.

Unusual Paths: Scripts or commands executed from suspicious directories (e.g., /tmp ,
/dev/shm , hidden directories).

Obfuscated Commands: Commands that are heavily obfuscated or encoded, making their
purpose unclear.

Network Connections: Cron jobs that initiate outbound network connections to unknown or
malicious IP addresses.

Privileged Execution: Cron jobs running as root that perform suspicious actions.

Frequency: Jobs scheduled to run very frequently (e.g., every minute) that are not legitimate
system tasks.

📍 Where It’s Located: User-specific cron jobs are stored in /var/spool/cron/crontabs/ (Debian/Ubuntu) or /var/spool/cron/ (RHEL/CentOS). System-wide cron jobs are defined in /etc/crontab and in files within /etc/cron.d/ , /etc/cron.daily/ , /etc/cron.hourly/ , etc.

🤖 Why It Matters (CISO Lens):


Persistence Mechanism: Cron jobs are a primary method for attackers to maintain access and
execute malicious code persistently on a compromised system.

Stealth: Malicious cron jobs can run in the background without direct user interaction, making
them difficult to detect without active monitoring.

Automated Malicious Activity: Can be used to automate data exfiltration, C2 communication, cryptocurrency mining, or further compromise of the network.

Circumvention of Controls: Can bypass security controls that focus only on interactive
sessions.

🧠 Strategic Usage Contexts:


Compromise Assessment: During an incident, immediately check all possible cron locations
for newly added or modified entries.
Regular Audits: Periodically audit cron jobs on all critical systems, comparing them against a
baseline of known legitimate tasks.

Automated Detection: Develop scripts or SIEM rules to alert on changes to cron files or the
creation of new cron jobs by non-privileged users.

Security Hardening: Implement strict permissions on cron directories and files to prevent
unauthorized modification.

Threat Hunting: Look for cron jobs that execute from unusual locations, use suspicious
commands, or connect to external IPs.

📘 Deep Teaching Section: Cron jobs are a classic and highly effective persistence mechanism for
attackers. Because they are designed to run unattended, they can ensure an attacker's foothold on a
system even after reboots or if the initial exploit is patched. For a CISO, understanding the various
locations where cron jobs can be defined (user crontabs, system crontab, /etc/cron.d/ ,
/etc/cron.hourly/ , etc.) is crucial for comprehensive auditing. Attackers often try to hide their cron
entries by using obscure filenames, placing them in less-monitored directories, or using obfuscated
commands. Therefore, simply looking at crontab -l for the current user is insufficient. A thorough
check involves examining all system-wide and user-specific cron files. Any unexpected entry,
especially one that executes a script from a temporary directory or makes an outbound connection,
should be treated as highly suspicious and investigated immediately. Integrating cron job monitoring
into a SIEM can provide real-time alerts on such changes, significantly reducing the dwell time of an
attacker.
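
That thorough, all-locations check can be scripted. A minimal sketch: it enumerates the standard cron paths discussed above and flags entries referencing commonly abused staging directories (run as root; the flagged patterns and inventory path are illustrative starting points, not a complete detection rule):

```bash
#!/bin/bash
# Sweep every standard cron location and flag suspicious entries.
SPOOL="/var/spool/cron/crontabs"              # Debian/Ubuntu (RHEL: /var/spool/cron)
[ -d "$SPOOL" ] || SPOOL="/var/spool/cron"

{
  cat /etc/crontab /etc/cron.d/* 2>/dev/null
  ls /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly 2>/dev/null
  for f in "$SPOOL"/*; do
    [ -f "$f" ] && echo "=== user: $(basename "$f") ===" && cat "$f"
  done
} | grep -vE '^[[:space:]]*(#|$)' | tee /tmp/cron_inventory.txt |
  grep -E '/tmp/|/dev/shm/|/\.' && echo "[!] Review the flagged entries above."
```

Diffing /tmp/cron_inventory.txt against a stored baseline turns this into the automated change detection described above.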

🔧 Tool/Command Name: Unauthorized users ( /etc/passwd , /etc/shadow checks)

🎯 What it is: The /etc/passwd and /etc/shadow files are critical system files that store user
account information and password hashes, respectively. Unauthorized modifications to these files
can indicate a severe compromise, such as the creation of rogue accounts, alteration of existing user
privileges, or password theft. Regularly auditing these files is fundamental for maintaining user
account security.

💻 Terminal Usage (Syntax):


View /etc/passwd content: cat /etc/passwd

View /etc/shadow content (requires root privileges): sudo cat /etc/shadow

Count users: cat /etc/passwd | wc -l

Check for users with UID 0 (root equivalent): awk -F: '($3 == 0) {print $0}' /etc/passwd

Check for accounts with an empty password hash (empty second field in the shadow file): sudo awk -F: '($2 == "") {print $0}' /etc/shadow

Compare current /etc/passwd with a known good baseline: diff /etc/passwd /path/to/baseline/passwd

📤 Output (Labeled Example):
```
// Example from /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
admin:x:1000:1000:Admin User,,,:/home/admin:/bin/bash
attacker:x:1001:1001:Attacker Account:/home/attacker:/bin/bash

// Example from /etc/shadow (truncated for brevity)
root:$6$randomsalt$hashedpassword:19550:0:99999:7:::
admin:$6$anothersalt$anotherhashedpassword:19550:0:99999:7:::
attacker:!*:19550:0:99999:7:::

// Example output from the UID 0 check
root:x:0:0:root:/root:/bin/bash
attacker:x:0:1001:Attacker Account:/home/attacker:/bin/bash

// Example output from the empty password check
admin::19550:0:99999:7:::
```

🎯 What to Extract:
New or Unexpected Accounts: Any user entries that were not legitimately created.

UID 0 Accounts: Any user other than root with a UID of 0, indicating root-equivalent privileges.

Empty Passwords: Accounts with no password set (represented by an empty field or !! in /etc/shadow ).

Password Hashes: While not directly readable, the presence or absence of a hash, or a changed
hash, indicates password status.

Unusual Shells: Accounts with unusual or non-existent login shells (e.g., /bin/false ,
/usr/sbin/nologin for interactive users).

Modified Existing Accounts: Changes to UIDs, GIDs, home directories, or shells of existing
legitimate users.

📍 Where It’s Located: /etc/passwd and /etc/shadow are located in the /etc/ directory.

🤖 Why It Matters (CISO Lens):


Account Compromise: Unauthorized accounts or modified privileges are a direct indicator of a
successful breach and potential persistence mechanism.

Privilege Escalation: Attackers often create new root-equivalent accounts or modify existing
ones to maintain high-level access.

Compliance: Many regulatory frameworks require strict control over user accounts and their
privileges.

Insider Threat: Can reveal malicious accounts created by insiders or compromised legitimate
accounts.

Audit Trail Integrity: Changes to these files can be an attempt to obscure an attacker's
presence or facilitate future access.

🧠 Strategic Usage Contexts:


Regular Audits: Implement automated scripts to regularly compare /etc/passwd and
/etc/shadow against a known good baseline and alert on any discrepancies.

Incident Response: Immediately check these files for unauthorized changes if a system
compromise is suspected.

User Account Lifecycle Management: Ensure that accounts are properly provisioned and de-
provisioned, and that dormant accounts are locked or removed.
Security Hardening: Implement strict file permissions on /etc/passwd and /etc/shadow to
prevent unauthorized modification.

Threat Hunting: Proactively search for unusual patterns in user creation or modification, such
as accounts created at odd hours or with suspicious names.

📘 Deep Teaching Section: The /etc/passwd and /etc/shadow files are the heart of user
management on Linux systems. /etc/passwd contains basic user information (username, UID, GID,
home directory, shell), while /etc/shadow stores encrypted password hashes and password aging
information. Because /etc/shadow contains sensitive password hashes, it is readable only by root.
For a CISO, these files represent a critical attack surface. An attacker who gains root access will often
modify these files to create a backdoor account with root privileges, or to change the password of an
existing account. Therefore, any unauthorized change to these files is a severe security incident. File
Integrity Monitoring (FIM) tools (like AIDE or Tripwire, discussed earlier) are excellent for detecting
changes to these files. Additionally, regular manual or scripted checks for unexpected UIDs, empty
passwords, or new accounts are essential. The awk commands provided are powerful for quickly
identifying specific anomalies within these files, enabling rapid detection of potential compromises.
Ensuring the integrity and confidentiality of these files is paramount for maintaining the security of
any Linux system.
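
Wrapped into one script, those checks become a schedulable audit. A minimal sketch, assuming a baseline copy of /etc/passwd was saved earlier (the baseline path is an illustrative assumption):

```bash
#!/bin/bash
# Account audit (run as root): flag rogue UID-0 users, empty password hashes,
# and drift from a known-good baseline.
BASELINE="/var/backups/passwd.baseline"

# Any account other than root with UID 0 is a likely backdoor.
awk -F: '($3 == 0 && $1 != "root") { print "[!] UID 0 account: " $1 }' /etc/passwd

# An empty hash field means the account can log in with no password.
awk -F: '($2 == "") { print "[!] Empty password hash: " $1 }' /etc/shadow

# Any diff output means an unreviewed change (or a missing baseline).
diff "$BASELINE" /etc/passwd > /dev/null 2>&1 || echo "[!] /etc/passwd differs from baseline"
```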

🔧 Tool/Command Name: Structure for exporting logs into CSV/JSON for SIEM
🎯 What it is: Security Information and Event Management (SIEM) systems are central platforms for
collecting, analyzing, and correlating security logs from various sources across an organization's IT
infrastructure. To effectively leverage SIEM capabilities, Linux logs need to be exported or forwarded
in a structured format, typically CSV (Comma Separated Values) or JSON (JavaScript Object
Notation). This standardization allows SIEMs to parse, index, and analyze the data efficiently,
enabling real-time threat detection, compliance reporting, and forensic investigations.

💻 Terminal Usage (Syntax):


Exporting auth.log to CSV (basic example):

```bash
# Convert sshd lines in auth.log to CSV. "sshd[1234]:" is a single
# whitespace-delimited field, so the process name and PID are split out of $5.
grep "sshd" /var/log/auth.log | awk '
BEGIN { print "Timestamp,Hostname,Service,PID,Message" }
{ split($5, p, /[\[\]]/); msg = ""
  for (i = 6; i <= NF; i++) msg = msg $i " "
  gsub(/,/, " ", msg)  # strip commas so the message cannot break the CSV
  print $1 " " $2 " " $3 "," $4 "," p[1] "," p[2] "," msg
}' > /tmp/auth_log.csv
```

Exporting journalctl output to JSON: journalctl -o json > /tmp/journal_logs.json

Exporting journalctl output to JSON (pretty print for readability): journalctl -o json-
pretty > /tmp/journal_logs_pretty.json

Example of a more complex awk script for auth.log to CSV (handling different log formats):

```bash
# Parse sshd "Accepted/Failed password" lines into CSV, assuming the standard
# OpenSSH message format. When sshd logs "invalid user", the user/IP/port
# fields shift right by two positions, so the script branches on $9.
awk '
BEGIN { print "Timestamp,Hostname,Process,PID,User,Source_IP,Port,Auth_Method,Status,Message" }
/sshd.*(Accepted|Failed) password/ {
  split($5, p, /[\[\]]/)                       # "sshd[1234]:" -> name, PID
  if ($9 == "invalid") { user = $11; ip = $13; port = $15 }
  else                 { user = $9;  ip = $11; port = $13 }
  msg = ""; for (i = 6; i <= NF; i++) msg = msg $i " "
  gsub(/,/, " ", msg)                          # keep the CSV parseable
  print $1 " " $2 " " $3 "," $4 "," p[1] "," p[2] "," user "," ip "," port ",password," $6 "," msg
}' /var/log/auth.log > /tmp/auth_ssh_events.csv
```

📤 Output (Labeled Example):

```
# Example /tmp/auth_log.csv content
Timestamp,Hostname,Service,PID,Message
Jul 25 10:30:05,my-server,sshd,1234,Accepted password for user admin from 192.168.1.100 port 54321 ssh2
Jul 25 10:31:00,my-server,sshd,1235,Failed password for invalid user guest from 203.0.113.5 port 22 ssh2
```

```json
// Example /tmp/journal_logs.json content (truncated)
{ "_SYSTEMD_UNIT": "sshd.service", "_HOSTNAME": "my-server", "_COMM": "sshd", "_PID": "1234", "MESSAGE": "Accepted password for user admin from 192.168.1.100 port 54321 ssh2", "__REALTIME_TIMESTAMP": "1678886400000000", "__MONOTONIC_TIMESTAMP": "1234567890", "_SOURCE_REALTIME_TIMESTAMP": "1678886400000000" }
{ "_SYSTEMD_UNIT": "sshd.service", "_HOSTNAME": "my-server", "_COMM": "sshd", "_PID": "1235", "MESSAGE": "Failed password for invalid user guest from 203.0.113.5 port 22 ssh2", "__REALTIME_TIMESTAMP": "1678886460000000", "__MONOTONIC_TIMESTAMP": "1234567891", "_SOURCE_REALTIME_TIMESTAMP": "1678886460000000" }
```

🎯 What to Extract:
Key Fields: Identify and extract critical fields such as timestamp, hostname, process name, PID,
user, source IP, destination IP, port, and the actual log message.

Event Type: Categorize events (e.g., login success, login failure, file modification, process
execution).

Contextual Information: Any additional data that provides context to the event (e.g.,
authentication method, command executed).

📍 Where It’s Located: The output files (e.g., /tmp/auth_log.csv , /tmp/journal_logs.json ) are
temporary locations for demonstration. In a real-world scenario, these would be streamed or
transferred to the SIEM system.
🤖 Why It Matters (CISO Lens):
Centralized Visibility: Enables the aggregation of diverse log data into a single platform for
holistic security monitoring.

Correlation and Analytics: Structured data allows SIEMs to correlate events across different
systems and identify complex attack patterns that might be missed by isolated log analysis.

Real-time Alerting: Facilitates the creation of real-time alerts for critical security incidents
based on predefined rules and thresholds.

Compliance Reporting: Simplifies the generation of audit trails and compliance reports
required by various regulatory bodies.

Automated Response: Enables automated responses to detected threats (e.g., blocking malicious IPs, isolating compromised hosts).

🧠 Strategic Usage Contexts:


SIEM Deployment: When deploying a SIEM, prioritize the ingestion of critical Linux logs
(authentication, audit, system) in a structured format.

Log Forwarding Configuration: Implement log forwarding agents (e.g., rsyslog , filebeat ,
nxlog ) on Linux systems to stream logs to the SIEM in real-time.

Custom Parsing: Develop custom parsers within the SIEM for unique or application-specific
Linux logs to ensure all relevant data is ingested and analyzed.

Data Normalization: Work with security engineers to normalize Linux log data within the SIEM,
ensuring consistent field names and values for easier querying and correlation.

Threat Detection Rule Development: Leverage the structured data to create sophisticated
threat detection rules that combine information from multiple log sources (e.g., failed SSH
attempts followed by a process creation from an unusual directory).

📘 Deep Teaching Section: The transition from raw, unstructured log files to structured data formats
like CSV or JSON is a critical step in building an effective SIEM strategy. While manual grep / awk
commands are invaluable for ad-hoc investigations, they are not scalable for enterprise-wide log
management. SIEMs thrive on structured data, as it allows them to efficiently index, search, and
correlate events. journalctl -o json is particularly powerful because systemd journal already
stores logs in a structured, binary format, and this command directly outputs that structure as JSON,
preserving rich metadata. For traditional syslog files, awk and cut become essential for parsing. The
key challenge is to define a consistent schema for the exported data, ensuring that critical fields are
always present and correctly formatted. For a CISO, ensuring that logs from all critical Linux systems
are properly ingested into the SIEM is paramount. This provides the holistic visibility needed to
detect sophisticated attacks, manage vulnerabilities, and demonstrate compliance. It transforms raw
log data into actionable security intelligence, enabling the security team to move from reactive
firefighting to proactive threat detection and response.
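
Where a full forwarding agent is not yet deployed, the journal's JSON output can be trimmed into SIEM-ready NDJSON with jq. A minimal sketch, assuming jq is installed (the unit name is ssh on Debian/Ubuntu and typically sshd on RHEL; the output path is illustrative):

```bash
# Reduce journal JSON to one compact, SIEM-ready JSON object per line.
# The selected field names are standard systemd journal metadata.
journalctl -u ssh -o json --since "1 hour ago" |
  jq -c '{
    ts:      (.__REALTIME_TIMESTAMP | tonumber / 1000000 | floor | todate),
    host:    ._HOSTNAME,
    unit:    ._SYSTEMD_UNIT,
    pid:     ._PID,
    message: .MESSAGE
  }' > /tmp/ssh_events.ndjson
```

Normalizing field names at the edge like this keeps SIEM-side parsing and correlation rules simple and consistent across hosts.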
