Module 3e
Observe and adapt
Dwaine Snow
IBM Global Technical Leader - Cyber Resiliency
and Quantum-safe technology
In the coming years, quantum computers will likely change the face of cybersecurity. Once quantum computers can
reliably factor products of large prime numbers (the basis of much of today's cryptography), existing cyber defense
mechanisms will be rendered obsolete. While today's quantum computers are not yet powerful or stable enough to break
the encryption protecting most data files and systems, everyone expects that they will be in the not-too-distant future.
Updating current systems, solutions, software, and infrastructure will take time, so it is important that everyone start
today to protect their data and systems from the hackers who are stealing encrypted data now in order to break into it
tomorrow.
Visibility
Many people wonder about the difference between monitoring and observability. Monitoring is simply watching a
system; observability means truly understanding a system's state. DevSecOps teams leverage observability to debug
their applications and to troubleshoot the root cause of system issues. Peak visibility is achieved by analyzing the three
pillars of observability: logs, metrics, and traces. In simple terms, observability means inferring the internal state of a
system solely from its external outputs. Translated to software development and modern IT infrastructure, a highly
observable system exposes enough information for operators to have a holistic picture of its health.
Metrics are a numerical representation of data measured over a period of time, often stored in a time-series
database. DevSecOps teams can apply predictions and mathematical modeling to their metrics to understand what
is happening within their systems in the past, the present, and the future.
The numbers within metrics are optimized to be stored for long periods of time and, as a result, can be easily queried.
Many teams build dashboards from their metrics to visualize what is happening with their systems, or use them to
trigger real-time alerts when something goes wrong.
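As a sketch of how metrics feed queries and alerts, the toy store below records timestamped samples, answers range queries, and fires a threshold alert. This is an illustrative assumption, not any product's API; real teams would use a time-series database such as Prometheus.

```python
import time
from collections import defaultdict

class MetricStore:
    """Minimal in-memory time-series store (illustrative sketch only)."""

    def __init__(self):
        # metric name -> list of (timestamp, value) samples
        self._series = defaultdict(list)

    def record(self, name, value, ts=None):
        """Append a sample, defaulting the timestamp to now."""
        self._series[name].append((ts if ts is not None else time.time(), value))

    def query(self, name, since=0.0):
        """Return all values recorded at or after `since`."""
        return [v for t, v in self._series[name] if t >= since]

    def alert_if_above(self, name, threshold):
        """Fire a real-time alert when the latest sample crosses a threshold."""
        points = self._series[name]
        return bool(points) and points[-1][1] > threshold

store = MetricStore()
store.record("cpu_percent", 42.0, ts=1)
store.record("cpu_percent", 97.5, ts=2)
print(store.query("cpu_percent"))              # [42.0, 97.5]
print(store.alert_if_above("cpu_percent", 90))  # True
```

A dashboard would call `query` on an interval; an alerting rule would call `alert_if_above` on every new sample.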
Traces help DevSecOps teams get a picture of how applications are interacting with the resources they consume. Many
teams that use microservices-based architectures rely heavily on distributed tracing to understand when failures or
performance issues occur.
Software engineers sometimes set up request tracing by using instrumentation code to track and troubleshoot certain
behaviors within their application’s code. In distributed software architectures like microservices-based environments,
distributed tracing can follow requests through each isolated module or service.
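A minimal sketch of the idea, assuming a toy in-memory span recorder rather than a real tracing library such as OpenTelemetry: each span carries a shared trace ID and a parent span ID, so a request can be followed across service boundaries.

```python
import uuid
from contextlib import contextmanager

# Collected spans; a real tracer would export these to a backend.
SPANS = []

@contextmanager
def span(name, trace_id=None, parent_id=None):
    """Record a unit of work; child spans inherit the caller's trace_id."""
    record = {
        "trace_id": trace_id or uuid.uuid4().hex,  # new trace if none given
        "span_id": uuid.uuid4().hex,
        "parent_id": parent_id,
        "name": name,
    }
    SPANS.append(record)
    yield record

# A request flowing through two "services" shares one trace_id,
# so a failure in "payment" can be correlated back to "checkout".
with span("checkout") as root:
    with span("payment", trace_id=root["trace_id"], parent_id=root["span_id"]):
        pass

print([s["name"] for s in SPANS])  # ['checkout', 'payment']
```

In a real microservices deployment the trace and parent IDs travel between services in request headers rather than Python scopes.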
Logs are perhaps the most critical and most difficult-to-manage piece of the observability puzzle when you’re using
traditional, one-size-fits-all observability tools. Logs are machine-generated data from the applications, cloud services,
endpoint devices, and network infrastructure that make up modern enterprise Information Technology (IT) environments.
While logs are simple to aggregate, storing and analyzing them using traditional tools like Application Performance
Monitoring (APM) can be a real challenge.
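One reason logs are hard to analyze is that they arrive as free-form machine text, so a common first step is normalizing them into structured records that can be queried. The sketch below assumes a simple `timestamp LEVEL message` line format; the format and field names are illustrative, not a standard.

```python
import json
import re

# Assumed line shape: "<timestamp> <LEVEL> <message>" (illustrative only).
LINE_RE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def parse_line(line):
    """Turn one raw log line into a structured record for querying."""
    m = LINE_RE.match(line)
    if m:
        return m.groupdict()
    # Keep unparseable lines rather than dropping data silently.
    return {"ts": None, "level": "UNKNOWN", "msg": line}

raw = [
    "2024-05-01T12:00:00Z ERROR disk full on /var",
    "2024-05-01T12:00:01Z INFO backup complete",
]
records = [parse_line(line) for line in raw]
errors = [r for r in records if r["level"] == "ERROR"]
print(json.dumps(errors[0]))
```

Once normalized, records can be shipped to a log store and filtered by level, time range, or message content instead of being grepped as raw text.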
Note:
DevSecOps is the practice of integrating security testing at every stage of the software development process. It includes
tools and processes that encourage collaboration between developers, security specialists, and operation teams to build
software that is both efficient and secure.
Data detection and classification are often disconnected from the software development lifecycle. As a result, teams spot
missing encryption of sensitive data too late, usually once it is already in the production environment. Unencrypted
sensitive data can flow through your systems for days, weeks, or even months before the gap is remediated. If a breach
occurs in the meantime, the organization is at risk.
That’s why security teams need to implement and monitor data encryption throughout the software development
lifecycle. By scanning code, organizations can discover and classify data and detect missing security measures
such as data encryption, not only after the application or product has been released to production but while developers
are still coding. This mitigates the risk of a potential data breach and puts everyone’s mind at ease.
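A minimal sketch of such a scan, assuming illustrative regex patterns for sensitive field names and encryption calls; real data-classification tools use far richer detection than these hypothetical patterns.

```python
import re

# Illustrative patterns (assumptions, not a real tool's rule set):
# field names that suggest sensitive data, and calls that suggest
# the value was encrypted or hashed before use.
SENSITIVE = re.compile(r"\b(ssn|password|credit_card)\b", re.IGNORECASE)
ENCRYPTED_HINT = re.compile(r"\b(encrypt|hash|bcrypt)\w*\s*\(", re.IGNORECASE)

def scan(source: str):
    """Flag lines that touch sensitive fields with no sign of encryption."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SENSITIVE.search(line) and not ENCRYPTED_HINT.search(line):
            findings.append((lineno, line.strip()))
    return findings

code = '''
user.password = request.form["password"]
user.password = bcrypt_hash(request.form["password"])
'''
for lineno, line in scan(code):
    print(f"line {lineno}: possible unencrypted sensitive data: {line}")
```

Run as a pre-commit hook or CI step, a check like this surfaces missing encryption while the developer is still in the code, rather than after release.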
The goal is to collect security data from all aspects of the environment for analysts and administrators to manage and
monitor. A continuous security monitoring program starts to take shape when automated alerts and incident prioritization
create a pool of data within these systems.
Without the ability for analysts to make quick decisions based on a tuned, correlated, and orchestrated technology stack
that has been refined against your risk posture, decisions are left open to human interpretation and misinterpretation.
Continuous security monitoring (CSM) systems perform the legwork that enables skilled analysts to search, query, and
hunt through these programs and make educated decisions. A continuous security monitoring program is not a
replacement for a trained analyst, but a tool that helps professionals better perform their role.
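As an illustration of automated incident prioritization, the sketch below correlates raw alerts by asset and ranks assets with a simple risk score; the severity weights and asset criticalities are assumptions for the example, not an industry standard.

```python
# Assumed severity weights (illustrative only).
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 5}

def prioritize(alerts, asset_criticality):
    """Group alerts per asset and score them so analysts triage the
    riskiest assets first. Unknown severities default to low weight."""
    scores = {}
    for alert in alerts:
        asset = alert["asset"]
        weight = SEVERITY_WEIGHT.get(alert["severity"], 1)
        # Scale each alert by how critical the affected asset is.
        scores[asset] = scores.get(asset, 0) + weight * asset_criticality.get(asset, 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

alerts = [
    {"asset": "db01", "severity": "high"},
    {"asset": "web01", "severity": "low"},
    {"asset": "db01", "severity": "medium"},
]
ranked = prioritize(alerts, {"db01": 3, "web01": 1})
print(ranked)  # db01 ranks first
```

The point is the division of labor the text describes: the system does the correlating and scoring, while the analyst decides what the top-ranked incidents actually mean.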
Continuous security monitoring programs are always adjusting and tuning their technology, procedures, and risk posture
to stay as agile and dynamic as possible. Attacks are fluid, and monitoring programs need to be just as polished and
flexible in response.
No one can predict the future
Be ready to adapt