Automated Governance Reference Architecture
Attestation of the Integrity of Assets in the Delivery Pipeline
2019
25 NW 23rd Pl
Suite 6314
Portland, OR 97210
For further information about IT Revolution, these and other publications, special
discounts for bulk book purchases, or for information on booking authors for an
event, please visit our website at ITRevolution.com.
In May of this year, the fifth annual DevOps Enterprise Forum was held in Portland,
Oregon. As always, industry leaders and experts came together to discuss the issues
at the forefront of the DevOps Enterprise community and to put together guidance to
help us overcome and move through those obstacles.
This year, the group took a deeper dive into issues we had just begun to unpack
in previous years, providing step-by-step guidance on how to implement a move
from project to product and how to make DevOps work in large-scale, cyber-physical
systems, and even a more detailed look at conducting Dojos in any organization.
We also approached cultural and process changes like breaking through old change-
management processes and debunking the myth of the full-stack engineer. And of
course, we dived into the continuing question around security in automated pipelines.
As always, this year’s topics strive to address the issues, concerns, and obstacles
that are the most relevant to modern IT organizations across all industries. After all,
every organization is a digital organization.
This year’s Forum papers (along with our archive of papers from years past) are an
essential asset for any organization’s library, fostering the continual learning that is
essential to the success of a DevOps transformation and winning in the marketplace.
A special thanks goes to Jeff Gallimore, our co-host, and partner and co-founder at
Excella, for helping create a structure for the two days and the weeks that followed to
help everyone stay focused and productive. Additional thanks go to this year's Forum
sponsor, XebiaLabs. And most importantly, a huge thank-you to this year's Forum
participants, who contribute their valuable time and expertise and always go above and
beyond to put together these resources for the entire community to share and learn from.
Please read, share, and learn, and you will help guide yourself and your organiza-
tion to success!
—Gene Kim
June 2019
Portland, Oregon
Disclaimer
The purpose of this paper is to introduce a reference architecture for automated
governance. It represents the authors' combined experience of what makes a good start
at creating a DevOps automated governance process. We believe that this reference
architecture will be a useful starting point for many organizations; however, it is
probably not efficient at enterprise scale. It is the authors' expectation and hope
that the DevOps community will engage in ongoing improvement through rigorous
validation and continuous feedback. Furthermore, this paper is not intended to be a
comprehensive discussion of policy in the delivery pipeline. Though some of the
control points described in this paper reference policy, we leave the instrumentation
of that policy up to the reader.
Common Terminology
Throughout this paper, we will use specific terms to describe certain aspects of the
DevOps automated governance reference architecture. Since the IT industry tends to
overload much of this common terminology, we use a common set of definitions for the
purposes of this paper, listed below. These definitions and assumptions are the basis
for this paper and can be modified to comply with each organization's preferred
terminology.
• Delivery Pipeline: This is the set of stages that describes how software flows
from post-ideation to final production delivery. We use this phrase for all
references related to industry terms, including but not limited to application
release orchestration (ARO), continuous delivery and release automation, continuous
integration and continuous delivery, orchestration pipeline, release coordination,
and release management.
(Figure 1: A general reference delivery pipeline, including stages such as dependency management and the artifact repository.)
In May of 2018, Capital One published the blog post "Focusing on the DevOps Pipeline,"
explaining what it means to "deliver high quality working software faster."4 The post
describes Capital One's pipeline policy5 and introduces the concept of "gates," or
guiding design principles, later described as "control points."6 The design ensures
that every time software is pushed through the pipeline, evidence is captured at these
control points.
Control points are a form of both metadata and evidence for actions taken during
the development, production, and promotion processes. These control points should
be defined at every phase of continuous integration and preserved in build logs that
record how an artifact was built. Ultimately, this kind of automated pipeline
metadata in the form of control points allows organizations to move to a decentralized
form of decision-making, thus moving away from centralized forms commonly used in
most enterprises.
DevOps practices increase the tempo of software delivery. This creates tension with
governance programs that rely on the manual review of artifacts and documents.
A single control may record more than one attestation. These attestations begin as
ordinary tool outputs but need to be collected in a way that allows auditors to later
verify their origin and integrity. (An example of a mechanism to collect such
attestations is the open-source tool Grafeas, discussed in the next section.)
Reference Architecture
Our reference architecture maps a delivery pipeline to specific controls that will pro-
duce evidence for collection. This reference architecture offers a starting point for a
DevOps automated governance process. Implementers will add to this architecture
and adapt it to their particular toolset and delivery pipeline.
In 2017, Google introduced an open-source initiative called Grafeas (the Greek word
for "scribe") to help organizations define a uniform way to audit and govern a modern
software supply chain.
(Figure: An attestation is protected with a hash or MAC and written by a recorder as an immutable record.)
Hashing and message authentication codes provide tamper resistance to the attes-
tation. This attestation can be recorded immutably. A series of attestations can be
connected to reconstruct the flow of a change through the delivery pipeline in much
the same way a call tree can be reconstructed from individual spans.
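As a minimal illustration (in Python, with hypothetical field names and a placeholder recorder key), the following sketch shows one way an attestation could be made tamper resistant with an HMAC and chained to the previous attestation so that the flow of a change can later be reconstructed:

import hashlib
import hmac
import json

# Hypothetical shared secret held by the attestation recorder.
RECORDER_KEY = b"replace-with-a-managed-secret"

def attest(payload: dict, previous_mac: str) -> dict:
    """Wrap a tool output in a tamper-resistant attestation.

    The MAC covers both the payload and the previous attestation's MAC,
    so a series of attestations can be replayed to reconstruct the flow
    of a change through the delivery pipeline.
    """
    body = json.dumps({"payload": payload, "previous": previous_mac}, sort_keys=True)
    mac = hmac.new(RECORDER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "previous": previous_mac, "mac": mac}

# Example: chain a build attestation onto a peer-review attestation.
review = attest({"stage": "source", "control": "peer_review", "pr": 1}, previous_mac="")
build = attest({"stage": "build", "control": "unit_test", "passed": True}, review["mac"])
print(json.dumps(build, indent=2))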
The Model
The model first describes a typical software delivery pipeline. For each stage the model
identifies a set of inputs, outputs, actors, and the actions that can occur at that stage.
(Figure: The generic stage model: each stage has inputs (I/P) and outputs (O/P), actors, actions, risks, and controls.)
To avoid repetition in every stage, the model factors out a set of common actors
and controls that appear in all stages.
Common controls:
• access control
• audit trail/log
• source control
• usage policies
Common Actors:
Currently, software is delivered via an automated and controlled process called a “pipe-
line.” (Figure 1, which you can find earlier in this paper, depicts a general reference for
a pipeline.) A typical pipeline is a set of pre-composed stages that integrate with many
tools and platforms to automatically send tested changes to production. The pipeline
is the heart of an end-to-end delivery life cycle.
Typical pipeline stages and related artifacts are listed below:
• Source code repository: A version control tool that hosts all assets related to an
application’s software and services. Organizations which are mature in DevOps
practices use version control for application, infrastructure (as code), tests, and
all configurations. (Note that these may not all reside in a single source tree.
Application code and production configuration are typically separated according
to access rights.) Every change in any code is version controlled. Typically, Git is
used to manage this repository.
• Build: In this stage, source code is compiled (when a compiled language is
used), unit-tested, scanned, and linted for quality and security.
• Dependency management: This stage is where external libraries and/or base
images (e.g., virtual machines or containers) are stored and from which they
are consumed internally. This is the entry point for outside code.
• Package: In this stage, the deployable artifact is composed from source code
and external dependencies. The artifact may be an archive file, a virtual machine
image, or a container image. The resulting package is uploaded to a binary arti-
fact repository.
• Artifact repository: This is a version control tool that hosts all packaged arti-
facts produced via the build and packaging stages. Artifacts in this repository
should be immutable.
• Non-prod deploy: In this stage the artifact is deployed to one or more
non-production environments where various tests are applied. There can be
one or more of these stages depending upon the testing needs.
• Prod deploy: This is the final stage of a typical pipeline, where the tested and
approved artifact is finally deployed to the production environment.
Stage: Source Code Repository

Inputs: 1. Request for change
Outputs: 1. New version
Actors: 1. Code author; 2. Code reviewer
Actions: 1. Commit; 2. Change request (pull request, merge request); 3. Review, merge
Risks: 1. Unapproved changes; 2. Untested changes; 3. Unapproved 3rd-party dependency; 4. Information (secrets) leakage; 5. Low-quality code sent to production
Controls: 1. Peer review; 2. Unit test coverage; 3. Clean dependency; 4. Scan for sensitive information; 5. Static code analysis/linting
The primary actors behind the automated tools are the code author and the reviewer(s).
The inputs, outputs, actors, actions, risks, and controls for this pipeline stage are described below.
Input
• Request for change: A change in source code is initiated by a request for
change; a request for change can be a new feature request, a bug fix request, or
a request for refactoring or redesign.
Output
• New version: A new version of the code base. In Git terms, it is the new SHA.
Actors
• Code author: The person who is making the actual code change.
• Code reviewer: The person reviewing code changes.
• Repository admin: Also known as the owner(s) of the code repository, this
person is also responsible for merging the code change to the main code branch.
Actions
• Code commit: The actual code change pushed to a temporary place (such as a
fork or a branch).
• Code review: Peer reviewing code changes.
• Change request: A request to merge the changed code to the main branch. In
GitHub, this is called a pull request.
Risks
• Unapproved changes: Unapproved and unreviewed changes may cause
degraded and/or unwanted behavior of the service.
• Untested changes: Code changes that are not tested may cause degraded and/
or unwanted behavior of the service.
• Unapproved third-party dependency: Unapproved dependencies can intro-
duce legal and security vulnerabilities in the software.
Controls
We consider a source control scheme that involves a single master branch and the use
of feature branches to isolate and control code contributions. In order to merge the
changes captured by a branch into master, it is typical to require that the code passes
a number of checks.
• Peer-based code review (e.g., via GitHub Pull Requests) has been shown to
have the greatest impact on code quality.
• Unit test coverage (e.g., via SonarQube) is typically tracked, as untested func-
tionality tees up significant risk when refactoring, optimizing, or altering API
usage.
• Clean dependencies (e.g., via Sonatype Nexus) refers to a check that open-
source dependencies satisfy enterprise-level licensing guidelines and are free
of known vulnerabilities.
• Information leakage analysis (e.g., via GitHub pre-commit hooks to grep
for sensitive tokens) checks that passwords, access tokens, and other types of
sensitive information are not being checked into a repository. (A minimal sketch
of such a check follows this list.)
• Static code analysis (e.g., via MuseDev) involves statically scanning for per-
formance, reliability, and security issues as part of the merge decision for new
code.
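As a minimal sketch of the information-leakage control above (the patterns are hypothetical; a real pre-commit hook would typically call a dedicated secret scanner), a check of staged files might look like this:

import re
import sys

# Hypothetical patterns; real scanners ship much larger curated rule sets.
SENSITIVE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # private key material
    re.compile(r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan(paths):
    """Return (path, rule) pairs for any file that matches a sensitive pattern."""
    findings = []
    for path in paths:
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for pattern in SENSITIVE_PATTERNS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    return findings

if __name__ == "__main__":
    hits = scan(sys.argv[1:])           # a pre-commit hook passes the staged file names
    for path, rule in hits:
        print(f"possible secret in {path}: {rule}")
    sys.exit(1 if hits else 0)          # a non-zero exit blocks the commit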
Stage: Build
Inputs: 1. New version of code; 2. Dependencies; 3. Build definition
Outputs: 1. Artifacts; 2. Build log; 3. BOM (bill of materials)
Actors: 1. System
Actions: 1. Build
Input
• New version of code: Software is rebuilt on demand or automatically when
the source code version is changed.
• Dependencies: In many situations, when dependency versions change, soft-
ware needs to be rebuilt.
• Build definition: Build definition, also known as build script, contains the
codified build steps.
Output
• Artifacts: The main components of the deployment package that are sent to
production.
Actions
• Build: The only action in this stage is execution of the build definition.
Risks
• Inaccurate, unapproved build configuration: An inaccurate, unapproved
build configuration may produce incorrect build artifacts.
• Missing, modified, inconsistent build information: The build artifact
might not be traceable, authentic, or reproducible.
• Unapproved third-party dependency: Inclusion of unapproved third-party
dependencies in the build stage may result in legal and security vulnerabilities.
• Build output is untested: The build output, when deployed to production,
may not function as expected.
• Build output has security vulnerability: The build output may contain a
security vulnerability.
Controls
• Build configuration in source control and peer reviewed: As a basic
DevOps practice of having “everything as code,” build configurations should be
source controlled and peer reviewed just as the application code.
• Immutable build and build output: To ensure that a build cannot be modified
after the fact, every build and the output of the build should be immutable.
If any build fails for some reason or the build output is unreliable, a fresh build
should be initiated. (A sketch of recording the output digest as evidence follows
this list.)
• Upstream approved dependency management system: To ensure that
every dependency downloaded is approved for use, the build system should be
restricted to use only an approved dependency management system.
• Unit test: Every build should include unit test execution and should complete
successfully only if the unit test pass rate and coverage meet predefined criteria.
• Linting: Every build should scan the source code for code quality and should
complete successfully only if the analysis result meets predefined criteria.
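A minimal sketch of the immutable, traceable build output controls above, assuming hypothetical file paths: the build fingerprints its artifact and records a simple bill of materials as evidence.

import hashlib
import json
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_build(artifact: str, dependencies: list[str], out: str = "build-metadata.json"):
    """Capture the artifact digest and a simple BOM as build evidence."""
    metadata = {
        "artifact": artifact,
        "artifact_sha256": sha256_of(pathlib.Path(artifact)),
        "bom": [
            {"dependency": dep, "sha256": sha256_of(pathlib.Path(dep))}
            for dep in dependencies
        ],
    }
    pathlib.Path(out).write_text(json.dumps(metadata, indent=2))
    return metadata

# Hypothetical paths; in a real build these come from the build and packaging steps.
# record_build("dist/service.tar.gz", ["libs/requests.whl", "libs/urllib3.whl"])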
Stage: Dependency Management

Inputs: 1. External artifact; 2. Internal shared artifact; 3. Enterprise usage policy
Outputs: 1. Artifact
Actors: 1. Legal; 2. Security; 3. Architects; 4. Developer/engineer
Actions: 1. Legal scan; 2. Security scan; 3. Manage usage policy
Risks: 1. Unknown and potentially vulnerable dependencies are being used; 2. Dependencies may not have proper licensing; 3. Dependencies may have security vulnerabilities; 4. Dependencies may be low quality; 5. Unapproved versions of dependencies are being used
Controls: 1. Download only from approved external sources; 2. License check; 3. Security check; 4. Library quality check (age, community); 5. Approved versions
Output
• Artifact: Artifact that was requested by the build system.
Actors
• Legal: Enterprise legal teams create policies based on legal requirements.
• Information security: Security teams create policies based on security
requirements.
• Architects: Architects create policies based on architectural requirements that
include the health of the dependencies (e.g., age, popularity, activity status, etc.).
• Developers/engineers: Developers and engineers are the consumers and cre-
ators of dependencies.
• System: Systems, such as build systems, download dependencies.
Actions
• Legal scan: Dependencies are scanned for legal vulnerabilities.
• Security scan: Dependencies are scanned for security vulnerabilities.
• Manage usage policies: Legal, security, and architecture teams create depen-
dency usage policies.
Risks
• Unknown and potentially vulnerable dependencies are in use: One of the
biggest risks in software development is the risk of unknowns. This includes
the risk of using unknown dependencies that can cause damage in many forms.
Controls
• Download only from approved external sources: Every enterprise should
create a list of trusted sources of their dependency needs.
• License check: Dependencies that are downloadable from the dependency
management system should have licenses that satisfy enterprise legal require-
ments and usage policies.
• Security check: Dependencies that are downloadable from the dependency
management system should meet enterprise security requirements and satisfy
usage policies.
• Dependency quality check: Dependencies that are downloadable from the
dependency management system should meet architecture standards and
should satisfy usage policies.
• Approved versions: Only approved versions of dependencies are made
available. (A minimal policy-check sketch follows this list.)
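A minimal policy-check sketch, using hypothetical policy data; in practice the dependency management system itself would enforce these usage policies:

# Hypothetical usage policy assembled by legal, security, and architecture teams.
APPROVED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}
APPROVED_VERSIONS = {"requests": {"2.31.0", "2.32.3"}}
KNOWN_VULNERABLE = {("requests", "2.19.0")}          # illustrative entry only

def evaluate(name: str, version: str, license_id: str) -> list[str]:
    """Return the list of policy violations for a single dependency."""
    violations = []
    if license_id not in APPROVED_LICENSES:
        violations.append(f"license {license_id} is not approved")
    if version not in APPROVED_VERSIONS.get(name, set()):
        violations.append(f"version {version} of {name} is not approved")
    if (name, version) in KNOWN_VULNERABLE:
        violations.append(f"{name} {version} has known vulnerabilities")
    return violations

print(evaluate("requests", "2.19.0", "Apache-2.0"))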
Stage: Package
Inputs: 1. Artifacts; 2. Dependency; 3. Configuration; 4. Runtime
Outputs: 1. Artifact
Actors: 1. Engineers; 2. System (automation)
Actions: 1. Package; 2. Scan; 3. Upload to artifact repo
Input
• Artifacts: The build artifacts that need to be packaged.
• Dependency: Dependencies that are not packaged in build artifacts
• Configuration: Configurations that are required to run the software in an
environment
• Runtime: Any runtime that should be packaged for deployment. This may
include base images, virtual machines, etc.
Actors
• Engineers: Developers, operations admin, and system admin who contribute
to the packaging process.
• System: The automated way in which the packaging step is executed
Actions
• Package: The automated process that creates a deployable artifact.
• Scan: The process to scan the deployable artifact to detect legal and security
vulnerabilities.
Risks
• Unapproved, potentially vulnerable third-party dependencies are pack-
aged in the deployable artifact: Third-party dependencies downloaded during
the packaging process may contain vulnerabilities that cause legal and security
issues.
• Components with vulnerabilities are packaged in the deployable arti-
fact: Vulnerabilities in internal components produced by the build may cause
security issues.
• Software configuration contains vulnerabilities: Even though the actual
software may not have vulnerabilities, configurations can contain data that do
not meet security standards. These may cause security issues.
• Untraceable software changes: Packaged artifacts containing changes that
cannot be traced back to source code or approved dependencies may cause
unpredictable behavior in the software.
• Unreliable metadata: Unreliable or missing artifact metadata may cause con-
fusion and at times can cause incorrect software to be deployed in production.
Controls
• Packaging only from trusted dependency sources: Packaging system
should download dependencies only from trusted dependency sources.
Stage: Artifact Repository

Inputs: 1. Artifact; 2. Metadata; 3. Usage policy
Outputs: 1. Artifact
Actors: 1. Engineers; 2. System (automation)
Actions: 1. Upload; 2. Download
Risks: 1. Untrusted packaging source; 2. Artifact modified after packaging and before production deploy; 3. Loss of previously deployed software (needed for legal and audit purposes)
Controls: 1. Only allow upload from trusted packaging source; 2. Immutable artifact; 3. Retention policy
Output
• Artifact: The artifact downloaded by the deployment system.
Actors
• Engineers: Developers, operations admin, and system admin who contribute
to the packaging process.
• System: The automated way in which the packaging step is executed.
Actions
• Upload: Uploading of artifacts by packaging system.
• Download: Downloading of artifacts by deployment system.
Risks
• Untrusted packaging store: An unknown or untrusted packaging store can
upload vulnerable and/or unapproved artifacts.
• Artifact modified after packaging and before deployment: If an artifact
can be modified before deployment, there will be no assurance of the integrity
of what will be deployed.
• Loss of previously deployed artifact: Most enterprises need to archive
older versions of software to meet legal and regulatory requirements
Controls
• Only allow upload from trusted packaging source: Configure the artifact
repository to accept upload requests only from known and trusted packaging
sources. Many enterprises restrict individual users from uploading to the arti-
fact repository.
• Immutable artifact: No artifact in the repository can be overwritten; only a
newer version of the same artifact can be uploaded.
Stage: Non-Production Deploy

Inputs: 1. Artifact; 2. Environment config; 3. Test data; 4. Test config; 5. Executable tests
Outputs: 1. Trusted release candidate
Actors: 1. Engineers; 2. Product owners; 3. Business; 4. Security
Actions: 1. Run tests
Risks: 1. Deployment of software from untrusted source; 2. Non-production systems do not have approved network configuration; 3. Non-production systems have real production data; 4. Promotion to non-production systems did not have quality gates
Controls: 1. Only from trusted source (artifact repo/packaging); 2. Whitelist of allowed connectivity; 3. Whitelist of allowed data (e.g., no PII); 4. Evaluation of testing/promotion/quality gates
Input
• Artifact: The artifact that will be deployed.
• Environment configuration: Any environment configuration that was not
or cannot be packaged.
Output
• Trusted release candidate: A successful completion of this stage produces a
release candidate that can be trusted, provided all controls in previous stages
were in place and followed.
Actors
• Engineers: The developers, operations admin, and system admin who contrib-
ute to the packaging process.
• System: The system executes the packaging step in an automated way.
• Product owners, business partners: Product owners involved in testing
functionalities in the non-production environment.
• Information security: The information security team may run security-related
tests in the non-production environment.
Actions
• Deployment: The process that deploys an artifact.
• Run tests: Various types of tests executed in the non-production environment
Risks
• Deployment of artifact from untrusted sources: There is the risk of test-
ing the wrong software before producing a release candidate.
• Non-production systems with unapproved network configuration:
With an unapproved network configuration, there is the risk of executing tests
with unpredictable results or untrusted results.
• Non-production systems with production data: In many enterprises,
non-production systems should never have production data due to legal and
privacy risks.
Controls
• Fetch artifact only from trusted source: The non-production deployment
stage should fetch deployable artifacts only from trusted sources (such as the
enterprise’s artifact repository).
• Whitelist of allowed connectivity: The connections allowed in the non-
production deployment stage should be reviewed and kept up to date. Connections
that are not pre-approved on the whitelist should not be allowed.
• Whitelist of allowed data: The non-production deployment stage should
have access to only a set of whitelisted test data. This data should not contain
any real customer data or sensitive information.
• Quality gate evaluation: Non-production deployment should be executed
only if it meets a set of predefined criteria (e.g., 100% test pass rate with 80%
coverage, no new high-severity security vulnerabilities, etc.). The quality gate
should also account for drift and differences between the production and non-
production environments; the non-production environment should mimic the
production environment. (A sketch of such a gate follows this list.)
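A minimal sketch of such a quality gate, assuming hypothetical result fields gathered from the test and scan tools:

def quality_gate(results: dict) -> tuple[bool, list[str]]:
    """Evaluate predefined promotion criteria against collected test results."""
    failures = []
    if results["test_pass_rate"] < 1.0:
        failures.append("test pass rate below 100%")
    if results["coverage"] < 0.80:
        failures.append("coverage below 80%")
    if results["new_high_severity_vulns"] > 0:
        failures.append("new high severity vulnerabilities found")
    return (not failures, failures)

# Hypothetical results gathered from the non-production test runs.
ok, reasons = quality_gate(
    {"test_pass_rate": 1.0, "coverage": 0.85, "new_high_severity_vulns": 0}
)
print("promote" if ok else f"blocked: {reasons}")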
Stage: Production Deploy

Inputs: 1. Artifacts; 2. Environment config; 3. Rules for exposure and progression (aka deployment strategy)
Outputs: 1. Service availability
Actors: 1. Engineering; 2. Product owner; 3. Business; 4. Security; 5. Customer/users
Actions: 1. Service use
Input
• Artifact: The artifact that will be deployed.
• Environment configuration: Any environment configuration that was not
or cannot be packaged.
Output
• Service availability: Availability of service with expected behavior.
Actors
• Engineers: The developers, operations admin, and system admin who contrib-
ute to the deployment process.
• System: The system executes the deployment stage in an automated way.
• Product owners, business partners: The product owners involved in making
decisions about production release readiness and in testing functionalities in
the production environment.
• Information security: The information security team runs security-related
checks in the production environment.
• Customers/users: Those who use the service.
Actions
• Production deployment: Execution of the deployment process.
Risks
• Deployment from untrusted sources.
• Production systems have unapproved configuration.
• Production systems lack vulnerability detection mechanism.
• Low quality software deployed to production.
• Lack of ability to detect and resolve production issues.
• Unauthorized changes to production systems.
• Unauthorized access to production systems.
• Lack of strategy around production system changes causing unexpected
behavior.
Controls
• Fetch artifact only from trusted source: The production deployment stage
should fetch deployable artifacts from only trusted sources (such as the artifact
repository).
Recording Attestations
In Grafeas, an attestation is recorded as a note (the definition of an attestation, owned by the attesting authority) and one or more occurrences (instances of that attestation attached to a specific resource, such as a commit or an image).
Note Example:
{
  "name": "example_project/peer_review/note",
  "shortDescription": "Approved commit record with documented approver.",
  "attestationAuthority": {
    "hint": {
      "humanReadableName": "github"
    }
  }
}
Occurrence Example:
{
  "resourceUrl": "${RESOURCE_URL}",
  "noteName": "example_project/peer_review/note",
  "attestation": {
    "peer_review": {
      "repo": {"id": XXXXXXXXX},
      "pull_request": {
        "url": "https://api.github.com/repos/.../pulls/1"
      }
    }
  }
}
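A minimal sketch of recording this occurrence against a Grafeas server, assuming a hypothetical server URL; the exact path and payload shape should be checked against the Grafeas API version in use:

import requests

GRAFEAS_URL = "http://grafeas.example.internal:8080"  # hypothetical endpoint
PROJECT = "projects/example_project"

def record_occurrence(occurrence: dict) -> dict:
    """POST an attestation occurrence to the Grafeas occurrences API."""
    response = requests.post(
        f"{GRAFEAS_URL}/v1beta1/{PROJECT}/occurrences",
        json=occurrence,
        timeout=10,
    )
    response.raise_for_status()
    return response.json()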
Events as a Backbone

(Figure: Control points such as testing (performance, E2E, etc.) and scanning tools (Checkmarx, BlackDuck, etc.) emit events that are consumed by stream processors.)
Now that we have the ability to capture events, store attestations, and read, enforce,
and report on attestations for a given release, we need to identify the control
sources. Table 1 provides an example list of control sources; the specific sources
will vary across companies, solutions, and providers, so your enterprise might use
different ones. A sketch of one such webhook integration follows the table.
Table 1: Example Control Sources

Stage | Control | Control Source | Integration | Elements
Source Code Repo | Pull Request | GitHub | Webhook | pull_request, repository
Source Code Repo | Peer Review | GitHub | Webhook | actor, pull_request, repository
Source Code Repo | Static Code Analysis | Muse | Webhook | pull_request, repository
Build | Upstream Approved Dependency | Artifactory | Jenkins | TBD
Build | Static Security Analysis | Checkmarx | Jenkins | TBD
Package | Trusted Dependency Store | Artifactory | Jenkins | TBD
Non-Prod Deploy | Whitelist Connectivity | Istio | Jenkins | TBD
Non-Prod Deploy | Quality Gates | JMeter, Karate, WebDriver | Jenkins | TBD
Production Deploy | Trusted Configurations | GitHub | Jenkins | TBD
Production Deploy | Monitoring & Alerting | Elastic, PagerDuty | Jenkins | TBD
Production Deploy | Change Management | ServiceNow | Jenkins | TBD
Production Deploy | Production Access Control | Vault | Jenkins | TBD
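As a sketch of the webhook rows in Table 1 (assuming Flask, the hypothetical stub below, and field names taken from GitHub's pull_request event payload), a small service could turn each pull request event into a peer-review occurrence:

import json
from flask import Flask, request, jsonify

app = Flask(__name__)

def record_occurrence(occurrence: dict) -> None:
    # Stand-in for the Grafeas call sketched earlier in this paper.
    print(json.dumps(occurrence, indent=2))

@app.route("/webhooks/github", methods=["POST"])
def github_webhook():
    """Turn a GitHub pull_request event into a peer-review attestation occurrence."""
    event = request.get_json(force=True)
    pr = event.get("pull_request", {})
    occurrence = {
        "resourceUrl": pr.get("head", {}).get("sha", ""),
        "noteName": "example_project/peer_review/note",
        "attestation": {
            "peer_review": {
                "repo": {"id": event.get("repository", {}).get("id")},
                "actor": event.get("sender", {}).get("login"),
                "pull_request": {"url": pr.get("url")},
            }
        },
    }
    record_occurrence(occurrence)
    return jsonify({"status": "recorded"}), 201

if __name__ == "__main__":
    app.run(port=8000)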
Grafeas can be used to query and report back governance status. The metadata store
should support querying attestations, allowing developers, product owners, and
organizational leadership to shift left in identifying governance gaps during
development. Most importantly, this data can provide traceability for risk, audit,
and compliance partners.
Enforcement will utilize Kubernetes with a webhook that calls the Grafeas API to
retrieve the required occurrences of attestations after the production deploy. Although
the example suggests an ability for continuous release, the same model can also assert
a scheduled change approval through a digital signature in a change control system to
include manual releases as well.
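A minimal sketch of such enforcement, assuming a hypothetical Grafeas endpoint, hypothetical required note names, and that the container image reference is used as the occurrence's resource URL: a validating admission webhook admits a deployment only if every image carries the required attestations.

import requests

GRAFEAS_URL = "http://grafeas.example.internal:8080"  # hypothetical endpoint
PROJECT = "projects/example_project"
REQUIRED_NOTES = {
    "example_project/peer_review/note",
    "example_project/quality_gate/note",   # hypothetical additional note
}

def attested(image: str) -> bool:
    """Check that every required attestation occurrence exists for this image."""
    response = requests.get(
        f"{GRAFEAS_URL}/v1beta1/{PROJECT}/occurrences",
        params={"filter": f'resourceUrl="{image}"'},
        timeout=10,
    )
    response.raise_for_status()
    found = {o.get("noteName") for o in response.json().get("occurrences", [])}
    return REQUIRED_NOTES.issubset(found)

def admission_review(request_body: dict) -> dict:
    """Body of a validating admission webhook: admit only fully attested images."""
    uid = request_body["request"]["uid"]
    containers = request_body["request"]["object"]["spec"]["containers"]
    allowed = all(attested(c["image"]) for c in containers)
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": uid, "allowed": allowed},
    }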
To satisfy conditions 1 and 3, we will use a red or green light in a web UI. To sat-
isfy condition 2, we will use private keys to sign each step. To satisfy condition 4, we
will use a hashing algorithm that maintains separation of duties by copying data to a
database that the auditors control.
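A minimal sketch of satisfying condition 2, assuming the Python cryptography package: each gate signs a digest of its evidence with its private key, and the monitoring system verifies it with the matching public key.

import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# In practice each gate would load its key from a secrets manager, not generate it here.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_step(evidence: bytes) -> tuple[bytes, bytes]:
    """Hash the step's evidence and sign the digest with the gate's private key."""
    digest = hashlib.sha256(evidence).digest()
    return digest, private_key.sign(digest)

def verify_step(digest: bytes, signature: bytes) -> bool:
    """Monitoring-side check with the gate's public key."""
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

digest, signature = sign_step(b"unit tests passed: 412/412, coverage 87%")
print(verify_step(digest, signature))   # True -> green light for this step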
Architecture
To store information, we will use three separate servers. The first is a system of record
that contains transaction data. This includes controls, actors, actions, and output. The
second is a system of record that contains private and public keys, steps, and a final
hash. Finally, the third is a monitoring system that displays a red or green light by the
build ID. This system is illustrated below in Figure 12.
(Figure 12: CI/CD gates produce logs and data, sign them with a private key, and emit a final hash that is compared against the public key.)
A record in the transaction system of record might look like the following:
{
  "buildID": "number",
  "buildDesc": "string",
  "transactionID": "string",
  "transaction": [
    {
      "type": "control, actor, output",
      "description": "string",
      "parentHash": "string",
      "currentHash": "string",
      "timestamp": "string",
      "pass": "boolean"
    },
    {
      "type": "control, actor, output",
      "description": "string",
      "parentHash": "string",
      "currentHash": "string",
      "timestamp": "string",
      "pass": "boolean"
    }
  ]
}
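A minimal auditor-side sketch of walking such a record, assuming (for illustration only) that currentHash is the SHA-256 of the entry's own fields plus its parentHash; the real inputs must match whatever the pipeline actually hashes:

import hashlib
import json

def expected_hash(entry: dict) -> str:
    """Recompute an entry's hash from its fields and its parent's hash (assumed scheme)."""
    material = json.dumps(
        {
            "type": entry["type"],
            "description": entry["description"],
            "timestamp": entry["timestamp"],
            "pass": entry["pass"],
            "parentHash": entry["parentHash"],
        },
        sort_keys=True,
    )
    return hashlib.sha256(material.encode()).hexdigest()

def verify_chain(record: dict) -> bool:
    """Check that each transaction entry links to the previous one and hashes cleanly."""
    previous = ""
    for entry in record["transaction"]:
        if entry["parentHash"] != previous:
            return False
        if entry["currentHash"] != expected_hash(entry):
            return False
        previous = entry["currentHash"]
    return True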
Auditors would own the auditing database. This database would be used to verify
DevOps pipeline compliance with agreed upon standards. Two sample databases are
shown in Figure 13.
The final piece in this proposed architecture is the monitoring system. The moni-
toring system uses two colors to signify authenticity of audit logs. A green circle means
the keys align and auditors can trust the output of the CI/CD pipeline JSON database.
A red circle means auditors cannot trust the output of one or more build steps. The
auditor can verify each step, if required. At the same time, this verification system
gives auditors a quick mechanism to scan for compliance. This monitoring system
would indicate:
• The overall trustworthiness of all pipelines (green would mean all keys align).
• The overall trustworthiness of a full build (green would mean build keys align).
• The trustworthiness of a stage or step (green would mean a stage or step build
keys align).
A hashing algorithm is used in our example architecture to verify and maintain the
audit log state. A SHA-256 hash, signed with a digital key, is associated with each
control point to maintain security. The hashing algorithm takes multiple inputs and
signs them with a digital key to create a signed hash of the output, which can be
quickly verified via a public key on the monitoring server. The generated hash is
inserted as data into the next hashing step. This is repeated for each stage in the
software life cycle. Once complete, the final hash is inserted into the auditing
database. This process is illustrated in Figure 14.
(Figure 14: Each stage's hash is fed into the next, producing a final hash.)
Verifying the final hash, or working backward through the chain, quickly reveals
whether the audit logs have been tampered with. A sample of entries, such as those
in the sketch below, could be used to create a hash.
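A minimal sketch, with hypothetical entries standing in for real control-point evidence, showing how each stage's hash feeds into the next to produce the final hash:

import hashlib

# Hypothetical control-point entries; real entries would come from pipeline evidence.
entries = [
    "source: pull request #1 approved by reviewer",
    "build: unit tests passed, coverage 87%",
    "package: artifact sha256 recorded in repository",
    "prod deploy: change request approved in change control system",
]

previous_hash = ""
for entry in entries:
    # Each stage hashes its own entry together with the previous stage's hash.
    previous_hash = hashlib.sha256((previous_hash + entry).encode()).hexdigest()

final_hash = previous_hash   # inserted into the auditing database
print(final_hash)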
With the advent of DevOps practices, more and more of the delivery pipeline is being
automated and decentralized. However, with these new automated and decentral-
ized models, organizations need to ensure common validation and trust mechanisms
throughout the continuous update process. In other words, an optimized process cre-
ates a signed output to authenticate the software development. Approved signatures
would be part of the automated pipeline process. This would give an organization
assurances that the automated continuous updates are certified by a known authority.
The process aims to create trust in an organization’s delivery pipeline.
This paper addresses the delivery pipeline. While this is a critical step in the overall
software supply chain, there is much room left to extend automated governance. We
regard this paper as the minimum viable product of automated governance. We look
forward to your feedback and to future work extending these techniques across orga-
nizational boundaries.
References
Freund, Jack, and Jack Jones. Measuring and Managing Information Risk: A FAIR Approach. Oxford: Butterworth-Heinemann, 2015.
Pal, Tapabrata. "Focusing on the DevOps Pipeline." Medium.com, May 16, 2018. https://medium.com/capital-one-tech/focusing-on-the-devops-pipeline-topo-pal-833d15edf0bd.
Simon, Fred, Yoav Landman, and Baruch Sadogursky. Liquid Software: How to Achieve Trusted Continuous Updates in the DevOps World. Sunnyvale: JFrog Ltd., 2018.
Sonatype. "2019 State of the Software Supply Chain." Fulton, MD: Sonatype, 2019. https://www.sonatype.com/hubfs/SSC/2019%20SSC/SON_SSSC-Report-2019_jun16-DRAFT.pdf.
We would like to thank all of our attendees and our friends at XebiaLabs
for helping to make this year’s Forum a huge success.