
GCP Cloud Digital Leader Lecture Slides

These lecture slides are provided for personal and non-commercial use only.
Please do not redistribute or upload these lecture slides elsewhere.
Good luck on your exam!
Updated May 9, 2025

What is the Google Cloud Digital Leader?
Cheat sheets, Practice Exams and Flash cards www.exampro.co/gcp-cdl

The Google Cloud Digital Leader is a foundational cloud certification that introduces you to the core concepts of cloud and Google Cloud.

The certification demonstrates that a person can define and understand:
• Digital Transformation, Cloud Concepts, Core services such as Compute, Storage, Databases and Networking, Security and Cost Management

This certification has no known course code, so we'll call it the GCP-CDL-03:
• GCP-CDL-00 (prior to 2022)
• GCP-CDL-01 (after 2022)
• GCP-CDL-02 (after 2023)
• GCP-CDL-03 (after 2024)

The GCP Roadmap

Foundational: Cloud Digital Leader
Associate: Cloud Engineer
Professional: Cloud Architect, Data Engineer, Collaboration Engineer, Cloud Developer, Cloud Security Engineer, Machine Learning Engineer, Cloud DevOps Engineer, Cloud Network Engineer, Cloud Database Engineer

How long to study to pass GCP-CDL?

Beginner: I have no experience with GCP or cloud. I have never written code or held a tech role.
• 20 hours of study
Experienced: I have practical experience working with GCP, equivalent experience in other CSPs (e.g. Azure, AWS), or a strong background in technology.
• 5 hours of study

12 hours (average study time)
• 50% lecture and labs
• 50% practice exams
Recommended to study 1-2 hours a day for 14 days.
What does it take to pass the exam?

1. Watch the video lectures and memorize key information
2. Do hands-on labs and follow along within your own account
3. Do paid online practice exams that simulate the real exam

Sign up and redeem your FREE practice exam (no credit card required):
https://www.exampro.co/gcp-cdl

It is very hard to pass the GCP-CDL without practice exams!

Where do you take the exam?

At an in-person test center or online from the convenience of your own home.

Google Cloud delivers exams via:
• Kryterion Online (online proctored exam system)
• the Kryterion network of test centers

A "proctor" is a supervisor, or person who monitors students during an examination.
Exam Guide – Content Outline

17% 1. Digital transformation with Google Cloud
16% 2. Exploring Data Transformation with Google Cloud
16% 3. Innovating with Google Cloud Artificial Intelligence
17% 4. Modernize Infrastructure and Applications with Google Cloud
17% 5. Trust and Security with Google Cloud
17% 6. Scaling with Google Cloud Operations

Each domain has its own weighting, which determines how many questions from that domain will show up.

Exam Guide – Grading

Passing grade is *700/1000
You need to get "around" 70% to pass
GCP uses Scaled Scoring
Exam Guide – Response Types

There are 50-60 questions
You can afford to get 18 questions wrong
There is no penalty for wrong answers

Format of Questions
• Multiple Choice
• Multiple Answer

Exam Guide – Duration

Duration of 1.5 hours
You get 1.5 mins per question
Exam Time is: 90 mins
Seat Time is: 120 mins

Seat time refers to the amount of time you should allocate for the exam. It includes:
• Time to review instructions
• Showing the online proctor your workspace
• Reading and accepting the NDA
• Completing the exam
• Providing feedback at the end

Exam Guide – Valid Until

Valid for 36 months
3 years before recertification

Real Talk About Certifications and Goals

Cloud certifications expect you to already have a foundation of technical skills in:
• Programming and Scripting, SQL
• IT Networking
• Linux and Windows servers
• Project Management
• Developer Tools
• Application Development Skills
• CompSci Algorithms
• and more…

You will need to add 250-500 hours of work to achieve full developer knowledge. Certifications assume you have obtained these fundamental skills elsewhere.

To fill technical gaps, leverage freeCodeCamp's large catalogue of general technical content.
To get skill-ready and job-ready, look at an ExamPro Supporter Subscription.

Google itself does not care about GCP certifications when hiring for its own technical roles.
Certifications serve as a structured way of learning with a goal post.
What is Cloud Computing?

cloud com·put·ing (noun)
the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer.

On-Premises
• You own the servers
• You hire the IT people
• You pay or rent the real estate
• You take all the risk

Cloud Providers
• Someone else owns the servers
• Someone else hires the IT people
• Someone else pays or rents the real estate
• You are responsible for configuring cloud services and code; someone else takes care of the rest

The Evolution of Cloud Hosting

Dedicated Server
One physical machine dedicated to a single business. Runs a single web app/site.
Very Expensive, High Maintenance, High Security*

Virtual Private Server
One physical machine dedicated to a single business. The physical machine is virtualized into sub-machines and runs multiple web apps/sites.

Shared Hosting
One physical machine shared by hundreds of businesses. Relies on most tenants under-utilizing their resources.
Very Cheap, Very Limited.

Cloud Hosting
Multiple physical machines act as one system. The system is abstracted into multiple cloud services.
Flexible, Scalable, Secure, Cost-Effective, Highly Configurable

What is Google?

An American multinational technology corporation headquartered in Mountain View, California.

Google was founded in 1998, and its claim to fame was the Google Search Engine.

The name of the Google search engine was a play on the word "googol".
A googol is a very large number: a 1 followed by 100 zeros (10^100).

GOOGLE is also said to be an initialism for:
• Global Organization of Oriented Group Language of Earth

What is a Cloud Service Provider?

A Cloud Service Provider (CSP) is a company that provides multiple cloud services, and those cloud services can be chained together to create cloud architectures.
What is Google Cloud Platform?

Google calls its cloud provider service offering Google Cloud Platform, commonly referred to as GCP.

The first product offered by GCP was App Engine, back in 2008.

What is Google Workspace?

Google Workspace is a bundled offering of SaaS products for team communication and collaboration for an organization, formerly known as G Suite.

• Gmail: a cloud-based email client
• Google Calendar: a cloud-based team calendar
• Google Drive: cloud storage for documents and files
• Google Docs: real-time collaborative word processor
• Google Sheets: real-time collaborative spreadsheets
• Google Slides: real-time collaborative presentations
• Google Meet: video conferencing and screensharing

Benefits of Cloud Computing

Cost-effective: You pay for what you consume, with no up-front cost. On-demand pricing or Pay-as-you-go (PAYG), with thousands of customers sharing the cost of the resources.
Global: Launch workloads anywhere in the world. Just choose a region.
Secure: The cloud provider takes care of physical security. Cloud services can be secure by default, or you have the ability to configure access down to a granular level.
Reliable: Data backup, disaster recovery, data replication, and fault tolerance.
Scalable: Increase or decrease resources and services based on demand.
Elastic: Automate scaling during spikes and drops in demand.
Current: The underlying hardware and managed software is patched, upgraded, and replaced by the cloud provider without interruption to you.

Common Cloud Services

A cloud provider can have hundreds of cloud services that are grouped into various types of services. The four most common types of cloud services for Infrastructure as a Service (IaaS) are:

Compute: Imagine having a virtual computer that can run applications, programs, and code.
Storage: Imagine having a virtual hard drive that can store files.
Networking: Imagine having a virtual network, being able to define internet connections or network isolations.
Databases: Imagine a virtual database for storing reporting data or a database for a general-purpose web application.

GCP has over 60 cloud services.
The term "Cloud Computing" can be used to refer to all categories, even though it has "compute" in the name.
Types of Cloud Computing

SaaS: Software as a Service (For Customers)
A product that is run and managed by the service provider.
Don't worry about how the service is maintained. It just works and remains available.

PaaS: Platform as a Service (For Developers)
Focus on the deployment and management of your apps.
Don't worry about provisioning, configuring, or understanding the hardware or OS.

IaaS: Infrastructure as a Service (For Admins)
The basic building blocks for cloud IT. Provides access to networking features, computers, and data storage space.
Don't worry about IT staff, data centers, and hardware.

Google's Shared Responsibility Model

The Shared Responsibility Model is a simple visualization that helps determine what the customer is responsible for and what Google is responsible for related to GCP.
• The customer is responsible for the data and the configuration of access controls that reside in GCP.
• The customer is responsible for the configuration of cloud services and for granting access to users via permissions.
• Google is generally responsible for the underlying infrastructure.

Responsibility in the cloud (Customer's Responsibility):
Content, Access Policies, Usage, Deployment, Web application security, Identity, Operations, Access and authentication, Network security, Guest OS, data & content.

Responsibility of the cloud (Google's Responsibility):
Audit logging, Network, Storage + encryption, Hardened Kernel + IPC, Boot, Hardware.

If you can configure or store it, then you (the customer) are responsible for it.
If you cannot configure it, then Google is responsible for it.

Shared Responsibility Model

Let us take a look at compute as a comparison example of the Shared Responsibility Model.

Bare Metal (Compute Engine)
• Customer: the Host OS configuration, the Hypervisor
• Google: the physical machine

Virtual Machine (Compute Engine)
• Customer: the Guest OS configuration
• Google: the Hypervisor, the physical machine

Containers (Google Kubernetes Engine)
• Customer: configuration of containers, container runtime, deployment of containers, storage of containers
• Google: the OS, the Hypervisor, the physical machine

Platform as a Service (App Engine)
• Customer: uploading your code, some configuration of the environment, deployment strategies, configuration of associated services
• Google: servers, OS, networking, storage, security

Software as a Service (Google Docs)
• Customer: contents of documents, management of files, configuration of sharing access controls
• Google: servers, OS, networking, storage, security

Function as a Service (Cloud Functions)
• Customer: uploading your code
• Google: deployment, container runtime, networking, storage, security, physical machine (basically everything)

Infrastructure as a Service (IaaS) compute options, from most to least customer control:
Bare Metal, Dedicated Host (Sole-tenant Node), Virtual Machines (Compute Engine), Containers (Google Kubernetes Engine), Functions (Cloud Functions).
As the stack moves down through Code, App, Container, Runtime, OS, Virtualization, and Hardware, responsibility shifts from the customer to GCP.
Shared Responsibility Model

Customer responsibility:
• Customer Data
• Configuration of Managed Services or Third-Party Software (Platforms)
• Identity and Access Management (IAM)
• Configuration of Virtual Infrastructure and Systems (Operating System, Network, Firewall)
• Security Configuration of Data (Client-Side Data Encryption, Server-Side Encryption, Networking Traffic Protection)

GCP responsibility:
• Software (Compute, Storage, Database, Networking)
• Hardware / Global Infrastructure (Regions, Zones / Fault Domains, Physical Security)

Shared Responsibility Model by service model

The stack of layers is: Applications, Data, Runtime, Middleware, OS, Virtualization, Servers, Storage, Networking.
• On-Premise: the customer is responsible for every layer.
• Infrastructure as a Service: the customer is responsible for Applications, Data, Runtime, Middleware, and the OS; the CSP is responsible for Virtualization, Servers, Storage, and Networking.
• Platform as a Service: the customer is responsible for Applications and Data; the CSP is responsible for the remaining layers.
• Software as a Service: the CSP is responsible for every layer.

Cloud Computing Deployment Models

Public Cloud
Everything is built on the Cloud Provider. Also known as Cloud-Native.

Private Cloud
Everything is built in the company's data centers. Also known as On-Premise. The cloud could be OpenStack.

Hybrid
Using both On-Premise and a Cloud Service Provider.

Cross-Cloud
Using multiple Cloud Providers. Also known as multi-cloud.
Anthos is GCP's offering for a control plane for compute across multiple CSPs and on-premise environments.
Cloud Computing Deployment Models

Cloud: Fully utilizing cloud computing.
• Startups, SaaS offerings, new projects and companies

Hybrid: Using both Cloud and On-Premises.
• Banks, FinTech, investment management, large professional service providers, legacy on-premise (e.g. insurance companies)

On-Premise: Deploying resources on-premises, using virtualization and resource management tools; sometimes called "private cloud".
• Public sector (e.g. government), super sensitive data (e.g. hospitals), large enterprises with heavy regulation

Total Cost of Ownership (TCO)

On-Premise (CAPEX)
• Software license fees
• Implementation, Configuration, Training
• Physical Security, Hardware
• IT Personnel, Maintenance

GCP (OPEX)
• Subscription fees
• Implementation, Configuration, Training
• The rest is GCP's responsibility

75% Savings

Capital vs Operational Expenditure

Capital Expenditure (CAPEX)
Spending money upfront on physical infrastructure, and deducting that expense from your tax bill over time.
• Server costs (computers)
• Storage costs (hard drives)
• Network costs (routers, cables, switches)
• Backup and archive costs
• Disaster recovery costs
• Datacenter costs (rent, cooling, physical security)
• Technical personnel
With capital expenses, you have to guess upfront what you plan to spend.

Operational Expenditure (OPEX)
The costs associated with an on-premises data center that have shifted to the service provider. The customer only has to be concerned with non-physical costs.
• Leasing software and customizing features
• Training employees in cloud services
• Paying for cloud support
• Billing based on cloud metrics, e.g. compute usage and storage usage
With operational expenses, you can try a product or service without investing in equipment.

Cloud Architecture Terminologies

Availability: your ability to ensure a service remains available. Highly Available (HA)
Scalability: your ability to grow rapidly or unimpeded
Elasticity: your ability to shrink and grow to meet demand
Fault Tolerance: your ability to prevent a failure
Disaster Recovery: your ability to recover from a failure. Highly Durable (DR)
High Availability

Your ability for your service to remain available by ensuring there is *no single point of failure and/or by ensuring a certain level of performance.

Cloud Load Balancing
A load balancer allows you to evenly distribute traffic to multiple servers in one or more data centers. If a data center or server becomes unavailable (unhealthy), the load balancer will route the traffic only to available data centers with healthy servers.

Running your workload across multiple Zones ensures that if one or two Zones become unavailable, your service/applications remain available.

High Scalability

Your ability to increase your capacity based on the increasing demand of traffic, memory, and computing power.

Vertical Scaling (Scaling Up): upgrade to a bigger server.
Horizontal Scaling (Scaling Out): add more servers of the same size.

High Elasticity

Your ability to automatically increase or decrease your capacity based on the current demand of traffic, memory, and computing power.

Managed instance groups (MIGs)
Automatically increase or decrease the number of instances in response to demand or a defined schedule.

Horizontal Scaling
• Scaling Out: add more servers of the same size
• Scaling In: remove servers of the same size

Vertical Scaling is generally hard for traditional architectures, so you'll usually only see horizontal scaling described with Elasticity.

Highly Fault Tolerant

Your ability for your service to ensure there is no single point of failure, preventing the chance of failure.

Failover is when you have a plan to shift traffic to a redundant system in case the primary system fails. You can use Cloud DNS, a DNS service that can detect a failing primary system and fail over to a standby secondary system.

A common example is having a copy (secondary) of your database where all ongoing changes are synced. The secondary system is not in use until a failover occurs and it becomes the primary database.
High Durability

Your ability to recover from a disaster and to prevent the loss of data.
A solution to recover from disasters is known as Disaster Recovery (DR).

• Do you have a backup?
• How fast can you restore that backup?
• Does your backup still work?
• How do you ensure current live data is not corrupt?

The Evolution of Computing

*Dedicated
• A physical server wholly utilized by a single customer.
• You have to guess your capacity, and you'll overpay for an underutilized server.
• You can't vertically scale; you need a manual migration.
• Replacing a server is very difficult.
• You are limited by your Host Operating System.
• Multiple apps can result in conflicts in resource sharing.
• You have a *guarantee of security, privacy, and full utility of the underlying resources.

The Evolution of Computing

VMs
• You can run multiple Virtual Machines on one machine.
• The hypervisor is the software layer that lets you run the VMs.
• A physical server is shared by multiple customers; you pay for a fraction of the server.
• You'll overpay for an underutilized Virtual Machine.
• You are limited by your Guest Operating System.
• Multiple apps on a single Virtual Machine can result in conflicts in resource sharing.
• Easy to export or import images for migration.
• Easy to vertically or horizontally scale.

Containers
• Virtual Machines (VMs) can host container runtimes, such as Docker, which manage and run containers.
• The Docker daemon is the software layer that manages containers.
• Containers maximize resource utilization and are more cost-effective.
• Containers share the host OS kernel, making them more efficient than running multiple VMs with separate OS instances.
• Containers provide process and resource isolation, allowing multiple applications to run side by side without conflicts.
The Evolution of Computing

Functions
• Managed VMs running managed containers, known as Serverless Compute.
• You upload a piece of code and choose the amount of memory and the duration.
• You are only responsible for your code and data, nothing else.
• Very cost-effective: you only pay for the time your code is running, and VMs only run when there is code to be executed.
• Cold starts are a side-effect of this setup.

Global Infrastructure

What is global infrastructure?
Global infrastructure refers to the global presence of datacenters, networking, and cloud resources available to the customer.

• 40 Regions
• 121 Zones
• 187 Network Edge Locations
• 200+ Countries

(Photos: exterior and interior of a Google Cloud datacenter)

Global Infrastructure – Regions

Regions are independent geographic areas that consist of zones. GCP has 40 Regions.
When you are launching a new cloud resource, such as a VM instance, you will need to choose the region.

Americas: Oregon (us-west1), Los Angeles (us-west2), Salt Lake City (us-west3), Las Vegas (us-west4), Iowa (us-central1), South Carolina (us-east1), N. Virginia (us-east4), Montréal (northamerica-northeast1), São Paulo (southamerica-east1)

Europe: London (europe-west2), Belgium (europe-west1), Netherlands (europe-west4), Zurich (europe-west6), Frankfurt (europe-west3), Finland (europe-north1), Warsaw (europe-central2)

Asia Pacific: Mumbai (asia-south1), Singapore (asia-southeast1), Jakarta (asia-southeast2), Hong Kong (asia-east2), Taiwan (asia-east1), Tokyo (asia-northeast1), Osaka (asia-northeast2), Seoul (asia-northeast3), Sydney (australia-southeast1)
Global Infrastructure – Edge Network

Edge networking is the practice of having compute and data storage resources as close as possible to the end user in order to deliver the lowest latency and to save bandwidth.

A Point of Presence (PoP) is an intermediate location between a GCP Region and the end user. This location could be a third-party datacenter or a collection of hardware.

• Edge PoP: a location where a user can quickly enter (ingress) the GCP network for accelerated access to cloud resources
• CDN PoP: a location to serve (egress) cached websites, files, and assets so they load very fast for the end user
• Cloud Media Edge: a location specialized for the delivery of media such as video content

Global Infrastructure – Zones

A Zone is a physical location made up of one or more datacenters.
A datacenter is a secured building that contains hundreds of thousands of computers.

A region will *generally contain 3 Zones.

Datacenters within a region are isolated from each other (different buildings), but they are close enough to provide low latency.

It is common practice to run workloads in at least 3 Zones to ensure services remain available in case one or two datacenters fail (High Availability).

Global Infrastructure – Zones

A Zone is a deployment area for Google Cloud resources within a region.
Once you have chosen your Region, you will proceed to choose your Zone or Zones when launching cloud resources.
• Zones should be considered a single failure domain within a region.
• Deploy redundant resources in multiple zones (multi-zone) for fault tolerance and high availability.

Global Infrastructure — Resource Scoping

Products/services can be scoped as follows:

• Zonal resource: the resource resides in a single zone within a single region
• Regional resource: the resource resides in multiple zones within a single region
• Multi-regional resource: the resource resides across multiple regions
• Global service: resources reside globally, and regions and zones are abstracted away
• Internal services: foundational services used by many other services. You don't interact with these services directly; they are managed by Google. Examples: Spanner, Colossus, Borg, and Chubby.
Global Infrastructure – Data Residency

What is Data Residency?
The physical or geographic location where an organization's data or cloud resources reside.

What are Compliance Boundaries?
A regulatory compliance requirement (legal requirement) by a government or organization that describes where data and cloud resources are allowed to reside.

For workloads that need to meet compliance boundaries by strictly defining the data residency of data and cloud resources in GCP, you can use Assured Workloads, a feature that allows you to apply various security controls to an environment:
• Data Residency
• Personnel data access controls based on attributes
• Personnel support case ownership controls based on attributes
• Encryption

You need to update an Organization Policy called "Resource Location Restriction" and choose the allowed region or multi-regions.

Global Infrastructure – Cloud Interconnect

Cloud Interconnect provides direct physical connections between your on-premises network and Google's network.
Cloud Interconnect enables you to transfer large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public internet.

Cloud Interconnect has two offerings: Dedicated and Partner.
• Dedicated: a direct physical connection between the on-premises network and Google's network through a co-location facility; between 10 and 200 Gbps
• Partner: a direct physical connection between the on-premises network and Google's network through a trusted third party; between 50 Mbps and 10 Gbps

A co-location facility (aka carrier hotel) is a data center where equipment, space, and bandwidth are available for rental to retail customers.

Global Infrastructure – Google Cloud for Government

What is the Public Sector?
The public sector includes public goods and governmental services such as:
• military
• law enforcement
• infrastructure
• public transit
• public education
• health care
• the government itself

Google Cloud can be utilized by the public sector or by organizations developing cloud workloads for the public sector.
Google Cloud achieves this by meeting regulatory compliance programs along with specific governance and security controls.

Federal Risk and Authorization Management Program (FedRAMP)
A US government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.

What is GovCloud?
A Cloud Service Provider (CSP) will generally offer an isolated region to run FedRAMP workloads.
A GovCloud offering in practice can result in degraded service offerings, lower service availability, and higher operational cost.

Google Cloud has an alternate offering to GovCloud where FedRAMP workloads are authorized in GCP's usual regional datacenters. This scheme mitigates the disadvantages of a GovCloud offering.
GCP Regions will be authorized for either a High or Moderate baseline.
Basic Network Infrastructure Terminology

IP Address & Domain Name System (DNS)
• IP Address: a unique numerical label identifying a device on a network.
• Domain Name: a user-friendly name mapped to an IP address (e.g., google.com).
• DNS: the "phone book" of the web; translates domain names to IP addresses.

Internet Service Providers (ISPs)
Companies providing internet access to businesses and individuals, e.g., Verizon, Softbank.

Global Network Infrastructure
• Fiber Optics: transmits data as light pulses over long distances.
• Subsea Cables: carry 99% of international traffic; critical for global connectivity.

Google's Global Reach
• Data Centers & Edge Locations: high-performance centers for apps (e.g., Search, YouTube).
• Network Edge: Google's fiber-optic network connects data centers worldwide.

Network Performance
• Latency: speed of data transfer, critical for user experience.
• Bandwidth: capacity of data transfer across the network.

Global Infrastructure - Latency

What is Latency?
Latency is the time delay between two physical systems.

What is Lag?
Lag is the noticeable delay between the actions of input and the reactions of the server sent back to the client.

Inter-Regional Latency: the latency between regions, typically in the triple digits of milliseconds (e.g., ~500 ms between us-east1 and us-west1).
Inter-Zonal Latency: the latency between zones within a single region, typically in the double digits of milliseconds or less (e.g., ~10 ms between us-east1-a and us-east1-b).
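To make the DNS "phone book" idea concrete, here is a minimal sketch in Python using only the standard library; the hostname is just an example and the returned address will vary.

# Resolve a domain name to an IP address with the operating system's
# resolver - the same DNS lookup a browser performs before connecting.
import socket

ip = socket.gethostbyname("google.com")   # example hostname; result varies
print(ip)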

Innovation Waves

Kondratiev waves (also known as Innovation Waves) are hypothesized cycle-like phenomena in the global world economy.
The phenomenon is closely connected with technology life cycles.
A common pattern of a wave is a change in supply and demand. Each wave irreversibly changes society on a global scale.
The latest wave is Cloud Technology.

Burning Platform

Burning platform is a term used when a company abandons old technology for new technology with uncertainty of success, often motivated by fear that the organization's future survival hinges on its digital transformation.
Evolution of Computing Power

What is Computing Power?
The throughput at which a computer can complete a computational task.

General Computing: Xeon CPU Processor. Google's cloud service offering: Compute Engine.
Tensor Computing: Tensor Processing Unit 3.0 (TPUs), 50x faster than traditional CPUs. Google's cloud service offering: Cloud TPU.
Quantum Computing: Google Foxtail (2016), Google Bristlecone (2017), Google Sycamore (2018); 100 million times faster. Google's cloud service offering: Google Quantum AI.

Digital Transformation

Digital Transformation is the adoption of digital technology to transform services or businesses through:
• replacing non-digital or manual processes with digital processes (going paperless)
• replacing older digital technology with newer digital technology (adopting cloud technology)

Google's Digital Transformation Concept: Google's 7 Solution Pillars
1. Infrastructure modernization
2. Business applications platform portfolio
3. Application modernization
4. Database and storage solutions
5. Smart Analytics
6. Artificial Intelligence
7. Security

Google Cloud Solution Pillars

1. Infrastructure modernization
Replacing legacy hardware and software systems with cloud solutions. Allows organizations to adopt hybrid architectures and have more infrastructure mobility, choosing a mix of the best cloud service provider offerings for their organization's use case.
• Anthos: manage compute from both on-premise and public cloud in a single unified interface

2. Business applications platform portfolio
The backbone of cloud service providers (CSPs) is built on top of robust, well-documented APIs standardized across all offered cloud services. Organizations can focus on the configuration and interconnection of various systems instead of having to build their own systems.
• Cloud SDK, Cloud API, Cloud CLI, Google Cloud Documentation

3. Application modernization
Building web applications on top of cloud services allows organizations to globally deliver and rapidly iterate faster than ever before. CSPs offer automated deployment pipelines, AI-powered code reviews, easy staging and testing of new features, the ability to test in production, and the ability to roll back changes. Apps are more durable and can remain available even when facing catastrophic regional failure.
• App Engine: migrate your web app over to App Engine; you just upload your code and it mostly does the rest

4. Database and storage solutions
Most companies can tolerate losing application code; you can always rewrite it. Losing data is not something you can recover from. Cloud service providers have guaranteed SLAs for data durability, as well as the ability to easily migrate and secure your data.
• Cloud Storage: store files and documents as objects, with guaranteed availability SLAs

5. Smart Analytics
When you store data on cloud service providers, you can tap into BigData and BI cloud offerings assisted by AI to help you analyze your data.
• Looker: a data exploration and discovery business intelligence platform acquired by Google and now part of GCP

6. Artificial Intelligence
AI, deep learning, and machine learning are specialized domains that traditionally required scarce and expensive subject-matter experts. Cloud is commoditizing and simplifying AI knowledge while driving costs lower for adoption.
• Vertex AI: a unified platform for AI, ML, DL, and AutoML
• TensorFlow: a deep learning framework

7. Security
Cloud services by default have strong mechanisms built in for Security, Governance, and Compliance. CSPs are continually developing new and innovative security offerings, not just at the service-per-service level, but to analyze, recommend, and remediate at the project and organization level. You can easily and quickly audit and apply security controls to become compliant in a fraction of the time compared to an on-premise solution.
• Identity and Access Management (IAM): role-based access controls and user management
• BeyondCorp: zero-trust model framework
• Security Command Center: centralized visibility and control
Business Transformation

Business transformation is the process of using cloud technologies to change how a business operates and delivers value to its customers through:
• transforming workflows with cloud tools to automate, increase efficiency, and reduce costs
• leveraging AI and data to innovate, improve customer experience, and unlock new revenue streams

Google's Business Transformation Concept: Google's 5 Business Transformation Benefits
Intelligence (Data Cloud), Freedom (Open Infrastructure), Collaboration, Trust (Trusted Cloud), Sustainability

Business Transformation Benefits

1. Intelligence (Data Cloud)
Replaces traditional systems with cloud-driven data solutions that enable smarter decisions and unlock AI capabilities.
• BigQuery: a serverless, scalable data warehouse for fast SQL queries and analytics
• Dataproc: managed Spark and Hadoop for large-scale data processing

2. Freedom (Open Infrastructure)
Shifts from legacy hardware to hybrid and multi-cloud architectures, offering flexibility in infrastructure choice.
• Anthos: manage compute from both on-premise and public cloud in a single unified interface

3. Collaboration
Transforms collaboration with cloud tools, enabling seamless, secure communication for remote and hybrid teams.
• Google Workspace: includes Gmail, Docs, Drive, and Meet for real-time collaboration

4. Trust (Trusted Cloud)
Modernizes security with cloud-based protections, offering better visibility and control over data.
• Cloud Armor: protects against DDoS attacks with network security controls

5. Sustainability
Moves from energy-intensive hardware to Google's carbon-free cloud infrastructure, reducing environmental impact.
• Carbon Footprint: monitors carbon emissions associated with Google Cloud usage
Transformation Cloud

Transformation Cloud is a set of cloud tools and services that help businesses modernize apps and infrastructure, improve collaboration, manage data better, and strengthen security and compliance.

App and infrastructure modernization
Google Cloud helps businesses shift from legacy systems to scalable, flexible cloud infrastructure, with tools like Anthos and GKE supporting hybrid and multi-cloud setups.

Data access & efficiency (data democratization)
Google Cloud makes data more accessible and usable through services like BigQuery and Looker, allowing users across the organization to make data-driven decisions.

People connections
Google Cloud boosts team collaboration with Google Workspace, offering cloud-based tools for seamless remote and hybrid work.

Trusted transactions
Google Cloud strengthens security with tools like Cloud Armor and BeyondCorp, safeguarding data and transactions while building user trust.

Leveraging Data for Business Value

Map and Categorize Your Data
Customer info and behavior (e.g., purchases, returns), internal metrics (e.g., sales, staffing), and external market trends and benchmarks.

Combine Datasets for Insights
Merge different sources of data for better insights, e.g., linking sales data with market trends to improve inventory.

Data-Driven Decisions
Use data to find patterns and make informed choices, e.g., offering personalized products based on customer demographics.

Create New Value with Data
Apply data insights to enhance customer experiences and improve efficiency in staffing, inventory, and sales decisions.
Data Value Chain Concepts

The data value chain is the series of processes where data is collected, stored, processed, analyzed, secured, and ultimately used to generate business insights or value.

• Data Collection: gathering data from sources like IoT devices, apps, or customer interactions
• Data Storage: securely storing data in scalable cloud solutions like Google Cloud Storage or BigQuery
• Data Processing: converting raw data into useful formats using tools like Google Dataflow or Apache Beam
• Data Analysis: extracting insights using tools like BigQuery ML or Looker
• Data Security and Governance: ensuring data security with encryption and access control, using IAM and Cloud Security tools
• Data Sharing: distributing insights via APIs, dashboards, or Google Sheets
• Data Monetization: turning data into business value by selling insights or optimizing decisions

Google Cloud Console

The Google Cloud Console is a web-based, unified console that provides an alternative to command-line tools.
Build, manage, and monitor everything from simple web apps to complex cloud deployments.

Cloud SDK

A Software Development Kit (SDK) is a collection of software development tools in one installable package.
You can use the Cloud SDK to programmatically create, modify, delete, or interact with Google Cloud resources.

Google Cloud client libraries are offered in various programming languages:
• Java
• Python
• Node.js
• Ruby
• Go
• .NET
• PHP

Cloud CLI

A Command Line Interface (CLI) processes commands to a computer program in the form of lines of text.
Operating systems (OS) implement a command-line interface in a shell or terminal.
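As a hedged illustration of what "programmatically interact with Google Cloud resources" looks like, the sketch below uses the Python client library to list Cloud Storage buckets. It assumes the google-cloud-storage package is installed, that Application Default Credentials are configured (for example via gcloud auth application-default login), and that the project ID is a placeholder.

# List the Cloud Storage buckets in a project with the Python client library.
# Assumes: pip install google-cloud-storage, Application Default Credentials,
# and a placeholder project ID.
from google.cloud import storage

client = storage.Client(project="my-project-id")   # placeholder project ID
for bucket in client.list_buckets():
    print(bucket.name)

The gcloud CLI exposes roughly the same operation from a shell, e.g. gcloud storage buckets list.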
Cloud Shell

Cloud Shell is a free online environment, with:
• command-line access for managing your infrastructure
• an online code editor for cloud development

Google Cloud - Projects

A Project in Google Cloud is a logical grouping of resources. A cloud resource must belong to a project.

A project is made up of:
• settings
• permissions
• other metadata

A project can't access another project's resources unless you use Shared VPC or VPC Network Peering.
Resources within a single project can work together easily, for example by communicating through an internal network, subject to the regions-and-zones rules.

Each Google Cloud project has the following:
• A project name, which you provide.
• A project ID, which you can provide or Google Cloud can provide for you.
• A project number, which Google Cloud provides.

As you work with Google Cloud, you'll use these identifiers in certain command lines and API calls.

Google Cloud - Projects

• Each project ID is unique across Google Cloud.
• Once you have created a project, you can delete the project, but its ID can never be used again.
• When billing is enabled, each project is associated with one billing account; multiple projects can have their resource usage billed to the same account.
• A project serves as a namespace: every resource within each project must have a unique name, but you can usually reuse resource names if they are in separate projects.

Google Cloud — Folders

Folders allow you to logically group multiple projects that share common IAM permissions.
Folders are commonly used to isolate projects for different departments or for different environments.
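A small, assumed example of where these identifiers show up in practice: the Python client libraries take the project ID so that any resource you reference is scoped to that project (the IDs and names below are placeholders).

# The project ID scopes client calls: resources referenced through this
# client belong to that project. IDs and names are placeholders.
from google.cloud import storage

client = storage.Client(project="example-project-123")
# Bucket names must be globally unique, so they are often prefixed
# with the project ID.
bucket = client.bucket("example-project-123-assets")
print(bucket.name)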
Google Cloud Adoption Framework

The Google Cloud Adoption Framework (GCAF) is a whitepaper that:
• determines an organization's readiness to adopt Google Cloud
• outlines steps to fill in the knowledge gaps
• helps develop new competencies

What is a whitepaper?
A report or guide that informs readers concisely about a complex issue. It is intended to help readers understand an issue, solve a problem, or make a decision. Whitepapers are generally PDFs but can be in HTML format as well.

The GCAF is composed of:
• 4 themes: Learn, Lead, Scale, Secure
• 3 maturity phases: Tactical, Strategic, Transformational
• Cloud Maturity Scale: a matrix of themes and phases
• Epics: workstreams to scope and structure cloud adoption
• Programs: logical groupings of epics

GCAF — Themes

Learn: the quality and scale of the learning programs you have in place to upskill your technical teams, and your ability to augment your IT staff with experienced partners.
• Who is engaged? How widespread is that engagement? How concerted is the effort? How effective are the results?

Lead: the extent to which IT teams are supported by a mandate from leadership to migrate to the cloud, and the degree to which the teams themselves are cross-functional, collaborative, and self-motivated.
• How are the teams structured? Have they got executive sponsorship? How are cloud projects budgeted, governed, and assessed?

Scale: the extent to which you use cloud-native services that reduce operational overhead and automate manual processes and policies.
• How are cloud-based services provisioned? How is capacity for workloads allocated? How are application updates managed?

Secure: the capability to protect your services from unauthorized and inappropriate access with a multilayered, identity-centric security model. Dependent also on the advanced maturity of the other three themes.
• What controls are in place? What technologies are used? What strategies govern the whole?

GCAF — Phases

Tactical (short term): Individual workloads are in place, but there is no coherent plan. The focus is on reducing the cost of discrete systems and getting to the cloud with minimal disruption. The wins are quick, but there is no provision for scale.

Strategic (mid term): A broader vision governs individual workloads, which are designed and developed with an eye to future needs and scale. The organization has begun to embrace change, and people and processes are now involved in the adoption strategy. IT teams are both efficient and effective, increasing the value of harnessing the cloud for your business operations.

Transformational (long term): Cloud operations are functioning smoothly, and the focus is on integrating the data and insights gained from working in the cloud. Existing data is transparently shared; new data is collected and analyzed. Predictive and prescriptive analytics via Machine Learning (ML) are used. People and processes are being transformed, which further supports technological changes. IT is no longer a cost center, but has instead become a partner to the business.

GCAF — Cloud Maturity Scale

The Cloud Maturity Scale is a matrix made up of themes and phases. It helps your organization pinpoint its exact adoption position.

Tactical (short-term)
• Learn: self-taught, 3rd-party reliance
• Lead: teams by function, heroic project manager
• Scale: change is slow and risky, ops heavy
• Secure: fear of the public internet, trust in the private network

Strategic (mid-term)
• Learn: organized training, 3rd-party assisted
• Lead: new cross-functional cloud team
• Scale: templates ensure good governance without manual review
• Secure: central identity, hybrid network

Transformational (long-term)
• Learn: peer learning and sharing, 3rd-party staff augmentation
• Lead: cross-functional feature teams, greater autonomy
• Scale: all change is constant, low risk and quickly fixed
• Secure: trust only the right people, devices, and services
GCAF — Epics

When you have determined where your organization is in the adoption process using the Cloud Maturity Scale, you then need to define Epics.

Epics are workstreams to scope and structure cloud adoption:
• epics are defined so that they do not overlap
• they are aligned to manageable groups of stakeholders
• they can be further broken down into individual user stories

GCAF — Programs

A Program is a logical grouping of epics that correlate to themes, allowing you to focus specific adoption efforts.

• People programs (Learn, Lead): People Operations, Communication, Training Program, Change Management, Sponsorship, External Experience, Team-work, Upskilling
• Technology and Process programs (Scale, Secure): Identity & Access, Cost Control, Data Management, Networking, Architecture, Cloud Operating Model, Secure Account Setup, Incident Management, Resource Management, Infrastructure as Code, CI/CD, Instrumentation

If you are limited in time and resources, focus on the epics in the coloured segments, since these align with Learn, Lead, Scale, and Secure.

GCAF — TAM

A Technical Account Manager (TAM) is a human resource assigned to work with your organization when paying for Google Cloud's Premium Support.

A TAM can assist with the Google Cloud Adoption Framework by:
• performing a high-level assessment of your organization's cloud maturity
• telling you how to prioritize your:
  • training and change management programs
  • partner relationships
  • cloud operating model
  • secure account configuration

Cloud Maturity Assessment

A Cloud Maturity Assessment is a guided form to assess your organization against the Google Cloud Adoption Framework along the four themes: Learn, Lead, Scale, and Secure.

It is a simple multiple-choice form. You will get an email with your maturity phase, with additional information on how you compare to the average.
Compute

Compute Engine (Virtual Machines): Create and deploy scalable, high-performance VMs.
App Engine (Platform as a Service): Build and deploy apps on a fully managed, highly scalable platform without having to manage the underlying infrastructure.
Google Kubernetes Engine (GKE): Reliably, efficiently, and securely deploy and scale containerized applications on Kubernetes.
Cloud Functions (Function as a Service): Create serverless, single-purpose functions that respond to events.
Google Cloud VMware Engine: Migrate and run your VMware workloads natively on Google Cloud.
Migrate to Virtual Machines: Migrate servers and VMs from on-premises or another cloud to Compute Engine (formerly Migrate for Compute Engine).
Bare Metal Solution: Hardware to run specialized workloads with low latency on Google Cloud.
Cloud GPUs: Add GPUs to your workloads for machine learning, scientific computing, and 3D visualization.
Sole-tenant nodes (Dedicated Virtual Machines): Help meet compliance, licensing, and management needs by keeping your instances physically separated with dedicated hardware.
Preemptible VMs: Cost-effective, short-lived instances suitable for batch jobs and fault-tolerant workloads.
Shielded VMs: VMs with enhanced security features to protect against threats and vulnerabilities.

App Engine

App Engine is a Platform as a Service (PaaS) for your application.
Quickly deploy and scale web applications without worrying about the underlying infrastructure. Think of it like the Heroku of GCP.

Use your favourite programming language
• Node.js, Java, Ruby, C#, Go, Python, or PHP
• Bring-Your-Own-Language-Runtime (BYOLR)
• custom Docker container

Powerful application diagnostics
• Cloud Monitoring and Cloud Logging to monitor health and performance
• Cloud Debugger and Error Reporting to diagnose and fix bugs quickly

Application versioning: easily create development, test, staging, and production environments.
Traffic splitting: route incoming requests to different app versions, A/B test, and do incremental feature rollouts.
Application security
• define access rules with the App Engine firewall
• leverage managed SSL/TLS certificates by default

A minimal deployable app is sketched below.
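As an illustration, here is a minimal Python web app of the kind App Engine's standard environment runs. This is an assumed example, not from the slides: it uses Flask, and a real deployment would also need an app.yaml declaring a runtime.

# main.py - a minimal Flask app. App Engine's standard Python runtime serves
# a WSGI object like `app`; deployment also needs an app.yaml that declares
# a runtime (e.g. "runtime: python312"), which is assumed here.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from App Engine"

if __name__ == "__main__":
    # Local testing only; in production App Engine runs the app itself.
    app.run(host="127.0.0.1", port=8080, debug=True)

Deploying is then roughly a single command from the project directory: gcloud app deploy.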

App Engine — Environments

App Engine has two types of environments: Standard and Flexible.
You can simultaneously use both environments for your application.
App Engine is well suited to applications that are designed using a microservice architecture.

Standard (serverless compute)
• starts in seconds
• runs in a sandbox
• designed for rapid scaling (sudden traffic spikes)
• supports specific language versions, not custom runtimes
• can scale to zero instances (scale to zero)
• pricing based on instance hours
• cannot SSH in to debug
• no background processes

Flexible (fully managed containers)
• starts in minutes
• runs within Docker containers on Compute Engine (VMs)
• designed for predictable and consistent traffic
• supports generally any language version, or run a custom runtime
• must have at least one instance running
• pricing based on vCPUs, memory, and disks
• can SSH in to debug
• can have background processes

Containers

Google Kubernetes Engine (GKE): Reliably, efficiently, and securely deploy and scale containerized applications on Kubernetes.
Cloud Build: Continuously build, test, and deploy containers using the Google Cloud infrastructure.
Artifact Registry: Manage and secure your container images, language packages, and other artifacts.
Container-Optimized OS: You can deploy Docker containers to any Compute Engine VM by enabling container mode.
Cloud Run: Run stateless containers on a fully managed environment or on Anthos.
Deep Learning Containers: Take advantage of containers preconfigured with data science frameworks, libraries, and tools.
Kubernetes applications on Google Cloud Marketplace: Deploy prebuilt containerized apps and efficiently run batch jobs using Kubernetes.
Kubernetes

Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containers.
Originally created by Google, it is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes is commonly called K8s
• the 8 represents the remaining letters of "ubernete"

The advantage of Kubernetes over Docker alone is the ability to run containers distributed across multiple VMs.

A unique component of Kubernetes is the Pod. A pod is a group of one or more containers with shared storage, network resources, and other shared settings (see the sketch after this section).

Kubernetes is ideal for microservice architectures where a company has tens to hundreds of services they need to manage.

Databases

BigQuery (Serverless Data Warehouse): A serverless, highly scalable data warehouse service that allows you to store and analyze vast amounts of structured data using SQL queries. Ideal for data analytics and reporting. Built-in ML!
Cloud Bigtable (NoSQL wide-column / key-value store): A fully managed NoSQL database that excels at handling large-scale analytical and operational workloads. It uses a wide-column data model and is suitable for high-throughput applications.
Firestore (NoSQL document database): A NoSQL document database designed for mobile and web applications. It stores data in semi-structured documents within collections, making it easy to manage and query data.
Firebase Realtime Database: Extends Firestore's capabilities by providing real-time data synchronization. It is perfect for applications that require live data updates and collaboration features.
Cloud Spanner (Fully Managed Relational Database): A fully managed, globally distributed relational database service. It is designed for high availability and scalability while supporting standard SQL for structured data.
Cloud SQL (Relational Database Service): Offers managed relational database services, including MySQL, PostgreSQL, and SQL Server. It simplifies database management tasks, making it easy to set up, manage, and scale your databases.
Memorystore (In-Memory): A managed in-memory data store service that provides exceptionally high-performance caching. It is designed to accelerate data retrieval and reduce latency.
Database Migration Service (DMS): A serverless service that facilitates the seamless migration of databases to Cloud SQL with minimal downtime. It simplifies the migration process for businesses.
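To make the Pod idea concrete, here is a hedged sketch using the official kubernetes Python client to define and create a single-container Pod. It assumes a reachable cluster (for example a GKE cluster whose credentials are already in your kubeconfig); the names and image are placeholders.

# Define and create a minimal single-container Pod with the official
# `kubernetes` Python client. Assumes `pip install kubernetes` and a cluster
# already configured in your local kubeconfig; names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()            # read cluster credentials from kubeconfig
api = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hello-pod"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="web", image="nginx:1.25")]
    ),
)
api.create_namespaced_pod(namespace="default", body=pod)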

What is a Database?

A database is a data store that holds semi-structured and structured data.
A database is a more complex data store because it requires using formal design and modeling techniques.

Databases can generally be categorized as either:
• Relational databases
  • Structured data that strongly represents tabular data (tables, rows, and columns)
  • Row-oriented or column-oriented
• Non-relational databases
  • Semi-structured data that may or may not distantly resemble tabular data

Databases have a rich set of functionality:
• a specialized language to query (retrieve) data
• specialized modeling strategies to optimize retrieval for different use cases
• more fine-tuned control over the transformation of the data into useful data structures or reports

Normally, "database" implies a relational, row-oriented data store.

What is a Data Warehouse?

A relational data store designed for analytic workloads, which is generally a column-oriented data store.
Companies have terabytes and millions of rows of data, and they need a fast way to be able to produce analytics reports.

Data warehouses generally perform aggregation.
• Aggregation is grouping data, e.g. finding a total or an average.
• Data warehouses are optimized around columns, since they need to quickly aggregate column data.

Data warehouses are generally designed to be HOT.
• Hot means they can return queries very fast even though they hold vast amounts of data.

Data warehouses are infrequently accessed, meaning they aren't intended for real-time reporting, but might be queried once or twice a day or once a week to generate business and user reports.

A data warehouse needs to consume data from relational databases on a regular basis.
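A minimal sketch of the kind of columnar aggregation a data warehouse is built for, using the BigQuery Python client against one of Google's public sample tables; it assumes the google-cloud-bigquery package, Application Default Credentials, and a placeholder billing project.

# Aggregate a public BigQuery table: total occurrences per name, a typical
# column-oriented GROUP BY workload. Assumes pip install google-cloud-bigquery.
from google.cloud import bigquery

client = bigquery.Client(project="my-project-id")   # placeholder project ID
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row.name, row.total)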
What is a Key/Value Store?

A key-value database is a type of non-relational database (NoSQL) that uses a simple key-value method to store data.
A key/value store holds a unique key alongside a value.

Key/value stores are dumb and fast. They generally lack features like:
• Relationships
• Indexes
• Aggregation

Key            Value
Data           1010101000101011001010010101001
Worf           0110101100010101010101011100010
Ro Laren       0010101001010110010101010101010

A simple key/value store will interpret this data as resembling a dictionary (aka associative array or hash):

Key            Value
Data           {species: android, rank: 'lt commander'}
Worf           {species: klingon, rank: 'lt commander'}
Ro Laren       {species: bajoran, affiliation: 'maquis'}

A key/value store can resemble tabular data, but it does not have to have consistent columns per row (hence it is schemaless):

Key (Name)     Species     Rank            Affiliation
Data           android     Lt commander
Worf           klingon     Lt commander
Ro Laren       bajoran                     maquis

Due to their simple design, key/value stores can scale well beyond a relational database.

What is a Document Store?

A document store is a NoSQL database that stores documents as its primary data structure.
A document could be XML, but is more commonly JSON or JSON-like.
Document stores are a sub-class of key/value stores.

The components of a document store compared to a relational database:

Database (RDBMS)     Database (Document)
Tables               Collections
Rows                 Documents
Columns              Fields
Indexes              Indexes
Joins                Embedding and Linking
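To tie the collection/document/field mapping to real calls, here is a hedged sketch with the Firestore Python client; it reuses the crew example above, and the project ID and collection name are placeholders.

# Firestore maps relational concepts to collections / documents / fields.
# Assumes `pip install google-cloud-firestore` and Application Default
# Credentials; the project ID and collection name are placeholders.
from google.cloud import firestore

db = firestore.Client(project="my-project-id")

# Documents are schemaless: each one can carry different fields.
db.collection("crew").document("worf").set(
    {"species": "klingon", "rank": "lt commander"}
)
db.collection("crew").document("ro_laren").set(
    {"species": "bajoran", "affiliation": "maquis"}
)

# Read the documents back.
for doc in db.collection("crew").stream():
    print(doc.id, doc.to_dict())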

Serverless Services

What is Serverless?
Serverless architectures are fully managed services that automatically scale, are highly available, durable, and secure by default. They abstract away the underlying infrastructure and are billed based on the execution of your business task: Pay-for-Value (you don't pay for idle servers).
Serverless can scale to zero, meaning when not in use the services cost nothing.

Cloud Functions (Function as a Service): Choose a runtime and upload single-function code. Intended to be short-lived.
Cloud Run (serverless containers): Run stateless containers on a fully managed environment or on Anthos.
App Engine (Platform as a Service): Build and deploy apps built using traditional web frameworks. All the underlying infrastructure is taken care of for you.
Eventarc (serverless event bus): Build event-driven solutions by asynchronously delivering events from Google services, SaaS, and your own apps. Used for application integration.
Knative (serverless Kubernetes containers): Deploy and manage serverless, cloud-native applications on Kubernetes.
Workflows (serverless state machine): Orchestrate and automate Google Cloud and HTTP-based API services with serverless workflows.
BigQuery (serverless data warehouse): Understand your data using a fully managed, highly scalable data warehouse with built-in ML.
Cloud Storage (serverless storage): Store objects with global edge caching.

Storage

Persistent Disk (block storage): Add block storage to VM instances.
Cloud Storage (object storage): Store objects with global edge caching.
Cloud Storage for Firebase: Add Google-scale object storage and serving to your apps.
Filestore (file-system storage): Create fully managed, high-performance NFS file servers on Google Cloud.
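As a hedged sketch of the Cloud Functions model (upload a single-purpose piece of code and pay only while it runs), here is a minimal Python HTTP function in the functions-framework style the Python runtime expects; the function name is a placeholder and the deploy command in the comment is only approximate.

# A single-purpose HTTP function. Assumes `pip install functions-framework`
# for local testing; deployment is roughly:
#   gcloud functions deploy hello_http --runtime=python312 --trigger-http
import functions_framework

@functions_framework.http
def hello_http(request):
    # `request` is a Flask Request object supplied by the runtime.
    name = request.args.get("name", "world")
    return f"Hello, {name}!"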
Storage
Block (Persistent Disk)
• Data is split into evenly sized blocks.
• Directly accessed by the operating system.
• Supports only a single write volume.
• Use when you need a virtual hard drive attached to a VM.
File (Filestore)
• A file is stored with data and metadata.
• Multiple connections via a network share.
• Supports multiple reads; writing locks the file.
• Use when you need a file share where multiple users or VMs need to access the same drive.
Object (Cloud Storage)
• An object is stored with data, metadata and a unique ID.
• Scales with no file limit or storage limit.
• Supports multiple reads and writes (no locks).
• Use when you just want to upload files and not have to worry about the underlying infrastructure. Not intended for high IOPS.

Cloud Storage
Cloud Storage is a serverless object storage service. You don’t have to worry about the underlying disks, right-sizing, availability or durability. You only pay based on storage and download.
• Files are called Objects; folders are called Buckets.
• Unlimited storage with no minimum object size.
• Worldwide accessibility and worldwide storage locations.
• Low latency (time to first byte typically tens of milliseconds).
• High durability (99.999999999% annual durability).
• Geo-redundancy if the data is stored in a multi-region or dual-region.
• A uniform experience with Cloud Storage features, security, tools, and APIs.
Available Storage Classes
• Standard Storage (0-day min) — when you are frequently accessing files; the least cost-effective at-rest storage.
• Nearline Storage (30-day min) — when you will only access a file about once per month; cheaper than Standard.
• Coldline Storage (90-day min) — higher access cost than Nearline but lower at-rest cost.
• Archive Storage (365-day min) — very slow retrieval, very cost-effective, for data rarely or never intended to be accessed.
The minimum storage duration is the minimum number of days a file needs to remain in a storage class before deletion; if it is deleted prematurely, an early-deletion charge applies.
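A minimal sketch of uploading an object with the google-cloud-storage client library (pip install google-cloud-storage); the bucket and file names are hypothetical:

from google.cloud import storage

client = storage.Client()                       # uses your default credentials/project
bucket = client.bucket("my-example-bucket")     # buckets hold objects
blob = bucket.blob("reports/2024/summary.csv")  # object names can look like folder paths
blob.upload_from_filename("summary.csv")        # upload a local file as an object

There are no disks to size or capacity to plan; you simply pay for what is stored and downloaded.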

Networking
Virtual Private Cloud (VPC) is a logically isolated section of the Google Cloud network where you can launch Google Cloud resources.
You choose a range of IPs using a CIDR range, e.g. a CIDR range of 10.0.0.0/16 = 65,536 IP addresses.
Subnets are a logical partition of an IP network into multiple smaller network segments: you are breaking up your VPC’s IP range into smaller networks.
Subnets need to have a smaller CIDR range than the VPC to represent their portion, e.g. a subnet CIDR range of 10.0.0.0/24 = 256 IP addresses.
A public subnet is one that can reach the internet. A private subnet is one that cannot reach the internet.

Networking services
• Cloud Armor — Help protect your services against DoS and web attacks.
• Cloud Load Balancing — Scale and distribute app access with high-performance load balancing.
• Cloud CDN — Cache your content close to your users using Google's global network.
• Cloud NAT — Provision application instances without public IP addresses while allowing them to access the internet.
• Cloud DNS — Publish and manage your domain names using Google's reliable, resilient, low-latency DNS serving.
• Traffic Director — Deploy global load balancing across clusters and configure sophisticated traffic control policies for open service mesh.
• Cloud Interconnect — Connect your infrastructure to Google Cloud on your terms, from anywhere.
• Cloud VPN — Securely extend your on-premises network to Google's network through an IPsec VPN tunnel.
• Cloud Router — Dynamically exchange routes between your Google Cloud network and your on-premises networks using Border Gateway Protocol (BGP).
• Network Intelligence Center — Use a single console for comprehensive network monitoring, verification, and optimization.
• Network Telemetry — Track network flows for monitoring, forensics, real-time security analysis, and expense optimization.
• Network Service Tiers — Optimize your network for performance or cost.
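A quick way to sanity-check the CIDR arithmetic above using Python's standard library (no GCP dependency); the ranges match the slide's example VPC and subnet:

import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.0.0/24")

print(vpc.num_addresses)      # 65536 addresses in the VPC range
print(subnet.num_addresses)   # 256 addresses in the subnet
print(subnet.subnet_of(vpc))  # True – the subnet is carved out of the VPC range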
Networking
• Private Google Access — allows your instances to reach Google APIs and services using an internal IP address rather than a public IP address.
• Shared VPC — share subnets with other projects; connect resources from multiple projects to a common VPC.
• VPC network peering — privately connect two VPC networks, which can reduce latency and cost, and increase security.
• Serverless VPC Access — allows Cloud Functions, Cloud Run (fully managed) services and App Engine standard environment apps to access resources in a VPC network using those resources' private IPs.

BigQuery
BigQuery is a serverless, fully managed data warehouse and analytics engine designed for fast, scalable data processing across clouds.
• BigQuery Studio is a workspace for managing and analyzing data.
• Use familiar SQL in the editor to query data and run analyses.
• Easily connect to various data sources and BI tools for smooth data import.
• Quickly browse datasets and tables through the user-friendly interface.
• Use the UI, command line, or APIs, making it simple for both beginners and experts.
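A minimal sketch of running a SQL query with the google-cloud-bigquery client library (pip install google-cloud-bigquery); the query uses a public sample dataset, and any standard SQL works the same way:

from google.cloud import bigquery

client = bigquery.Client()  # billed per query under on-demand pricing

sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(sql).result():  # result() waits for the query job to finish
    print(row.name, row.total)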

BigQuery Data Workflow
BigQuery processes raw data from multiple sources, refines it, and makes it accessible for analysis through tools like Looker, Google Data Studio, and BigQuery ML, enabling data-driven insights and machine learning.
• Data ingestion — sources include Cloud Storage, Google Sheets, databases, applications, and file uploads.
• Data prep and storage — BigQuery stores the raw data and refines it into refined data.
• Data analysis and ML — BigQuery ML, Looker, Google Data Studio, partner BI services, and Cloud AI Platform consume the refined data.
• Data import — import data in various formats like CSV, JSON, Avro, Parquet, and Google Sheets for analysis (Excel files aren't supported directly).

BigQuery Key Features
• Serverless & Managed: No need for infrastructure management — Google handles scaling and updates.
• Real-Time & Flexible: Supports continuous data streaming and handles structured and unstructured data.
• Multicloud Analytics: Analyze data across multiple clouds using open formats like Apache Iceberg.
• Efficient Storage & Querying: Columnar storage for fast queries, with built-in analytics and machine learning.
• Security & Governance: Centralized access control, encryption, and compliance tools.
• Flexible Pricing: On-demand (BigQuery operates on a pay-per-query model) or reserved pricing to optimize costs.


Looker
Looker is a business intelligence (BI) tool that allows users to explore, visualize, and share their company's data to make informed business decisions.
Looker lets non-technical users analyze data through easy-to-use dashboards and drag-and-drop tools, allowing them to create custom reports and insights without advanced data skills.
Looker offers three platform editions:
• Standard: Ideal for small teams with under 50 users.
• Enterprise: Designed for internal BI and analytics at scale.
• Embed: Ideal for external analytics and large-scale custom apps.

Looker Features
• Unified Data Platform: Real-time access to multiple data sources for consistent, up-to-date insights across teams.
• Personalization & Customization: Quickly create custom dashboards and reports (Looks) for personalized insights.
• Collaboration & Sharing: Easily share data and reports via email, links, or integrated tools.
• Development & Integration: Use LookML and APIs to customize data models and embed insights into other apps.

BigQuery in Looker
BigQuery in Looker lets you explore and visualize BigQuery data directly within Looker, making it easy to analyze large datasets without needing complex queries. Looker sends SQL to BigQuery and the results are returned to Looker.
• Query BigQuery Data in Real-Time: Get insights without waiting or delays.
• No Data Transfers: No need to move data, keeping workflows smooth and fast.
• Unified Access: Intuitive interface for exploring data without needing SQL skills.
• Seamless Integration: Combine BigQuery's power with Looker's visualizations for fast, actionable insights.

Internal Services
These are Google Cloud's internal services. Internal services are the underlying infrastructure to many Google Cloud services.
• Spanner — Globally-consistent, scalable relational database. Cloud Spanner is the external offering of this service.
• Borg — A cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines.
• Chubby — A distributed lock manager (DLM) as a service that temporarily prevents files and records from being used by another user or operation.
• Colossus — Cluster-level file system, successor to the Google File System (GFS); it provides the underlying infrastructure for all Google Cloud storage services, from Firestore to Cloud SQL to Filestore to Cloud Storage.
What is Apigee?
Apigee Corp. was an API management and predictive analytics software provider before its merger into Google Cloud.
Apigee is a founding member of the OpenAPI Initiative.
• The OpenAPI Specification was originally known as the Swagger Specification; OpenAPI 3.0 is the current major version.
The OpenAPI Specification is an open-source standard for writing a declarative structure of an Application Programming Interface (API). It can be written in either JSON or YAML format.
Cloud Service Providers (CSPs) will have a fully-managed API service offering known as an API Gateway. These API Gateways generally support the OpenAPI standard so you can quickly import and export your APIs.

API Management
• Apigee API Platform (API Gateway) — Develop, secure, deploy, and monitor your APIs everywhere. Expensive, but has many advanced features.
• Cloud Endpoints (API Gateway) — Develop, deploy, and manage APIs on Google Cloud. Cheap and simple, with good integrations with App Engine.
• API Analytics — Get insight into operational and business metrics for your APIs.
• API Monetization — Realize value from your APIs with a flexible, easy-to-use solution.
• Developer Portal — Create a lightweight portal that enables developers and API teams, using a turnkey self-service platform.
• Apigee Sense — Add intelligent behavior detection to protect APIs from attacks.
• Apigee Hybrid — Manage APIs on-premises, on Google Cloud, or in a hybrid environment.
• Cloud Healthcare API — Help secure APIs that power actionable healthcare insights.

Data Analytics
• BigQuery — Understand your data using a fully managed, highly scalable data warehouse with built-in ML.
• Dataproc — Perform batch processing, querying, and streaming using a managed Apache Spark and Apache Hadoop service.
• Cloud Composer — Create, schedule, monitor, and manage workflows using a fully managed orchestration service built on Apache Airflow.
• Google Data Studio — Tell great data stories to support better business decisions.
• Pub/Sub — Ingest event streams from anywhere, at any scale.
• Dataflow — Develop real-time batch and stream data processing pipelines (Apache Beam).
• Data Catalog — Discover and understand your data using a fully managed and scalable data discovery and metadata management service.
• Cloud Data Fusion — Quickly build and manage data pipelines using fully managed, code-free data integration with a graphical interface.
• Cloud Life Sciences — Process, analyze, and annotate genomics and biomedical data at scale using containerized workflows.
• Dataprep by Trifacta — Explore, clean, and prepare data for analysis.

Dataproc vs Dataflow vs Cloud Data Fusion
• Dataproc (open-source pipelines) — Perform batch processing, querying, and streaming using a managed Apache Spark and Hadoop service. Apache Spark is known for being one of the fastest tools for ELT jobs.
• Dataflow (fully-managed pipelines) — Uses Apache Beam. Fully-managed batch and streaming pipelines: you don't need to balance work, scale workers, or do any other cluster management.
• Cloud Data Fusion (visually build pipelines) — A no-code enterprise solution for building ETL pipelines via a drag-and-drop interface, with 150+ preconfigured connectors and transformations.
Developer Tools
• Artifact Registry — Store, manage, and secure container images and language packages.
• Cloud Source Repositories — Manage code and extend your Git workflow by connecting to Cloud Build, App Engine, Cloud Logging, Cloud Monitoring, Pub/Sub, and more.
• Cloud SDK — Install a command-line interface to script and manage Google Cloud products from your own computer.
• Cloud Scheduler — Schedule batch jobs, big data jobs, and cloud infrastructure operations using a fully managed cron job service.
• Container Registry — Store, manage, and secure your Docker container images.
• Cloud Tasks — Asynchronously execute, dispatch, and deliver distributed tasks.
• Cloud Code — Extend your IDE with tools to write, debug, and deploy Kubernetes applications.
• Cloud Code for IntelliJ — Debug production cloud apps inside IntelliJ.
• Cloud Build — Continuously build, test, and deploy containers, Java archives, and more using the Google Cloud infrastructure.
• Tools for PowerShell — Use PowerShell to script, automate, and manage Windows workloads running on Google Cloud.
• Tools for Visual Studio — Develop ASP.NET apps in Visual Studio on Google Cloud.
• Tools for Eclipse — Develop apps in the Eclipse IDE for Google Cloud.
• Gradle App Engine Plugin — Build your App Engine projects using Gradle.
• Maven App Engine Plugin — Build and deploy your App Engine projects using Maven.
• Firebase Test Lab — Test your mobile apps across a wide variety of devices and device configurations.
• Firebase Crashlytics — Get clear, actionable insight into app issues.
• Tekton — Create CI/CD-style pipelines using Kubernetes-native building blocks.
• Workflows — Orchestrate and automate Google Cloud and HTTP-based API services with serverless workflows.
• Eventarc — Build event-driven solutions by asynchronously delivering events from Google services, SaaS, and your own apps.

Hybrid and Multi-Cloud
• Anthos — Modernize existing apps, and build new apps rapidly in hybrid and multi-cloud environments, while enabling consistency between on-premises and cloud environments.
• Anthos deployed on VMware — Modernize existing apps and build new apps on your VMware environments.
• Anthos GKE — Deploy, manage, and scale containerized applications on Kubernetes, powered by Google Cloud.
• Anthos Config Management — Automate policy and security at scale for your hybrid Kubernetes deployments.
• Cloud Run for Anthos — Easily leverage the benefits of combining Kubernetes and serverless.
• Apigee API Management — Develop, secure, deploy, and monitor your APIs everywhere.
• Google Cloud Marketplace for Anthos — Easily deploy containerized apps that feature prebuilt deployment templates and consolidated billing.
• Migrate for Anthos — Migrate VMs from on-premises or other clouds directly into containers in GKE.
• Operations — Aggregate metrics, logs, and events from your infrastructure to get signals and to speed analysis.
• Traffic Director — Deploy global load balancing across clusters and configure sophisticated traffic control policies for open service mesh.

Internet of Things
Internet of Things (IoT) devices are physical objects embedded with sensors, software and other technologies that stream data to cloud services or other edge devices.
An edge device is a device that is an entry point to a service provider network.
• IoT Core — Securely connect and manage IoT devices using a fully managed service.
• Example IoT devices: drones, smart plant health sensors, video security, IoT kits, conversational AI devices, temperature control, home assistants.
Cloud Deployment Manager
Infrastructure as Code (IaC) is the process of managing and provisioning cloud services through machine-readable definition files (e.g. YAML or JSON files) rather than manual configuration.
Cloud Deployment Manager is Google Cloud's IaC service. You write YAML configuration files to define your infrastructure, and deploy resources using the gcloud CLI.

Media and Gaming
• Game Servers — Deliver seamless multiplayer gaming experiences to a global player base. Fully manages Agones, an open source game server management project that runs on Kubernetes.
• OpenCue — Manage complex media rendering tasks using an open source render manager.
• Transcoder API — Convert video files and package them for optimized delivery to web, mobile and connected TVs.

Operations Suite
Google's Operations Suite allows you to monitor, log, trace, and profile your apps and services.
• Cloud Monitoring — Provides visibility into the performance, availability, and overall health of cloud-powered applications.
• Service Level Monitoring — Define and measure availability, performance and other service levels for cloud-powered applications.
Cloud Logging and Error Reporting
• Cloud Logging — Store, search, analyze, monitor, and alert on log data and events from Google Cloud and AWS.
• Error Reporting — Identify and understand application errors.
Application Performance Management (APM)
• Cloud Trace — Find performance bottlenecks in production.
• Cloud Debugger — Investigate code behavior in production.
• Cloud Profiler — Continuously gather performance information using a low-impact CPU and heap profiling service.

Other Google Products
• Google Maps Platform — Integrate static and dynamic maps into your apps.
• Chrome Enterprise — Use Chrome management policies to meet productivity and security needs.
Firebase
Firebase is Google's fully-managed platform for rapidly developing and deploying web and mobile applications. It is Platform as a Service utilizing serverless technology.
Firebase offers the following services and features:
• Cloud Firestore • Test Lab
• Machine Learning • App Distribution
• Cloud Functions • Google Analytics
• Authentication • In-App Messaging
• Hosting • Predictions
• Cloud Storage • A/B Testing
• Realtime Database • Cloud Messaging
• Crashlytics • Remote Config
• Performance Monitoring • Dynamic Links
Firebase is an alternative to Google Cloud for users who want to focus on building and deploying their application in a highly opinionated framework.

Pub/Sub
Google Cloud Pub/Sub is a messaging service that lets different applications communicate by sending and receiving messages in real time.
• Streaming & Real-Time Processing: Pub/Sub supports low-latency messaging for real-time data pipelines, enabling fast event-driven workflows.
• Publisher & Subscriber Model: Publishers send events to a topic, and subscribers receive them asynchronously, separating event creation from processing.
• Seamless Integrations: Works with tools like Dataflow for streaming, BigQuery for analytics, and Cloud Storage for distribution.
• Use Cases: Ideal for real-time application data, IoT streams, cache refreshing, load balancing, and database replication.
There are two service options:
• Standard Pub/Sub: Fully managed, reliable, and auto-scalable.
• Pub/Sub Lite: Lower-cost, with manual capacity management and zonal storage.
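A minimal sketch of the publisher side using the google-cloud-pubsub library (pip install google-cloud-pubsub); the project and topic IDs are hypothetical:

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "orders")  # projects/my-project/topics/orders

# Messages are raw bytes; extra keyword arguments become string attributes (metadata).
future = publisher.publish(topic_path, b'{"order_id": 123}', source="checkout")
print(future.result())  # blocks until Pub/Sub confirms and returns the message ID

Subscribers attached to the topic's subscriptions receive this message asynchronously, which is the decoupling described above.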

Pub/Sub message lifecycle
1. A publisher sends a message to a Pub/Sub topic.
2. The message is stored.
3. Pub/Sub delivers the message to all topic subscriptions (in this case, one subscription).
4. The subscription forwards the message to a subscriber application.
5. The subscriber processes the message and sends an acknowledgment. Once the message is acknowledged, Pub/Sub deletes it from storage.

Dataflow
Dataflow is a unified stream and batch data processing service that's serverless, fast, and cost-effective.
• Stream analytics — ingest, process, and analyze fluctuating volumes of real-time data for real-time business insights.
• Real-time AI — stream events to Google Cloud's Vertex AI and TensorFlow Extended (TFX).
  • ML use cases: predictive analytics, fraud detection, real-time personalization, anomaly detection.
  • Supported with CI/CD for ML through Kubeflow pipelines.
• IoT streaming — sensor and log data processing.
Dataflow features:
• Dataflow SQL — use your SQL skills to develop streaming Dataflow pipelines right from the BigQuery web UI.
• Flexible Resource Scheduling (FlexRS) — advanced scheduling techniques to reduce batch processing costs.
• Dataflow templates — easily share your pipelines across your organization and team.
• Vertex AI Notebook integration.
• Private IPs — disable public IPs and operate within the GCP network for added security.
• Horizontal scaling — automatically scales.
• Apache Beam — integrates with Apache Beam.
What is Apache Beam? An open source, unified model for defining both batch and streaming data-parallel processing pipelines. (A minimal pipeline sketch follows after the Migration list below.)
• Dataflow Prime — serverless, no-ops, auto-tuning architecture.
  • Vertical Autoscaling — you don't have to spend days determining the optimum configuration of resources for your pipeline.
  • Right Fitting — custom resource configuration for each stage of the data pipeline, reducing waste.

Migration
• Database Migration Service (DMS) — when you're migrating open-source relational databases. Serverless, easy, minimal-downtime migrations to Cloud SQL.
• BigQuery Data Transfer Service — automate scheduled data movement into BigQuery using a fully managed data import service.
• Migrate to Virtual Machines (formerly Migrate for Compute Engine) — when you're migrating VMs. Migrate servers and VMs from on-premises or another cloud to Compute Engine.
• Migrate for Anthos — when you're migrating containers. Migrate VMs from on-premises or other clouds directly into containers in GKE.
• Cloud Storage Transfer Service — when you are migrating storage data. Transfer data between cloud storage services such as AWS S3 and Cloud Storage.
• Transfer Appliance — when you have TBs of data and it's faster to ship physical drives. Ship large volumes of data to Google Cloud using trackable storage.
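As referenced above, a minimal sketch of a batch Apache Beam pipeline (pip install apache-beam); run locally it uses the DirectRunner, and the same code pointed at the DataflowRunner (with project/region options) executes on Dataflow:

import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["error: disk full", "ok", "error: timeout"])
        | "KeepErrors" >> beam.Filter(lambda line: line.startswith("error"))
        | "Count" >> beam.combiners.Count.Globally()
        | "Print" >> beam.Map(print)
    )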

Key Cloud Migration Terms
• Workload: The applications, services, or tasks that you plan to migrate to the cloud.
• Retire: Decommissioning workloads that are no longer needed or useful.
• Retain: Keeping workloads in their current environment, typically due to complexity or compliance reasons.

Types of Migration
There are four main types of migrations from on-premises to the cloud. They range from easier to implement with limited cloud benefits, to labour intensive with full cloud benefits:
• Rehost (Lift and Shift) — Lift workloads with minor adjustments. Little to no modification; takes the least advantage of the cloud; the fastest migration strategy.
• Replatform (Lift and Optimize) — Leverage more cloud features without major changes. Slower than rehost but gains more cloud benefits.
• Refactor (Move and Improve) — Refactor your existing application to fit cloud-native features. Take advantage of most of the cloud offerings. A slower migration process but offers better performance and scalability.
• Rebuild (Rip and Replace) — Rebuild your app from scratch. Take advantage of the maximum value of cloud offerings. Can take the longest amount of time.
Types of Migration — Rehost (Lift and Shift)
Rehost (Lift-and-Shift): Move workloads from a source environment to a target environment with minor or no modifications or refactoring.
Ideal when:
• a workload can operate as-is in the target environment
• there is little or no business need for change
Considerations:
• Requires the least amount of time because the amount of refactoring is kept to a minimum.
• The team can continue to use the same set of tools and skills that they were using before.
• Doesn't take full advantage of cloud platform features: horizontal scalability, fine-grained pricing, highly managed services.

Types of Migration — Replatform (Lift and Optimize)
Replatform (Lift and Optimize): Move workloads to the cloud with minor optimizations for cloud benefits.
Ideal when:
• The app can run in the cloud but needs slight optimization.
• Small changes improve cloud performance or cost-efficiency.
Considerations:
• Requires more effort than lift-and-shift.
• Optimizations might need code or configuration changes.
• Learning new cloud features or tools may be necessary.

Types of Migration — Refactor (Move and Improve)
Refactor (Move and Improve): Modify and modernize the workload while migrating to take advantage of cloud-native capabilities.
Ideal when:
• The current app architecture isn't cloud-ready.
• Major updates or improvements are needed for performance.
Considerations:
• Takes more time than a basic migration.
• Requires refactoring the code during migration.
• Demands extra effort and new skills for optimizing in the cloud.

Types of Migration — Rebuild (Remove and Replace)
Rebuild (Remove and Replace; sometimes called Rip and Replace): Decommission an existing app and completely redesign and rewrite it as a cloud-native app.
Ideal when:
• the current app isn't meeting your goals
• you want to remove legacy technical debt
Considerations:
• Requires the most amount of time to develop.
• Requires the most amount of learning.
Migration Path
There are four phases of your migration:
• Assess — Perform a thorough assessment and discovery of your existing environment in order to understand your app and environment inventory, identify app dependencies and requirements, perform total cost of ownership calculations, and establish app performance benchmarks.
• Plan — Create the basic cloud infrastructure for your workloads to live in and plan how you will move apps. This planning includes identity management, organization and project structure, networking, sorting your apps, and developing a prioritized migration strategy.
• Deploy — Design, implement and execute a deployment process to move workloads to Google Cloud. You might also have to refine your cloud infrastructure to deal with new needs.
• Optimize — Begin to take full advantage of cloud-native technologies and capabilities to expand your business's potential in areas such as performance, scalability, disaster recovery, costs and training, as well as opening the doors to machine learning and artificial intelligence integrations for your app.

Migration Path — Phase 1 (Assess)
In the assessment phase, you gather information about the workloads you want to migrate and their current runtime environment.
• Take inventory — Build a list of all of your machines, hardware specifications, operating systems, and licenses.
• Catalog apps — Build a catalog matrix to help you organize apps into categories based on their complexity and risk in moving to Google Cloud.
• Educate your organization about Google Cloud — Train and certify your software and network engineers on how the cloud works and which Google Cloud products to use.
• Experiment and design proofs of concept — Choose a proof of concept (PoC) and implement it.
• Calculate total cost of ownership (TCO) — Compare your costs on Google Cloud with the costs you have today. Use the Google Cloud pricing calculator.
• Choose which workloads to migrate first — Identify apps with features that make them likely first-movers. Starting with a less complex app lowers your initial risk because you can later apply your team's new knowledge to harder-to-migrate apps.

Migration Path — Phase 2 (Plan)
In the plan phase, you provision and configure the cloud infrastructure and services that will support your workloads on Google Cloud.
Establish user and service identities:
• Google Accounts — an account that usually belongs to an individual user that interacts with Google Cloud.
• Service Accounts — an account that usually belongs to an app or a service, rather than to a user.
• Google Groups — a named collection of Google Accounts.
• Google Workspace domains — a virtual group of all the Google Accounts that have been created in an organization's Google Workspace account.
• Cloud Identity domains — like Google Workspace domains, but they don't have access to Google Workspace applications.
Design your resource organization: organize your resources using Google's resource hierarchy.
• Organizations are the root of a resource hierarchy and represent a real organization, such as a company.
• Folders are an additional layer of isolation between projects and can be seen as sub-organizations.
• Projects are the base-level organizational entities and must be used to access other Google Cloud resources.
• Hierarchy architectures: environment-oriented, function-oriented, granular access-oriented.
Define groups and roles for resource access: set up the groups and roles that grant the necessary access to resources.
Design your network topology and establish connectivity: set up the network topology and connectivity from your existing environment to Google Cloud (Cloud VPN, peering, Cloud Interconnect).

Migration Path — Phase 3 (Deploy)
In the deploy phase, implement a deployment process and refine it during the migration.
• Fully manual deployments — let you quickly experiment with the platform and the tools, but are also error prone, often not documented, and not repeatable.
• Configuration management (CM) tools — configure an environment in an automated, repeatable, and controlled way, e.g. run remote commands on a virtual machine that check the state of an instance and remediate it to the desired configuration/state.
• Container orchestration — consider using Kubernetes so you don't have to worry about the underlying infrastructure and the deployment logic, e.g. Google Kubernetes Engine (GKE).
• Deployment automation — automate the deployment process by implementing a continuous integration and continuous delivery (CI/CD) pipeline.
• Infrastructure as code (IaC) — write a script that defines resources to be created or updated in a single deployment action, and share and stand up entire workflows and environments easily. IaC tools: Google Cloud Deployment Manager, HashiCorp Terraform.
Migration Path — Phase 4 (Optimize)
In the optimize phase, start optimizing your target environment.
• Build and train your team — Train your development and operations teams to take full advantage of the new cloud environment.
• Monitor everything — Monitoring is the key to ensuring that everything in your environment is working as expected, e.g. Prometheus, Google Cloud Logging, Google Cloud Monitoring.
• Automate everything — Manual operations are exposed to a high error risk and are also time consuming. Automation leads to cost and time savings, and reduces risk, e.g. Google Cloud Composer (Apache Airflow), Spinnaker.
• Codify everything — By implementing processes such as Infrastructure as Code and Policy as Code, you make your environment fully auditable and repeatable.
• Use managed services instead of self-managed ones — e.g. Cloud SQL, AutoML, Google Kubernetes Engine (GKE), App Engine.
• Optimize for performance and scalability:
  • Horizontal scaling — add or remove machines for compute, storage or databases.
  • Vertical scaling — increase (resize) the underlying machine, e.g. vCPU and memory, for compute, storage and databases.
• Reduce costs — take advantage of sustained use discounts (SUD), committed use discounts (CUD), and flat-rate pricing, e.g. BigQuery.

Migrate to Virtual Machines
Formerly Migrate for Compute Engine.
The Migrate to Virtual Machines service facilitates the transfer of VMs to Google Compute Engine, offering a suite of features to streamline the migration process:
• Enables continuous replication of disk data from source VMs to Google Cloud, supporting an efficient migration workflow.
• Designed to minimize downtime, allowing for ongoing operations in the source environment while migration takes place.
• Provides the capability to quickly clone and test migrated VMs, ensuring they function correctly in the new environment.
• Allows for the management and execution of all migration-related tasks directly within the Google Cloud Console.
• Supports migration to advanced machine types such as C3, H3, and M3, which utilize NVMe and gVNIC for improved performance.

Anthos
Anthos is a modern application management platform used for managing hybrid architectures that span from Google Cloud to other clouds such as AWS or to on-premises datacenters running VMware.
Anthos is a single control plane to manage Kubernetes compute in hybrid scenarios.
Core components of Anthos:
• Infrastructure, container, and cluster management
• Managed service mesh
• Multicluster management
• Configuration management
• Migration
• Service management
• Serverless
• Secure software supply chain
• Logging and monitoring
• Marketplace

Migrate for Anthos (and GKE)
Migrate for Anthos and Google Kubernetes Engine (GKE) is a tool to move and automatically convert workloads directly into containers in Google Kubernetes Engine (GKE) and Anthos.
With Migrate for Anthos, you can migrate your VMs from supported source platforms to:
• Google Kubernetes Engine (GKE)
• Anthos
• Anthos clusters on VMware
• Anthos clusters on AWS
Use auto-generated container artifacts, including container images, Dockerfiles, deployment YAMLs and persistent data volumes, to deploy migrated workloads and integrate with services such as Anthos Service Mesh, Anthos Config Management, Stackdriver, and Cloud Build for maintenance using CI/CD pipelines.
Migrate for Anthos is offered at no charge and no Anthos subscription is required when migrating to GKE. Charges for other GCP services (e.g. compute, storage, network) still apply.
Storage Transfer Service
Storage Transfer Service allows you to quickly import online data into Cloud Storage. Set up a repeating schedule for transferring data, as well as transfer data within Cloud Storage from one bucket to another.
It enables you to:
• Move or back up data to a Cloud Storage bucket either from other cloud storage providers or from your on-premises storage.
• Move data from one Cloud Storage bucket to another, so that it is available to different groups of users or applications.
• Periodically move data as part of a data processing pipeline or analytical workflow.
• Schedule one-time transfer operations or recurring transfer operations.
• Delete existing objects in the destination bucket if they don't have a corresponding object in the source.
• Delete data source objects after transferring them, or retain them based on user configuration.
• Schedule periodic synchronization from a data source to a data sink with advanced filters based on file creation dates, file names, and the times of day you prefer to import data.

Transfer Appliance
Transfer Appliance is a hardware appliance you can use to securely migrate large volumes of data — hundreds of terabytes up to 1 petabyte.
It comes in two types: Rackable (7 TB, 40 TB, 300 TB) and Freestanding (40 TB, 300 TB).
You can mount Transfer Appliance as an NFS volume, making it easy to drag and drop files, or rsync, from your current NAS to the appliance.
When to use Transfer Appliance:
• Your data size is greater than or equal to 10 TB.
• It would take more than one week to upload your data over the network.

Transfer Appliance
Security features (safe to connect):
• Tamper resistant — cannot be easily opened; tamper-evident tags are applied to the shipping case.
• Ruggedized.
• Trusted Platform Module (TPM) chip — an immutable root filesystem and verification that software components haven't been tampered with.
• Hardware attestation — validate the appliance before you can connect it to your device and copy data to it.
Security features (safe in transit):
• AES 256 encryption.
• Customer-managed encryption keys.
• NIST 800-88 compliant data erasure.
Performance features:
• All SSD drives — no moving parts, very fast IOPS.
• Multiple network connectivity options — 10 Gbps or 40 Gbps transfer speed.
• Scalability with multiple appliances — use multiple appliances to increase transfer speed.
• Globally distributed processing — ships quickly to and from the datacenter to Google Cloud.
• Minimal software — uses common software already on your Linux, Mac, or Windows system.

What is AI?
What is Artificial Intelligence (AI)? Machines that perform jobs that mimic human behavior.
What is Machine Learning (ML)? Machines that get better at a task without explicit programming.
What is Deep Learning (DL)? Machines that have an artificial neural network, inspired by the human brain, to solve complex problems.
What is GenAI? Generative AI is a specialized subset of AI that generates content, e.g. images, video, text, audio.
AI vs Machine Learning
• Functionality — AI focuses on understanding and decision-making. ML learns from data to make predictions or decisions.
• Data Handling — AI analyzes and makes decisions based on existing data. ML uses data to train models and make predictions.
• Applications — AI spans various sectors, including data analysis, automation, natural language processing, and healthcare. ML is used in recommendation systems, fraud detection, and predictive analytics.

AI/ML vs Data Analytics/BI
• Functionality — AI/ML uses models to predict outcomes and automate decision-making. Data Analytics/BI analyzes historical data for insights.
• Data Handling — AI/ML learns from large datasets to make predictions and automate tasks. Data Analytics/BI processes and visualizes existing data to find patterns.
• Applications — AI/ML is used in automation, predictive analytics, personalization, and innovation. Data Analytics/BI is used in reporting, dashboards, and decision-making based on past trends.
• Outcome — AI/ML creates automated processes and continuous learning from data. Data Analytics/BI provides descriptive and diagnostic insights for decision-making.

Supervised vs Unsupervised vs Reinforcement
• Supervised Learning (SL) — Data that has been labeled for training. Task-driven: make a prediction, e.g. classification, regression. Used when the labels are known and you want a precise outcome, when you need a specific value returned.
• Unsupervised Learning (UL) — Data has not been labeled; the ML model needs to do its own labeling. Data-driven: recognize a structure or pattern, e.g. clustering, dimensionality reduction, association. Used when the labels are not known and the outcome does not need to be precise, when you're trying to make sense of data.
• Reinforcement Learning (RL) — There is no data; there is an environment, and an ML model generates data over many attempts to reach a goal. Decision-driven: e.g. game AI, learning tasks, robot navigation.

Supervised Learning Models
What is Classification? Classification is a process of finding a function to divide a dataset into classes/categories.
Classification algorithms:
• Logistic Regression
• K-Nearest Neighbours
• Support Vector Machines
• Kernel SVM
• Naïve Bayes
• Decision Tree Classification
• Random Forest Classification
What is Regression? Regression is a process of finding a function to correlate a dataset with a continuous variable/number.
Regression algorithms:
• Simple Linear Regression
• Multiple Linear Regression
• Polynomial Regression
• Support Vector Regression
• Decision Tree Regression
• Random Forest Regression
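A minimal sketch of supervised classification with scikit-learn (pip install scikit-learn), using the K-Nearest Neighbours algorithm mentioned above; the tiny labeled dataset is illustrative:

from sklearn.neighbors import KNeighborsClassifier

# Labeled training data: features (height_cm, weight_kg) and known classes.
X_train = [[150, 45], [160, 55], [180, 85], [190, 95]]
y_train = ["small", "small", "large", "large"]

model = KNeighborsClassifier(n_neighbors=3)  # "look at my 3 closest neighbours"
model.fit(X_train, y_train)                  # task-driven: learn from labeled examples

print(model.predict([[175, 80]]))            # -> ['large']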
Unsupervised Learning Models
What is Clustering? Clustering is a process of grouping unlabeled data based on similarities and differences.
Clustering algorithms: K-Means, DBScan, K-Modes.
What is Association? Association is the process of finding a relationship between variables, e.g. if someone buys bread, suggest butter.
Association algorithms: Apriori, Eclat, FP-Growth.
What is Dimensionality Reduction? Dimensionality reduction is a process of reducing the amount of data while retaining data integrity. Often used as a pre-processing stage.
Dimensionality reduction algorithms: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Generalized Discriminant Analysis (GDA), Singular Value Decomposition (SVD), Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA, pLSA, GLSA), t-SNE.

What is an ML Model?
What is a model? (in general terms) In general, a model is an informative representation of an object, person or system. Models can be:
• concrete — have a physical form (e.g. a design for a vehicle, a person posing for a painting)
• abstract — expressed as behavioural patterns (e.g. mathematical, computer code, written words)
What is machine learning modeling (an ML model)? An ML model is a function that takes in data and performs an ML algorithm to produce a prediction.
The ML model is trained. It is not to be confused with the training model, which is still learning to make correct predictions; an ML model can be the training model that is deployed once it has been tuned to make good predictions.
Workflow: training data (labeled data) → learning algorithm (training model, with hyper-tuning) → deployed ML model (trained model) → inference on unlabeled data → prediction.
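A minimal sketch of unsupervised clustering with scikit-learn; no labels are provided, and K-Means groups the points itself:

from sklearn.cluster import KMeans

points = [[1, 1], [1.2, 0.8], [0.9, 1.1], [8, 8], [8.2, 7.9], [7.8, 8.1]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1] – two discovered groups
print(kmeans.cluster_centers_)  # the centre of each group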

What is an Algorithm and Function?
What is an Algorithm? A set of mathematical or computer instructions to perform a specific task. An algorithm can be composed of several smaller algorithms. It describes how you do something.
e.g. the k-Nearest Neighbors (k-NN) algorithm can be used to create a supervised classification machine learning algorithm: tell me who my closest neighbours are, and we will infer that I should be considered of the same class. Within k-NN you can use different distance metrics (themselves algorithms), e.g. Euclidean, Hamming, Minkowski, Manhattan.
What is a function? A function is a way of grouping algorithms together so you can call them to compute a result. This sounds like an ML model: k-NN itself is not ML, but when applied to solve an ML problem it becomes an ML algorithm.

What is a Feature?
A feature is a characteristic extracted from our unstructured dataset that has been prepared to be ingested by our ML model to infer a prediction.
ML models generally only accept numerical data, so we prepare our data into a machine-readable format by encoding (we'll revisit encoding later in detail).
What is Feature Engineering? Feature engineering is the process of extracting features from our provided data sources: raw data from one or more data sources is cleaned and transformed into features that are fed to the ML model.
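A minimal sketch of one common feature-engineering step, turning a categorical column into numeric features via one-hot encoding with pandas (pip install pandas); the columns and values are illustrative:

import pandas as pd

raw = pd.DataFrame({
    "name": ["Data", "Worf", "Ro Laren"],
    "species": ["android", "klingon", "bajoran"],  # text – not directly usable by most models
    "height_cm": [175, 190, 168],                  # already numeric
})

features = pd.get_dummies(raw[["species", "height_cm"]], columns=["species"])
print(features)  # species becomes species_android / species_bajoran / species_klingon flags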
What is Inference? What is Training?
What is Inference? Inference is the act of requesting and getting a prediction.
How inference works in ML:
1. Input data
2. Processed by a machine learning model that has been deployed for production use
3. Outputs a prediction
e.g. "Tell me what this is" → Machine Learning Model → Class: Yellow Banana, Confidence: 0.9
Inference textbook definition: inference refers to steps in reasoning, moving from premises to logical consequences.
What is Training? Training is the process of teaching a machine learning model to recognize patterns by feeding it data, so it can make predictions or decisions based on new, unseen data.

What are Parameters and Hyperparameters?
What is a model parameter?
• A variable that configures the internal state of a model and whose value can be estimated.
• The value of the parameter is not manually set; it is learned and outputted during training.
• Parameters are used to make predictions.
What is a model hyperparameter?
• A variable that is external to the model and whose value cannot be estimated from the data.
• The value of the hyperparameter is manually set before the training of the model.
• Hyperparameters are used to estimate the model's parameters, e.g. learning_rate, epochs, batch_size.

AI and ML Services
Vertex AI is Google Cloud's unified ML platform for building ML solutions end-to-end, spanning data readiness, feature engineering, training and hyperparameter tuning, model serving, model understanding and tuning, edge models, model monitoring, and model management. Its components include:
• AutoML: Vision, Video, Language, Translation, Tables
• Data Labeling, Feature Store, Training, Prediction, Hybrid AI, Continuous Monitoring, Metadata
• Datasets, Vizier Optimization, Explainable AI, Experiments
• AI Accelerators and Pipelines (orchestration)
• Deep Learning Environment (DL VMs, DL Containers) and Notebooks
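A minimal sketch of the parameter vs hyperparameter distinction using scikit-learn's logistic regression; the data is illustrative:

from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

# Hyperparameters: set by hand *before* training.
model = LogisticRegression(C=1.0, max_iter=100)

model.fit(X, y)

# Parameters: learned *from the data* during training.
print(model.coef_, model.intercept_)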
TensorFlow
TensorFlow is a low-level deep learning machine learning framework created by the Google Brain Team.
TensorFlow is written in Python, C++ and CUDA; there are APIs that allow you to use various other languages.
What is a tensor? A tensor is a multi-dimensional array, e.g. tf.Tensor, similar to NumPy ndarray objects. tf.Tensors can reside in accelerator memory (like a GPU).
Google created its own hardware called Tensor Processing Units (TPUs), specifically optimized for TensorFlow and the tensor data structure.
You write TensorFlow in Python. An example of an ML model in TensorFlow (technically using Keras) is sketched at the end of this section.
TensorFlow Enterprise: Accelerate and scale ML workflows on the cloud with compatibility-tested and optimized TensorFlow along with enterprise-ready services and support.

AI and ML Services
Vertex AI is the unification of AI Platform and the addition of AutoML, to offer an end-to-end solution for all your custom ML and DL needs.
AI Platform (deprecated):
• Preparing a dataset for supervised training with Data Labeling
• Notebooks to write and document building ML models
• A Model Registry to hold all your trained models
• Pipelines for setting up automated CI/CD to rapidly deploy new changes (known as MLOps)
AutoML: Easily train high-quality, custom ML models. You upload your data, choose what you want to predict, and it does the rest.
AutoML Tables: Build and deploy machine learning models on structured data.
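As referenced above, a minimal sketch of an ML model written with TensorFlow's Keras API (pip install tensorflow); the layer sizes and hyperparameter values are illustrative:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # hyperparameter
              loss="binary_crossentropy",
              metrics=["accuracy"])

# x_train / y_train would be your prepared feature and label arrays;
# epochs and batch_size are also hyperparameters.
# model.fit(x_train, y_train, epochs=10, batch_size=32)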

ML/DL Environment
To prepare, train, tune, and predict for machine learning models you need to use compute optimized and specialized for ML and DL tasks.
An ML compute solution will be:
• Prepackaged with a specific ML framework and data-science libraries
• Either CPU or GPU based (GPUs being more powerful, very expensive, and suited for DL)
• Deep Learning VM Images — Deploy VM images that are optimized for data science and ML tasks.
• Deep Learning Containers — Take advantage of preconfigured and optimized containers for deep learning environments, e.g. a container for a Notebook instance to run TensorFlow Enterprise 2.5.
• Cloud GPUs — Add GPUs to your workloads for machine learning, scientific computing, and 3D visualization.

Notebooks
A web-based application for authoring documents that combine:
• live code
• narrative text
• equations
• visualizations
A notebook makes it easy to code all the steps of an ML solution while intermixing documentation. It makes it easy to rerun segments of code for a fast and iterative developer experience.
Vertex AI's Notebooks are powered by the JupyterLab IDE. Jupyter is the industry standard for interactive notebooks for building ML models or for data analysis.
AI Services
AI is when machines mimic human behaviour or can perform human tasks. AI leverages ML and DL, and generally "AI services" refers to fully-managed ML SaaS offerings.
• Vision AI — Derive insights from images, text, and more using custom or pretrained models.
• Video AI — Enable powerful content discovery and engaging video experiences.
• Natural Language API — Derive insights from unstructured text.
• Translation — Dynamically translate between languages.
• Recommendations AI (part of Retail AI) — Provide a catalog of records; it will make suggestions/recommendations to users, e.g. retail product suggestions.
• Talent Solution — The capability to create, read, update, and delete job postings.
• Document AI — Uses Natural Language Processing (NLP) to train and simulate human review of documents.
• Text-to-Speech — Convert text to natural-sounding speech using ML.
• Speech-to-Text — Convert speech to text using the power of ML.

Conversational AI
Conversational AI is technology that can participate in conversations with humans:
• Chatbots
• Voice assistants
• Interactive voice response systems (IVRS)
Use cases:
• Online customer support — replaces human agents for replying to customer FAQs, shipping questions
• Accessibility — voice-operated UI for those who are visually impaired
• HR processes — employee training, onboarding, updating employee information
• Health care — accessible and affordable health care, e.g. claim processes
• Internet of Things (IoT) — Amazon Alexa, Apple Siri and Google Home
• Computer software — autocomplete search on phone or desktop
Services:
• Agent Assist — Empower human agents with continuous support during calls by identifying intent and providing real-time, step-by-step assistance.
• Dialogflow — Build engaging voice and text-based conversational interfaces.
  • Dialogflow CX — provides an advanced agent type suitable for large or very complex agents.
  • Dialogflow ES — provides the standard agent type suitable for small and simple agents.

BigQuery ML
BigQuery ML lets you build machine learning models directly in BigQuery using SQL, making ML accessible to data analysts without coding expertise.
Types of models BigQuery ML can create:
• Linear Regression
• Logistic Regression
• K-means Clustering
• Time Series Forecasting (ARIMA/Auto-ARIMA)
• Deep Neural Networks (DNN)
• Matrix Factorization
• Principal Component Analysis (PCA)
• And more
These models can be integrated with Google Cloud's Vertex AI for further customization and optimization.

BigQuery ML Key Features
• User-friendly: Empowers SQL users to build ML models without Python or Java.
• Faster development: ML models are created within BigQuery, avoiding the need for data movement.
• Multiple interfaces: Accessible via the Google Cloud Console, the BigQuery API, and more.
• Vertex AI integration: Manage and deploy BigQuery ML models with Vertex AI.
• Supported models: Offers built-in models like linear regression and integrates externally trained models from Vertex AI.
• Pretrained models: Import models from ONNX, TensorFlow, and others for prediction.
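A minimal sketch of creating a BigQuery ML model with plain SQL, submitted through the Python client; the dataset, table, and column names are hypothetical, while the CREATE MODEL ... OPTIONS(model_type=...) syntax is BigQuery ML's own:

from google.cloud import bigquery

client = bigquery.Client()

sql = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_spend, churned
FROM `my_dataset.customers`
"""

client.query(sql).result()  # training runs entirely inside BigQuery

# Predictions are also just SQL, e.g.:
# SELECT * FROM ML.PREDICT(MODEL `my_dataset.churn_model`, TABLE `my_dataset.new_customers`)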
BigQuery ML Workflow
The traditional workflow: streaming/batch data → data processing → BigQuery (data warehouse) → export data → train an ML model (e.g. in Python/R) → deploy the ML model. The BigQuery ML workflow keeps these steps inside BigQuery itself.
Issues solved with BigQuery ML:
• Data governance — how do you control and secure data? → Built-in governance: data and ML are in one place, easier to control.
• Managing resources — who handles the infrastructure? → Automatic resource handling: no need to manage infrastructure.
• Deployment complexity — where do you run the model? → Simplified deployment: models run directly in BigQuery, no extra setup needed.

Responsible AI
What is Responsible AI? A set of broad guidelines to ensure AI is used responsibly.
• Be socially beneficial: Ensure AI benefits society across multiple industries like healthcare, security, and more.
• Avoid creating or reinforcing unfair bias: Prevent AI from amplifying biases, especially related to sensitive traits.
• Be built and tested for safety: Build and test AI to avoid risks and ensure security.
• Be accountable to people: Keep AI under human oversight and control.
• Incorporate privacy design principles: Design AI with privacy safeguards and user control over data.
• Uphold high standards of scientific excellence: Drive innovation and maintain high scientific standards.
• Be made available for ethical use: Limit harmful uses and ensure AI aligns with ethical guidelines.
What is Explainable AI? A method to try and reason about how a model has made decisions.

Considerations when Selecting Google Cloud AI/ML Solutions
• Speed — How fast do you need the model in production? Pre-trained APIs are fast; custom models take time.
• Effort — Effort depends on problem complexity, data readiness, and team skills. Solutions can range from quick implementation to longer custom development.
• Differentiation — How unique does your model need to be? Simple needs fit off-the-shelf models; Vertex AI allows customization for unique cases.
• Required expertise — Assess your team's expertise and needs. Advanced AI projects may need specialized engineers or training (e.g. machine learning engineers, data engineers).

Common Threats
What is a threat? A threat in cloud security is a potential negative action or event, facilitated by a vulnerability, that results in an unwanted impact to a computer system or application.
Google wants you to know the following five types of threats:
• Phishing attacks — Deceptive emails used to steal credentials or sensitive data.
• Ransomware — A type of malicious software (malware) that, when installed, holds data, a workstation or a network hostage until a ransom has been paid.
• DDoS attacks — Servers are overloaded with traffic to disrupt services, leading to downtime and loss of revenue.
• Cloud misconfiguration — Poorly configured cloud environments expose vulnerabilities that can lead to data leaks.
• Insider threats — Internal individuals misuse their access, leading to data theft or other security breaches.
Cloud security vs on-premises security
• Location — Cloud: data and apps are hosted in off-site data centers managed by cloud providers. On-premises: data and apps are hosted on an organization's own servers, giving full control over infrastructure.
• Responsibility — Cloud: providers secure the infrastructure; customers secure data, apps, and access. On-premises: organizations secure the entire stack, from hardware to data.
• Scalability — Cloud: easily scalable to match demand. On-premises: scaling requires additional infrastructure, which can be time-consuming and costly.
• Maintenance and updates — Cloud: providers handle updates and maintenance. On-premises: organizations manage updates, patches, and hardware.
• Cost model — Cloud: OpEx model, pay-as-you-go. On-premises: CapEx model, significant upfront investments.

Identity and Access
• IAM — Establish fine-grained identity and access management for Google Cloud resources.
• Cloud Identity — Easily manage user identities, devices, and applications from one console.
• Identity Platform — Add Google-grade identity and access management to your apps.
• BeyondCorp Enterprise — A zero-trust solution that enables secure access with integrated threat and data protection.
• Identity-Aware Proxy — Use identity and context to guard access to your applications and VMs.
• Managed Service for Microsoft Active Directory — Use a highly available, hardened service running Microsoft Active Directory (AD).
• Resource Manager — Hierarchically manage resources on Google Cloud.
• Security key enforcement — Enforce the use of security keys to help prevent account takeovers.
• Titan Security Keys — Defend against account takeovers from phishing attacks. Security keys made by Google.

Benefits of Google’s Data Centers
• Enhanced security — Zero-trust architecture with custom hardware and strong physical security, including biometric access controls.
• Operational efficiency — Purpose-built servers reduce energy use and improve efficiency, like the Hamina data center with sea-water cooling.
• Scalability — Quickly add hardware to meet growing demands without disrupting services.
• Customization — Complete control over servers allows unique features and innovations.
• Cost & environmental savings — Lower costs and environmental impact through efficient design and renewable energy.

Security
• Access Transparency — Get visibility over your cloud provider through near real-time logs.
• Binary Authorization — Deploy only trusted containers on Kubernetes Engine.
• Cloud Asset Inventory — View, monitor, and analyze Google Cloud and Anthos assets across projects and services.
• Cloud Audit Logs — Gain visibility into who did what, when, and where for all user activity on Google Cloud.
• Cloud Data Loss Prevention — Discover and redact sensitive data.
• Cloud HSM — Protect cryptographic keys with a fully managed hardware security module service.
• Cloud Key Management Service — Manage encryption keys on Google Cloud.
• Security Command Center — Understand your security and data attack surface.
• Shielded VMs — Deploy hardened virtual machines on Google Cloud.
• VPC Service Controls — Protect sensitive data in Google Cloud services using security perimeters.
• Incident Response and Management — Improve your incident median time to mitigation.
User Protection Services
• Phishing Protection — Help protect your users from phishing sites.
• reCAPTCHA Enterprise — Help protect your website from fraudulent activity, spam, and abuse.
• Web Risk — Detect malicious URLs on your website and in client applications.

Secure-By-Design Infrastructure
• Operational and device security — Google develops and deploys infrastructure software using rigorous security practices. Operations teams detect and respond to threats to the infrastructure from both insiders and external actors, 24/7/365.
• Internet communication — Communications over the internet to Google's public cloud services are encrypted in transit. The network and infrastructure have multiple layers of protection to defend customers against denial-of-service attacks.
• Identity — Identities, users, and services are strongly authenticated. Access to sensitive data is protected by advanced tools like phishing-resistant security keys.
• Storage services — Data stored on the infrastructure is automatically encrypted at rest and distributed for availability and reliability. This guards against unauthorized access and service interruptions.
• Service deployment — Any application that runs on the infrastructure is deployed with security in mind. Google doesn't assume any trust between services, and uses multiple mechanisms to establish and maintain trust. The infrastructure was designed to be multi-tenant from the start.
• Hardware infrastructure — From the physical premises to the purpose-built servers, networking equipment, and custom security chips to the low-level software stack running on every machine, the entire hardware infrastructure is Google-controlled, -secured, and -hardened.
• Data centers — Google data centers feature layered security with custom-designed electronic access cards, alarms, vehicle access barriers, perimeter fencing, metal detectors, biometrics, and laser-beam intrusion detection. They are monitored 24/7 by high-resolution cameras that can detect and track intruders. Only approved employees with specific roles may enter.
• Continuous availability — The infrastructure underpins how Google Cloud delivers services that meet high standards for performance, resilience, availability, correctness, and security. Design, operation, and delivery all play a role in making services continuously available.

Compliance Reports Manager
Compliance Reports Manager provides you with easy, on-demand access to critical compliance resources, at no additional cost: downloadable PDFs that prove that GCP is compliant with various compliance and security standards.

Google Cloud Compliance
International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC):
• ISO/IEC 27001 — control implementation guidance
• ISO/IEC 27017 — enhanced focus on cloud security
• ISO/IEC 27018 — protection of personal data in the cloud, e.g. PII
• ISO/IEC 27701 — Privacy Information Management System (PIMS) framework; outlines controls and processes to manage data privacy and protect PII
System and Organization Controls (SOC):
• SOC 1 — an SSAE 18 standard and report on the effectiveness of internal controls at a service organization, relevant to their clients' internal control over financial reporting (ICFR)
• SOC 2 — evaluates internal controls, policies, and procedures that directly relate to the security of a system at a service organization
• SOC 3 — a report based on the Trust Services Criteria that can be freely distributed
Payment Card Industry Data Security Standard (PCI DSS) — a set of security standards designed to ensure that ALL companies that accept, process, store or transmit credit card information maintain a secure environment.
Federal Information Processing Standard (FIPS) 140-2 — a US and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information.
Google Cloud Compliance (continued)
• Personal Health Information Protection Act (PHIPA): an Ontario provincial law (Canada) that regulates patient Protected Health Information
• Health Insurance Portability and Accountability Act (HIPAA): US federal law that regulates patient Protected Health Information
• Cloud Security Alliance (CSA) STAR Certification: independent third-party assessment of a cloud provider's security posture
• Federal Risk and Authorization Management Program (FedRAMP): US government standardized approach to security authorizations for Cloud Service Offerings
• Criminal Justice Information Services (CJIS): any US state or local agency that wants to access the FBI's CJIS database is required to adhere to the CJIS Security Policy
• General Data Protection Regulation (GDPR): a European privacy law that imposes rules on companies, government agencies, non-profits, and other organizations that offer goods and services to people in the European Union (EU), or that collect and analyze data tied to EU residents
GCP — Privacy
Google Cloud Enterprise Privacy Commitments describe how we protect the privacy of Google Cloud Platform and Google Workspace customers:
1. You control your data. Customer data is your data, not Google's. We only process your data according to your agreement(s).
2. We never use your data for ads targeting. We do not process your customer data to create ads profiles or improve Google Ads products.
3. We are transparent about data collection and use. We're committed to transparency, compliance with regulations like the GDPR, and privacy best practices.
4. We never sell customer data or service data. We never sell customer data or service data to third parties.
5. Security and privacy are primary design criteria for all of our products. Prioritizing the privacy of our customers means protecting the data you trust us with. We build the strongest security technologies into our products.
Google provides resources on privacy regulations such as the LGPD, GDPR, CCPA, the Australian Privacy Act, My Number Act, and PIPEDA, among others.

GCP — Transparency
Google's Trust Principles:
1. You own your data, not Google
2. Google does not sell customer data to third parties
3. Google Cloud does not use customer data for advertising
4. All customer data is encrypted by default
5. We guard against insider access to your data
6. We never give any government entity "backdoor" access
7. Our privacy practices are audited against international standards
Cloud Armor
Cloud Armor is a DDoS protection and Web Application Firewall (WAF) service.

What is a DDoS (Distributed Denial of Service) attack?
A malicious attempt to disrupt normal traffic by flooding a website with large amounts of fake traffic. (Diagram: an attacker floods the victim with traffic; Cloud Armor sits at the edge of the GCP network to filter it.)

Cloud Armor features:
• IP-based and geo-based access control
• Support for hybrid and multicloud deployments
• Adaptive protection
• Detect and mitigate attacks against your Cloud Load Balancing workloads
• Pre-defined WAF rules to mitigate OWASP Top 10 risks
• Named IP Lists
• Rich rules language for web application firewall
• Visibility and monitoring

Cloud Armor has two tiers:
• Standard: Pay-As-You-Go (PAYG)
• Managed Protection Plus: starting at $3,000/month
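To make the rules language concrete, here is a hedged, minimal sketch of creating a Cloud Armor security policy with a single IP-deny rule using the Compute Engine Python client. The project ID and CIDR range are placeholders, and the exact field names should be verified against the google-cloud-compute documentation before use.

```python
# Hypothetical sketch: create a Cloud Armor security policy with one deny rule.
# "my-project" and the CIDR range below are placeholders, not real resources.
from google.cloud import compute_v1

client = compute_v1.SecurityPoliciesClient()

policy = compute_v1.SecurityPolicy(
    name="edge-waf-policy",
    description="Block a known-bad IP range at the edge",
    rules=[
        compute_v1.SecurityPolicyRule(
            priority=1000,
            action="deny(403)",  # return HTTP 403 to matching traffic
            match=compute_v1.SecurityPolicyRuleMatcher(
                versioned_expr="SRC_IPS_V1",
                config=compute_v1.SecurityPolicyRuleMatcherConfig(
                    src_ip_ranges=["203.0.113.0/24"]
                ),
            ),
        )
    ],
)

# Creating the policy is a long-running operation; attach the policy to a
# backend service afterwards so the rules apply to load-balanced workloads.
operation = client.insert(project="my-project", security_policy_resource=policy)
operation.result()
```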

Private Catalog
Private Catalog allows you to package Google Cloud resources into a service offering that can then be made available and discoverable in a catalog internally to your organization, to quickly deploy governed stacks and workloads.

Security Command Center
Security Command Center is a centralized security and risk management platform for your Google Cloud resources.
• Asset discovery and inventory: inventory and historical information about your Google Cloud resources
• Threat detection: audits your cloud resources for security vulnerabilities
• Threat prevention: fix security misconfigurations with single-click remediation
Google Cloud Data Loss Prevention
Cloud Data Loss Prevention (DLP) detects and protects sensitive information within GCP storage repositories.
What is Personally Identifiable Information (PII)? Any data that can be used to identify a specific individual: birthday, government ID, full name, email address, mailing address, etc.
What is Personal/Protected Health Information (PHI)? Any data that can be used to identify health information about a patient.
• Provides tools to classify, mask, tokenize, and transform sensitive data
• Support for structured and unstructured data
• Create dashboards and audit reports
• Automate tagging, remediation, or policy based on findings
• Connect DLP results into Security Command Center and Data Catalog, or export to your own Security Information and Event Management (SIEM) or governance tool
• Schedule inspection jobs directly in the console UI
• Over 120 built-in information types (infoTypes); infoTypes define what sensitive information to scan
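As a concrete illustration of infoTypes, here is a minimal sketch (assuming the google-cloud-dlp Python client and a placeholder project ID) of inspecting a string for two built-in infoTypes:

```python
# Minimal Cloud DLP inspection sketch; "my-project" is a placeholder project ID.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()

response = client.inspect_content(
    request={
        "parent": "projects/my-project",
        "inspect_config": {
            # Built-in infoTypes tell DLP what sensitive data to look for.
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            "include_quote": True,
        },
        "item": {"value": "Contact Jane at [email protected] or 555-0199."},
    }
)

for finding in response.result.findings:
    print(finding.info_type.name, finding.quote, finding.likelihood)
```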
BeyondCorp
The Zero Trust model operates on the principle of "trust no one, verify everything." Malicious actors being able to bypass conventional access controls demonstrates that traditional security measures are no longer sufficient.
BeyondCorp is Google's implementation of the zero trust model. By shifting access controls from the network perimeter to individual users, BeyondCorp enables secure work from virtually any location without the need for a traditional VPN.
BeyondCorp allows for:
• single sign-on
• access control policies
• access proxy
• user-based and device-based authentication
• authorization
The BeyondCorp principles:
• Access to services must not be determined by the network from which you connect
• Access to services is granted based on contextual factors from the user and their device
• Access to services must be authenticated, authorized, and encrypted
BeyondCorp
A Zero Trust model puts identity as the primary security perimeter to be protected. BeyondCorp itself is just a collection of identity, access, and security services used to meet Zero Trust model requirements: Cloud Identity, Access Context Manager, Cloud IAP, Cloud IAM, Endpoint Verification, and VPC Service Controls. (Diagram: user trust signals — identity and behavior such as IP, location, session age, and time — and device trust signals — identity and posture — feed a rules engine at Google's global front end, which enforces access to apps and data: web apps, virtual machines, SaaS applications, infrastructure, and APIs.)

Access Context Manager
Access Context Manager allows Google Cloud organization admins to define fine-grained, attribute-based access control for projects and resources in Google Cloud. Access Context Manager keeps mobile workforces utilizing Bring-Your-Own-Devices (BYOD) secure.
You create an access policy to determine what level of access is granted based on attributes such as:
• Device type
• Operating system
• IP address
• User identity
Example: set an access level to high for a specific subnet in a specific region.
VPC Service Controls
VPC Service Controls allows you to create a service perimeter. VPC Service Perimeters function like a firewall for GCP APIs, and access levels can be applied to them. Access policies are automatically created for you when you create an access level, create a service perimeter, or turn on IAP; they cannot be directly managed by the customer.

Cloud Identity-Aware Proxy (IAP)
Cloud Identity-Aware Proxy (IAP) lets you establish a central authorization layer for applications accessed by HTTPS, so you can use an application-level access control model instead of relying on network-level firewalls. You can define access policies centrally and apply them to all of your applications and resources. Use IAP when you want to enforce access control policies for applications and resources.
Identity-Aware Proxy (IAP) lets you manage who has access to services hosted on App Engine, Compute Engine, or an HTTPS Load Balancer. When IAP is turned on, you add members and their roles in the side panel.
BeyondCorp Enterprise
BeyondCorp Enterprise is a zero trust model platform. It is enabled through Chrome Browser Cloud Management: you can protect against threats such as malware and phishing for your Chrome users as they download and upload files. BeyondCorp Enterprise is built into the Chrome Browser with no agents required.
• Identity and context-aware access control: policies based on user identity, device health, and contextual factors
• Integrated threat and data protection: prevent data loss, stop common threats, real-time alerts and detailed reporting
• Support your environment (cloud, on-premises, or hybrid): access SaaS apps, web apps, and cloud resources wherever you work
• Easy adoption with an agentless approach: non-disruptive overlay to your existing architecture, no need to install additional agents
• Rely on Google Cloud's global infrastructure: scale, reliability, and security of Google's network, with 144 edge locations in over 200 countries and territories

SecOps
SecOps (Security Operations) is the practice of protecting your data and systems in the cloud by integrating security and operations to reduce risks and improve response to threats.
Key SecOps activities:
• Vulnerability Management: identifying and fixing security vulnerabilities (Google Cloud Security Command Center)
• Log Management: collecting and analyzing security logs for detecting threats (Google Cloud Logging)
• Incident Response: swift response to security incidents by expert teams
• Security Awareness Training: educating employees to minimize security risks and prevent incidents
Business benefits of SecOps:
• Reduced Risk of Data Breaches: fix vulnerabilities to lower data breach risks
• Increased Uptime: swift response minimizes downtime and keeps services running
• Improved Compliance: helps meet security regulations like GDPR (General Data Protection Regulation)
• Enhanced Employee Productivity: training reduces human error and promotes a secure work environment
Data Sovereignty and Data Residency
Data sovereignty ensures that data is subject to the laws of the country where it is stored, protecting individual rights, such as GDPR in the EU.
Data residency refers to the physical location of data storage, with some countries mandating that data be stored within their borders for compliance.
How Google Cloud helps control data location:
• Google Cloud offers options to select regions for data storage, ensuring compliance with local regulations (e.g., EU-based regions).
• Features like VPC Service Controls and Google Cloud Armor restrict data access and traffic location, ensuring compliance with data residency and sovereignty requirements.

Directory Service
What is a directory service? A directory service maps the names of network resources to their network addresses. A directory service is shared information infrastructure for locating, managing, administering, and organizing resources: volumes, folders, files, printers, users, groups, devices, telephone numbers, and other objects.
A directory service is a critical component of a network operating system. A directory server (name server) is a server which provides a directory service. Each resource on the network is considered an object by the directory server; information about a particular resource is stored as a collection of attributes associated with that resource or object.
Well-known directory services:
• Domain Name Service (DNS): the directory service for the internet
• Microsoft Entra ID: formerly Azure Active Directory
• Apache Directory Server
• Oracle Internet Directory (OID)
• OpenLDAP
• Cloud Identity
• JumpCloud
Cloud Identity
Cloud Identity is an Identity as a Service (IDaaS) offering that centrally manages users and groups.
• federate identities between Google Cloud, Active Directory, and Microsoft Entra ID
• manage access and compliance across all users in your domain
• create a Cloud Identity account for each of your users and groups
• then use Identity and Access Management (IAM) to manage access to Google Cloud resources for each Cloud Identity account

Cloud Identity — Versions
Cloud Identity comes in two versions: Free and Premium.
Device Management
• Free: Basic Mobile Management, device inventory, basic passcode enforcement, remote account wipe, Android, Apple® iOS®
• Premium: Advanced Mobile Management, advanced passcode enforcement, security policies, application management, network management, remote device wipe, reporting, application auditing, company-owned devices, mobile audit
Directory
• Free: basic directory management, organizational units and groups (unlimited), admin-managed groups, Groups for Business, Google Cloud Directory Sync, admin roles and privileges, Google Admin App for Android and iOS, Admin SDK/API
• Premium: user lifecycle management (no user cap), Secure LDAP
Security
• Free: user security management, self-service password recovery, 2-Step Verification (2SV) including security key management, 2SV enforcement controls with security key enforcement and management, password management and strength alerts
• Premium: first-party session management, Google security center
Reporting
• Free: Admin, Login, SAML, Groups, and Token audit logs; security reports; SAML audit log; app reports; account activity reports
• Premium: MDM rules, devices audit log, auto export of audit logs to BigQuery
Single sign-on (SSO) and automated provisioning
• Free: set up SSO using Google as an identity provider (IdP) to access a pre-integrated list of third-party SAML apps (unlimited); set up SSO using Google as an IdP to access custom SAML apps; set up SSO using a third-party IdP with Google as a service provider
• Premium: automated user provisioning, and more…
Service Level Agreements
• Premium has a 99.9% SLA
Active Directory
Microsoft introduced Active Directory Domain Services in Windows 2000 to give organizations the ability to manage multiple on-premises infrastructure components and systems using a single identity per user. (Hierarchy: a forest contains trees, trees contain domains and child domains, and domains contain organizational units (OUs).)

Active Directory Domain Services
Active Directory services consist of multiple directory services:
• Domain Services (AD DS): the foundation stone of every Windows domain network; stores information about members of the domain, including devices and users, verifies their credentials, and defines their access rights. The server running this service is called a domain controller.
• Active Directory Lightweight Directory Services (AD LDS): an implementation of the LDAP protocol for AD DS
• Active Directory Certificate Services (AD CS): establishes an on-premises public key infrastructure to create, validate, and revoke public key certificates for internal uses of an organization
• Active Directory Federation Services (AD FS): a single sign-on service so users may use several web-based services using only one set of credentials stored at a central location
• Active Directory Rights Management Services (AD RMS): server software for information rights management shipped with Windows Server; uses encryption and a form of selective functionality denial for limiting access to documents
Active Directory Terminology
• Domain: an area of a network organized by a single authentication database. An Active Directory domain is a logical grouping of AD objects on a network.
• Domain Controller (DC): a server that authenticates user identities and authorizes their access to resources.
• Domain Computer: a computer that is registered with a central authentication database; a domain computer is an AD Object.
• AD Object: the basic element of Active Directory, such as users, groups, printers, computers, and shared folders.
• Group Policy Object (GPO): a virtual collection of policy settings; it controls what AD Objects have access to.
• Organizational Units (OU): a subdivision within an Active Directory into which you can place users, groups, computers, and other organizational units.
• Directory Service: a directory service, such as Active Directory Domain Services (AD DS), provides the methods for storing directory data and making this data available to network users and administrators. A directory service runs on a Domain Controller.

Managed Service for Microsoft Active Directory
Managed Service for Microsoft Active Directory (AD) is Active Directory hosted on the Google Cloud Platform.
• Compatibility with AD-dependent apps: runs real Microsoft AD Domain Controllers; use standard Active Directory features, e.g. Group Policy and Remote Server Administration Tools (RSAT)
• Virtually maintenance-free: highly available, automatically patched, configured with secure defaults, and protected by appropriate network firewall rules
• Seamless multi-region deployment: simply expand the service to additional regions while continuing to use the same managed AD domain
• Hybrid identity support: connect your on-premises AD domain to Google Cloud, or deploy a standalone domain for your cloud-based workloads
Identity Providers (IdP)
An Identity Provider (IdP) is a system entity that creates, maintains, and manages identity information for principals and also provides authentication services to applications within a federation or distributed network. It is a trusted provider of your user identity that lets you authenticate in order to access other services. Identity providers could be Facebook, Amazon, Google, Twitter, GitHub, or LinkedIn.
Federated identity is a method of linking a user's identity across multiple separate identity management systems.
• OpenID: an open standard and decentralized authentication protocol, e.g. being able to log in to a different social media platform using a Google or Facebook account. OpenID is about providing who you are.
• OAuth 2.0: an industry-standard protocol for authorization. OAuth doesn't share password data but instead uses authorization tokens to prove an identity between consumers and service providers. OAuth is about granting access to functionality.
• SAML: Security Assertion Markup Language is an open standard for exchanging authentication and authorization between an identity provider and a service provider. An important use case for SAML is Single Sign-On via web browser.

Single Sign-On
Single sign-on (SSO) is an authentication scheme that allows a user to log in with a single ID and password to different systems and software. SSO allows IT departments to administer a single identity that can access many machines and cloud services.
Login with SSO is seamless: once a user is logged in to their primary directory (for example, Azure Active Directory), they can use software configured for SAML SSO without managing a separate set of credentials for it.
LDAP
Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. A common use of LDAP is to provide a central place to store usernames and passwords.
LDAP enables same-sign-on: same-sign-on allows users to use a single ID and password, but they have to enter it every time they want to log in.
Why use LDAP when SSO is more convenient?
• Most SSO systems use LDAP underneath.
• LDAP was not designed natively to work with web applications.
• Some systems only support integration with LDAP and not SSO.

Google Cloud Directory Sync
Google Cloud Directory Sync enables administrators to synchronize users, groups, and other data from an on-premises Active Directory/LDAP service to their Managed Service for Microsoft Active Directory within Google Cloud.
Service Level Agreements
What is a Service Level Agreement (SLA)?
An SLA is a formal commitment about the expected level of service between a customer and a provider. When a service level is not met, and the customer meets its obligations under the SLA, the customer is eligible to receive compensation, e.g. financial or service credits.
What is a Service Level Indicator (SLI)?
A metric/measurement that indicates what measure of performance a customer is receiving at a given time. An SLI metric could be uptime, performance, availability, throughput, latency, error rate, durability, or correctness.
What is a Service Level Objective (SLO)?
The objective that the provider has agreed to meet. SLOs are represented as a specific target percentage over a period of time, e.g. an availability SLO of 99.99% over a period of 3 months.
Target percentages:
• 99.95%
• 99.99%
• 99.999999999% (commonly called eleven nines)
• 99.99999999999% (thirteen nines)

GCP — Service Level Agreements
Compute Engine (Covered Service: Monthly Uptime)
• Instances in Multiple Zones: >= 99.99%
• A Single Instance: >= 99.5%
• Load balancing: >= 99.99%
Cloud Storage (Covered Service: Monthly Uptime)
• Standard storage class in a multi-region or dual-region location: >= 99.95%
• Standard storage class in a regional location; Nearline or Coldline storage class in a multi-region or dual-region location: >= 99.9%
• Nearline or Coldline storage class in a regional location; Durable Reduced Availability storage class in any location: >= 99.0%
Cloud SQL, Cloud Functions: Monthly Uptime Percentage to Customer of at least 99.95%
BigQuery, App Engine: Monthly Uptime Percentage to Customer of at least 99.99%
Cloud NAT: Monthly Uptime Percentage to Customer of at least 99.9%
AI Platform Training and Prediction: Monthly Uptime Percentage to Customer of at least 99.5%
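A quick worked example (my own illustration, not an official formula) shows what these monthly uptime percentages mean in practice as an allowed-downtime budget:

```python
# Convert a monthly uptime percentage into allowed downtime for a 30-day month.
def allowed_downtime_minutes(uptime_percent: float, days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - uptime_percent / 100)

for target in (99.5, 99.9, 99.95, 99.99):
    print(f"{target}% uptime allows {allowed_downtime_minutes(target):.1f} minutes of downtime")

# 99.95% (e.g. Cloud SQL) allows about 21.6 minutes per 30-day month,
# while 99.99% (e.g. load balancing) allows only about 4.3 minutes.
```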

GCP — Service Level Agreements (continued)
Cloud Bigtable (Covered Service: Monthly Uptime)
• Replicated Instance (2 or more clusters) with Multi-Cluster routing policy (3 or more Regions): >= 99.999%
• Replicated Instance with Multi-Cluster routing policy (fewer than 3 Regions): >= 99.99%
• Replicated Instance with Single-Cluster routing policy: >= 99.9%
• Zonal instance (single cluster): >= 99.9%
Cloud Spanner (Covered Service: Monthly Uptime)
• Multi-Regional Instance: >= 99.999%
• Regional Instance: >= 99.99%
Apigee (Covered Service: Monthly Uptime)
• Apigee Standard: >= 99%
• Apigee Enterprise: >= 99.99% for environments provisioned in 2 or more Regions (i.e., with the purchase of Additional Region / Distributed Network) with a dual-region, multi-regional, or global Cloud KMS encryption key, or >= 99.9% for all other environments
• Apigee Enterprise Plus: >= 99.99% for environments provisioned in 2 or more Regions with a dual-region, multi-regional, or global Cloud KMS encryption key, or >= 99.9% for all other environments

GCP Support Plans
All plans: unlimited access to support; Billing Support via case (email), phone, and chat.
• Basic Support: FREE; billing support.
• Standard Support: $29 per month + 3% net spend; Technical Support via case (email); P2: 4-hour response; 8/5 response for high-impact issues; English support only; Active Assist Recommender API.
• Enhanced Support: $500 per month + 3% net spend; Technical Support via case (email) and phone; P1: 1-hour response; 24/7 response for high-impact and critical issues; English, Japanese, Mandarin Chinese, and Korean; Third-Party Support; Cloud Support API; technical support escalation; access to purchase Technical Account Advisor Service (TAAS).
• Premium Support: $$$$ (Contact Sales); Technical Support via case (email) and phone; P1: 15-minute response; 24/7 response for high-impact and critical issues; English, Japanese, Mandarin Chinese, and Korean; Technical Account Manager (TAM); event management service; Operational Health Reviews; Customer Aware Support; New Product previews; training credits; access to purchase Mission Critical Services; access to purchase Assured Support.
Active Assist Recommender
Active Assist is a portfolio of intelligent tools and capabilities to help actively assist you in managing complexity in your cloud operations.
Helps with 3 key activities:
• making proactive improvements to your cloud with smart recommendations
• preventing mistakes from happening in the first place by giving you better analysis
• helping you figure out why something went wrong by using intuitive troubleshooting tools

Cloud Support API
Cloud Support API allows you to integrate Google Cloud's Customer Care within your organization's Customer Relationship Manager (CRM).
The API supports:
• Create and manage support cases.
• List, create, and download attachments for cases.
• List and create comments in cases.
The Cloud Support API is available to Customer Care customers with Enhanced or Premium Support.
Third-Party Technology Support
With Third-Party Technology Support, Google Cloud support will assist you with integrating non-Google services and open-source technologies that are running on or integrating with Google Cloud services.
There are 3 approaches to delivering Third-Party Technology Support:
• Collaborative support: Google Cloud partners with other companies to create a joint support experience, e.g. NetApp Cloud Volumes for Google Cloud, IBM Power for Google Cloud, F5 Networks BIG-IP as used with Anthos, Dell Technologies PowerScale for Google Cloud, DataStax Astra on Google Cloud, and Databricks
• Workload-centric support: Google Cloud has expertise in a variety of third-party technologies and can assist with the setup, configuration, and troubleshooting of those technologies
• Third-party support: Google Cloud provides commercially reasonable assistance with installation, configuration, and troubleshooting of third-party software, e.g. operating systems, databases, web servers, DevOps tools, and SQL Server
Third-Party Technology Support is available to Customer Care customers with Enhanced or Premium Support.

Technical Account Advisor Service
Technical Account Advisor Service (TAAS) provides both proactive guidance and reactive support to help you succeed with your Cloud journey.
TAAS delivers the following services:
• Guided onboarding to help you get started with Enhanced Support and set up your operations with Google Cloud.
• Best practices and additional support for your most critical cases, including proactive monitoring and guidance on case escalation.
• Monthly, quarterly, and yearly reviews to assess your operational health across Google Cloud and deliver recommendations for improving your usage of Enhanced Support.
• Recommended training paths and courses tailored to your organization's needs.
When you purchase TAAS, you pay a monthly fee, with a minimum 1-year contract. After the first year, your contract is month-to-month.
Premium Support — Assured Support
Assured Support enables you to secure your regulated workloads and accelerate your path to running compliant workloads on Google Cloud.
To help you meet your compliance requirements, Assured Support ensures that your workloads are handled by Google support personnel that possess certain attributes. The supported personnel attributes include geographical access location (United States only), background checks, and "US Person" status.
Regulated workloads:
• FedRAMP Moderate Technical Support Services
• US Regions and Support Technical Support Services
• IL4 Technical Support Services
• CJIS Technical Support Services
• FedRAMP High Technical Support Services

Premium Support — Mission Critical Services
Mission Critical Services assess and mitigate potential service disruptions for environments that are essential to an organization and cause significant impact to operations when disrupted. To prepare you for this service, Google Cloud analyzes your current operations and onboards you to Mission Critical Operations mode, a mode standardized by Google.
The onboarding process includes the following:
• Assessing key elements of your mission critical environment, including architecture, observability, measurement, and control.
• Delivering a gap analysis to help you prepare for mission critical operations.
• Bringing your organization into Mission Critical Operations mode to drive continuous improvement of your environment through proactive and preventative engagement.
After you've onboarded, you receive the following services:
• Drills, testing, and training for your mission critical environments
• Customer-centric incident reporting
• Proactive monitoring and case generation
• Priority 0 (P0) support case filing privileges with 5-minute response time
• War room incident management
• Impact prevention follow-ups
Premium Support — Customer Aware Support
Customer Aware Support is a service that provides you with a jump start to resolving technical issues and improving your Premium Support experience. While onboarding your organization to Premium Support, your TAM focuses on building Customer Aware Support.
Customer Care creates Customer Aware Support by learning about and maintaining information about your architecture, partners, and Google Cloud projects. This information ensures that our Technical Support Engineers can resolve your support cases promptly and efficiently.

Premium Support — Operational Health Reviews
Operational Health Reviews help you measure your progress and proactively address blockers to your goals with Google Cloud.
The reviews serve as a regular touchpoint with your TAM where you can discuss various topics related to your Customer Care experience, including:
• The efficiency of your cloud operations, including support trends.
• Analysis of trends in operational metrics.
• Incidents, case escalations, and outages.
• Tracking of open cases.
• Status reports for high-priority Cloud projects.
Premium Support — Event Management Service
Premium Support's Event Management Service is for planned peak events, such as a product launch or major sales event. With this service, Customer Care partners with your team to create a plan and provide guidance throughout the event.
With Event Management Service, your team is supported with the following tasks:
• Preparing your systems for key moments and heavy workloads.
• Running disaster tests to proactively resolve potential issues.
• Developing and implementing a faster path to resolution to reduce the impact of any issues that might occur.
After the event, your TAM works with you to review the outcomes and make recommendations for future events. To initiate the Event Management Service for an upcoming event, contact your TAM.

Premium Support — Training Credits
With Premium Support, you receive training credits for Google Cloud Qwiklabs that you can distribute to users in your organization. Your TAM identifies learning opportunities and indicates which training resources can be most beneficial to your organization. With this training, your developers have the resources to find answers quickly and test out ideas in safe environments.
For each 1-year contract with Premium Support, you receive 6,250 credits.

Premium Support — New Product Previews Premium Support — Technical Account Manager
As a Premium Support customer, you are assigned a named Technical Account Manager (TAM). Technical Account
As a Premium Support customer, you have access to Previews of new Google Cloud products. By Managers are trusted technical advisors that focus on operational rigor, platform health, and architectural stability
previewing a product, you have the opportunity to prepare your architecture for a new solution for your organization.
before it becomes more broadly available to the market.
Your Technical Account Manager supports and guides you in the following ways:
With your organization's goals in mind, your TAM analyzes your Google Cloud projects and usage • Assists you with onboarding to Premium Support.
to identify opportunities to test and use new products and solutions. When your TAM identifies • Assesses your cloud maturity and works with you to create an adoption roadmap
an opportunity, they introduce you to the product team and help you gain access to the Preview. and operating model.
As you test the product, your TAM also shares your feedback with the product team. • Advises on best practices for using Google Cloud.
• Delivers frequent Operational Health Reviews.
In addition to working with your TAM, you can request and manage access to Previews via the • Connects you with Google technical experts, such as Product Managers and Support
Cloud Console. In the Cloud Console, you can check the status of your requests and manage Engineers.
which users in your organization have access to Previews. • Works with you on support cases and case escalations. For high-priority cases, your
TAM analyzes the incident and identifies root causes.

By default, you receive 8 hours per week of foundational technical account


management services. If you require more assistance, you can purchase
additional TAM services
215 216
Resource Hierarchy
Resource Management covers how you should configure and grant access to cloud resources for your team, and the setup and organization of the account-level resources.
• Resource: service-level resources that are used to process your workloads.
• Domain: the primary identity of your organization; defines which users should be associated with your org; lets you universally administer policy for your users and devices; linked to either a Google Workspace or Cloud Identity account, and a Google Workspace or Cloud Identity account can only have one org.
• Organization: the root node of the Google Cloud hierarchy of resources; defines settings, permissions, and policies for all projects, folders, resources, and Cloud Billing accounts it parents; an Organization is associated with exactly one Domain; using an Organization, you can centrally manage your Google Cloud resources and your users' access with proactive and reactive management.
• Folders: logical grouping of projects and/or other folders; folders can be used to group resources that share common IAM policies.
• Projects: logical grouping of service-level resources; projects can represent teams, environments, organizational units, or business departments; the basis for enabling services, APIs, and IAM permissions; a service-level resource can only belong to a single project.
• Labels: categorize and filter your resources with key/value pairs; great for cost tracking at a granular level.
There are 3 suggested hierarchy architectures you can use:
• Environment-oriented
• Function-oriented
• Granular access-oriented

Environment-Oriented Hierarchy
• you have one organization that contains one folder per environment; simple to implement
• can pose challenges if you have to deploy services that are shared by multiple environments
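To see the hierarchy programmatically, here is a hedged sketch using the Resource Manager Python client to walk folders and projects; the organization ID is a placeholder, and field names should be checked against the google-cloud-resource-manager documentation.

```python
# Hypothetical sketch: list folders under an organization and projects under each folder.
from google.cloud import resourcemanager_v3

folders_client = resourcemanager_v3.FoldersClient()
projects_client = resourcemanager_v3.ProjectsClient()

# "123456789012" is a placeholder organization ID.
for folder in folders_client.list_folders(parent="organizations/123456789012"):
    print("Folder:", folder.display_name, folder.name)

    # Projects grouped under this folder (e.g. one per environment or team).
    for project in projects_client.list_projects(parent=folder.name):
        print("  Project:", project.project_id, dict(project.labels))
```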

Function-Oriented Hierarchy
• one organization that contains one folder per business function
• each business function folder can contain multiple environment folders
• example business functions are apps, management, and information technology
• more flexible compared to environment-oriented: gives you the same environment separation and allows you to deploy shared services
• a function-oriented hierarchy is more complex to manage than an environment-oriented one

Granular-Access Oriented Hierarchy
• one organization that contains one folder per business unit
• each business unit folder can contain one folder per business function
• each business function folder can contain one folder per environment
• separates access by business; the most flexible and extensible option
• you need to spend a greater effort to manage the structure, roles, and permissions
• network topology is more complex
Billing Account
A Cloud Billing Account is used to define who pays for a given set of Google Cloud resources and is connected to a Google Payments Profile.
A billing account includes one or more billing contacts defined on the Payments Profile. Billing can have sub-accounts for resellers, so you can bill resources to be paid by your customer.

Cloud Billing Account vs Payments Profile
Cloud Billing Account:
• Is a cloud-level resource managed in the Cloud Console.
• Tracks all of the costs (charges and usage credits) incurred by your Google Cloud usage.
• A Cloud Billing account can be linked to one or more projects; project usage is charged to the linked Cloud Billing account.
• Results in a single invoice per Cloud Billing account and operates in a single currency.
• Defines who pays for a given set of resources.
• Is connected to a Google Payments Profile, which includes a payment instrument, defining how you pay for your charges.
• Has billing-specific roles and permissions to control accessing and modifying billing-related functions (established by IAM roles).
Payments Profile:
• Is a Google-level resource managed at payments.google.com.
• Connects to ALL of your Google services (such as Google Ads, Google Cloud, and Fi phone service).
• Processes payments for ALL Google services (not just Google Cloud).
• Stores information like name, address, and tax ID (when legally required) of who is responsible for the profile.
• Stores your various payment instruments (credit cards, debit cards, bank accounts, and other payment methods you've used to buy through Google in the past).
• Functions as a document center, where you can view invoices, payment history, and so on.
• Controls who can view and receive invoices for your various Cloud Billing accounts and products.
Billing Account Types
There are 2 types of Cloud Billing accounts:
Self-serve (or Online) account
• Payment instrument is a credit or debit card or ACH direct debit, depending on availability in each country or region.
• Costs are charged automatically to the payment instrument connected to the Cloud Billing account.
• You can sign up for self-serve accounts online.
• The documents generated for self-serve accounts include statements, payment receipts, and tax invoices, and are accessible in the Cloud Console.
Invoiced (or Offline) account
• Payment instrument can be check or wire transfer.
• Invoices are sent by mail or electronically.
• Invoices are also accessible in the Cloud Console, as are payment receipts.
• You must be eligible for invoiced billing.

Payment Profile Types
There are 2 types of payments profiles:
Individual
• You're using your account for your own personal payments.
• If you register your payments profile as an individual, then only you can manage the profile. You won't be able to add or remove users, or change permissions on the profile.
Business
• You're paying on behalf of a business, organization, partnership, or educational institution.
• You use the Google payments center to pay for Play apps and games, and Google services like Google Ads, Google Cloud, and Fi phone service.
• A business profile allows you to add other users to the Google payments profile you manage, so that more than one person can access or manage a payments profile.
• All users added to a business profile can see the payment information on that profile.
Charging Cycle
For self-serve Cloud Billing accounts, your Google Cloud costs are charged automatically in one of two ways:
• Monthly billing: costs are charged on a regular monthly cycle.
• Threshold billing: costs are charged when your account has accrued a specific amount.
For self-serve Cloud Billing accounts, your charging cycle is automatically assigned when you create the account. You do not get to choose your charging cycle and you cannot change it.
For invoiced Cloud Billing accounts, you typically receive one invoice per month, and the amount of time you have to pay your invoice (your payment terms) is determined by the agreement you made with Google.

Cloud Billing IAM Roles
Cloud Billing lets you control which users have administrative and cost-viewing permissions for specified resources by setting Identity and Access Management (IAM) policies on the resources. To grant or limit access to Cloud Billing, you can set an IAM policy at the organization level, the Cloud Billing account level, and/or the project level.
Cloud Billing roles in IAM:
• Billing Account Creator: create new self-serve (online) billing accounts
• Billing Account Administrator: manage billing accounts (but not create them)
• Billing Account User: link projects to billing accounts
• Billing Account Viewer: view billing account cost information and transactions
• Billing Account Costs Manager: view and export cost information of billing accounts
• Project Billing Manager: link/unlink the project to/from a billing account
Billing Health Checks
Billing Health Checks are recommendations to avoid common billing issues.

Budget Alerts
Budget alerts use multiple alert thresholds to reduce spending surprises and unexpected cost overruns. You can set multiple thresholds that preemptively warn you when you approach your budget's limit.
You can narrow the budget scope to:
• Specific projects
• Specific resources
Notification options:
• Email alerts to billing admins and users
• Link Monitoring email notification channels to this budget
• Connect a Pub/Sub topic to this budget
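As a sketch of how a budget with thresholds might be created outside the console, the following uses the Cloud Billing Budgets Python client; the billing account ID, project number, amount, and exact field names are assumptions to verify against the google-cloud-billing-budgets documentation.

```python
# Hypothetical sketch: create a $500 budget scoped to one project with 50%/90%/100% alerts.
from google.cloud.billing import budgets_v1
from google.type import money_pb2

client = budgets_v1.BudgetServiceClient()

budget = budgets_v1.Budget(
    display_name="monthly-dev-budget",
    budget_filter=budgets_v1.Filter(projects=["projects/123456789012"]),  # placeholder project number
    amount=budgets_v1.BudgetAmount(
        specified_amount=money_pb2.Money(currency_code="USD", units=500)
    ),
    threshold_rules=[
        budgets_v1.ThresholdRule(threshold_percent=0.5),
        budgets_v1.ThresholdRule(threshold_percent=0.9),
        budgets_v1.ThresholdRule(threshold_percent=1.0),
    ],
)

client.create_budget(
    parent="billingAccounts/000000-AAAAAA-BBBBBB",  # placeholder billing account ID
    budget=budget,
)
```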
Billing
Within Google Cloud, under Billing, you can get granular details about your spend on GCP resources.
Built-in billing reports:
• Billing Reports: an interactive pricing explorer including graph visualization
• Cost Table Report: a tabular breakdown of cost to analyze details of invoices
• Cost Breakdown Report: an at-a-glance waterfall overview of monthly charges and credits
• Pricing Report: access SKU prices for Google's cloud services

Billing Reports
Use the billing report to view and analyze your Google Cloud usage costs using many selectable settings and filters. Configuring various views of the Cloud Billing report can help you answer questions like these:
• How is my current month's Google Cloud spending trending?
• What Google Cloud project cost the most last month?
• What Google Cloud service (for example, Compute Engine or Cloud Storage) cost me the most?
• What are my forecasted future costs based on historical trends?
• How much am I spending by region?
• What was the cost of resources with label X?
Your customized report views are saveable and shareable.

Cost Table Reports Cost Breakdown Report


Use the cost breakdown report for an at-a-glance waterfall
overview of your monthly costs and savings.

Use the cost table report to access and analyze the details of This report shows the following summarized view of monthly
your invoices and statements. charges and credits:

Because your generated invoice and statement PDFs only


• The combined costs of your monthly Google Cloud usage
contain simplified, summarized views of your costs, the cost
table report is available to provide invoice or statement cost at the on-demand rate, calculated using non-discounted
details, such as the following: list prices.
• Includes project-level cost details from your invoices and • Savings realized on your invoice due to negotiated pricing
statements, including your tax costs broken out by project. (if applicable to your Cloud Billing account).
• Includes additional details you might need, such as service • Savings earned on your invoice with usage-based credits,
IDs, SKU IDs, and project numbers. broken down by credit type (for example, committed use
• The report view is customizable and downloadable to CSV. discounts, sustained use discounts, free tier usage).
• Your invoice-level charges such as tax and adjustments (if
any) applied for that invoice month.
231 232
Pricing Report
Use the pricing table report to access SKU prices for Google's cloud services, including Google Cloud, Google Maps Platform, and Google Workspace, as of the date the report is viewed. This report shows the following pricing information:
• Displays SKU prices specific to the selected Cloud Billing account.
• If your Cloud Billing account has negotiated contract pricing, each SKU displays the list price, your contract price, and your effective discount.
• If a SKU is subject to tiered pricing, each pricing tier for a SKU is listed as a separate row.
• All prices are shown in the currency of the selected billing account.
• The report view is customizable and downloadable to CSV for offline analysis.

Pricing Overview
Google Cloud offers various pricing schemas that vary per service. Broadly there are 8 types of pricing:
• Free Trial — a risk-free trial period, with specific limitations
• Free Tier — services that have minimum monthly limits of free use
• On-Demand — the standard price, paid by hour, minute, second, or millisecond (varies per service)
• Committed Use Discounts — a lower price than on-demand for agreeing to a 1-year or 3-year contract
• Sustained Use Discounts — passive savings when using resources past a period of continuous use
• Preemptible VM instances — instances with deep savings, at the cost of being interruptible
• Flat-Rate Pricing — a stable cost for queries rather than paying the on-demand price (BigQuery only)
• Sole-Tenant Node Pricing — dedicated compute, e.g. single-tenant virtual machines
Free Trial
90-day, $300 Free Trial: new Google Cloud and Google Maps Platform users can take advantage of a 90-day trial period that includes $300 in free Cloud Billing credits to explore and evaluate Google Cloud and Google Maps Platform products and services. You can use these credits toward one or a combination of products.
Trial limitations:
• You can't add GPUs to your VM instances
• You can't request a quota increase
• You can't create VM instances that are based on Windows Server images
• You need to verify a credit card or other payment method to sign up
• At the end of the trial, to continue using Google Cloud you must upgrade to a paid Cloud Billing account; upgrading early will end your trial

Free Tier
All Google Cloud customers can use select Google Cloud products—like Compute Engine, Cloud Storage, and BigQuery—free of charge, within specified monthly usage limits. When you stay within the Free Tier limits, these resources are not charged against your Free Trial credits or to your Cloud Billing account's payment method after your trial ends.
Free Tier
Monthly free usage limits for selected services:
• App Engine: 28 hours per day of "F" instances; 9 hours per day of "B" instances; 1 GB of egress per day. The Free Tier is available only for the Standard Environment.
• AutoML Vision: 40 node hours for training and online prediction; 1 node hour for batch classification prediction; 15 node hours for Edge training
• AutoML Natural Language: 5,000 units of prediction per month
• AutoML Tables: 6 node hours for training and prediction
• AutoML Translation: 500,000 translated characters per month
• AutoML Video Intelligence: 40 node hours for training; 5 node hours for prediction
• BigQuery: 1 TB of querying per month; 10 GB of storage each month
• Artifact Registry: 0.5 GB storage per month
• Cloud Build: 120 build-minutes per day
• Cloud Functions: 2 million invocations per month (includes both background and HTTP invocations); 400,000 GB-seconds and 200,000 GHz-seconds of compute time; 5 GB network egress per month
• Cloud Run: 2 million requests per month; 360,000 GB-seconds of memory and 180,000 vCPU-seconds of compute time; 1 GB network egress from North America per month. The Free Tier is available only for Cloud Run.
• Cloud Vision: 1,000 units per month
• Firestore: 1 GB storage; 50,000 reads, 20,000 writes, and 20,000 deletes per day
• Cloud Shell: free access to Cloud Shell, including 5 GB of persistent disk storage
• Cloud Source Repositories: up to 5 users; 50 GB of storage and 50 GB egress per billing account
• Google Kubernetes Engine: no cluster management fee for one Autopilot or Zonal cluster. For clusters created in Autopilot mode, pods are billed per second for vCPU, memory, and disk resource requests. For clusters created in Standard mode, each user node is charged at standard Compute Engine pricing.
• Cloud Logging and Cloud Monitoring: free monthly logging allotment; free monthly metrics allotment
• Cloud Natural Language API: 5,000 units per month
• Cloud Storage: 5 GB-months of regional storage (US regions only); 5,000 Class A operations per month; 50,000 Class B operations per month; 1 GB network egress from North America to all region destinations (excluding China and Australia) per month. The Cloud Storage Free Tier is only available in us-east1, us-west1, and us-central1; usage calculations are combined across those regions.
Free Tier (continued)
• Compute Engine: 1 non-preemptible f1-micro VM instance per month within us-west1, us-central1, or us-east1; 30 GB-months HDD; 5 GB-months of snapshot storage in us-west1, us-central1, us-east1, asia-east1, and europe-west1; 1 GB network egress from North America to all region destinations (excluding China and Australia) per month. Your Free Tier f1-micro instance limit is by time, not by instance: each month, eligible use of all of your f1-micro instances is free until you have used a number of hours equal to the total hours in the current month, and usage calculations are combined across the supported regions. The Free Tier does not include external IP addresses. Compute Engine offers discounts for sustained use of virtual machines, but your Free Tier use doesn't factor into sustained use. GPUs and TPUs are not included in the Free Tier offer; you are always charged for GPUs and TPUs that you add to VM instances.
• Google Maps Platform: for more information, see the Pricing page.
• Pub/Sub: 10 GB of messages per month
• Speech-to-Text: 60 minutes per month
• Video Intelligence API: 1,000 units per month
• Workflows: 5,000 internal steps per month; 2,000 external HTTP calls per month

On-Demand
On-demand pricing is when you pay for a Google Cloud resource based on a consumption-based model. A consumption-based model means you only pay for what you use, based on a consumption metric:
• By time: hourly, minutes, seconds, or milliseconds
• Can be multiplied by configuration variables, e.g. vCPUs and memory
• By API calls, e.g. $1 every 1,000 transactions
On-demand is ideal for:
• low-cost and flexible usage, paying only per hour
• short-term, spiky, unpredictable workloads
• workloads that cannot be interrupted
• first-time apps
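A tiny illustration of the consumption-metric idea (the rates below are invented placeholders, not real GCP prices):

```python
# On-demand cost = usage time multiplied by per-unit rates for the chosen configuration.
vcpus, memory_gb, hours = 4, 16, 10

vcpu_rate_per_hour = 0.03   # hypothetical $/vCPU-hour
mem_rate_per_hour = 0.004   # hypothetical $/GB-hour

cost = hours * (vcpus * vcpu_rate_per_hour + memory_gb * mem_rate_per_hour)
print(f"${cost:.2f}")  # 10 * (0.12 + 0.064) = $1.84 for this short-lived workload
```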
Committed Use Discounts (CUD)
Committed Use Discounts (CUD) let you commit to a contract for deeply discounted virtual machines on Google Compute Engine.
• simple and flexible, and require no upfront costs
• ideal for workloads with predictable resource needs
• you purchase compute resources (vCPUs, memory, GPUs, and local SSDs)
• discounts apply to the aggregate number of vCPUs, memory, GPUs, and local SSDs within a region, and are not affected by changes to your instance's machine setup
• you commit for payment terms of 1 year or 3 years
• purchase a committed use contract for a single project, or purchase multiple contracts shared across many projects by enabling Shared Discounts
• you are billed monthly for the resources you purchased for the duration of the term, whether or not you use the services
Discount levels:
• 57% — most machine types and GPUs
• 70% — memory-optimized machine types
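A simple illustration (with a made-up on-demand price) of how the 57% discount trades off against the commitment:

```python
# Compare a hypothetical $200/month on-demand VM with a 57% committed use discount.
on_demand_monthly = 200.00
cud_monthly = on_demand_monthly * (1 - 0.57)   # most machine types and GPUs

print(f"On-demand: ${on_demand_monthly:.2f}/month, committed: ${cud_monthly:.2f}/month")
# The committed amount ($86/month here) is billed every month of the 1- or 3-year term,
# whether or not the VM is used, so CUDs only pay off for predictable, steady workloads.
```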
Sustained Use Discounts (SUD)
Sustained use discounts are automatic discounts for running specific Compute Engine resources for a significant portion of the billing month.
Sustained use discounts apply to the following resources:
• The vCPUs and memory for general-purpose custom and predefined machine types, compute-optimized machine types, memory-optimized machine types, and sole-tenant nodes (the 10% sole-tenancy premium is discounted even if the vCPUs and memory in those nodes are covered by committed use discounts)
• GPU devices
Sustained use discounts are applied on incremental use after you reach certain usage thresholds:
• you pay only for the number of minutes that you use an instance, and Compute Engine automatically gives you the best price
• there's no reason to run an instance for longer than you need it
• they automatically apply to VMs created by both Google Kubernetes Engine and Compute Engine
• they do not apply to VMs created using the App Engine flexible environment or Dataflow, nor to E2 and A2 machine types
Sustained Use Discounts (SUD) — discount tiers
Sustained use discounts of up to 30% apply to:
• General-purpose N1 predefined and custom machine types
• memory-optimized machine types
• shared-core machine types
• sole-tenant nodes
Usage level (% of month): % at which incremental usage is charged
• 0%–25%: 100% of base rate
• 25%–50%: 80% of base rate
• 50%–75%: 60% of base rate
• 75%–100%: 40% of base rate

Sustained use discounts of up to 20% apply to:
• General-purpose N2 and N2D predefined and custom machine types
• Compute-optimized machine types
Usage level (% of month): % at which incremental usage is charged
• 0%–25%: 100% of base rate
• 25%–50%: 86.78% of base rate
• 50%–75%: 73.3% of base rate
• 75%–100%: 60% of base rate
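A worked example of the tier tables above (the $100 base figure is just a placeholder): a VM that runs the full month is billed for each quarter of the month at a progressively lower fraction of the base rate, which averages out to the advertised discount.

```python
# Each 25% block of the month is charged at a lower fraction of the base rate.
tiers_30 = [1.00, 0.80, 0.60, 0.40]        # N1, memory-optimized, shared-core, sole-tenant
tiers_20 = [1.00, 0.8678, 0.733, 0.60]     # N2/N2D and compute-optimized

base_rate_full_month = 100.0               # hypothetical full-month cost at list price

effective_30 = sum(t * base_rate_full_month / 4 for t in tiers_30)
effective_20 = sum(t * base_rate_full_month / 4 for t in tiers_20)

print(effective_30)            # 70.0 -> an automatic 30% sustained use discount
print(round(effective_20, 1))  # ~80.0 -> roughly a 20% discount
```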
Flat-Rate Pricing
BigQuery offers flat-rate pricing for high-volume or enterprise customers who prefer a stable monthly cost for queries rather than paying the on-demand price per GB of data processed.
When you enroll in flat-rate pricing, you purchase dedicated query processing capacity, measured in BigQuery slots. Your queries consume this capacity, and you are not billed for bytes processed. If your capacity demands exceed your committed capacity, BigQuery will queue up slots, and you will not be charged additional fees. To enable flat-rate pricing, use BigQuery Reservations.

Preemptible VM Instances
A Preemptible Virtual Machine (pVM) is an instance running at a lower price than normal instances, but it can be turned off at any time (preempted) in favor of customers who will pay the normal price.
GCP has idle virtual machines and offers discounts to ensure they are in use, similar to:
• a hotel that offers rooms at a discount to avoid vacant rooms
• an airline that offers seats at a discount to fill vacant seats
Preemptible VMs are good for:
• apps that are fault tolerant
• workloads that are not time or availability sensitive
• workloads that can resume or are okay restarting
• commonly used for batch and scientific processing
Preemptible VM conditions:
• Compute Engine might stop preemptible instances at any time due to system events
• the probability of a VM being stopped is low, but varies by time of day and region
• Compute Engine always stops preemptible instances after they run for 24 hours
• pVMs are a finite resource and might not always be available
• you cannot live-migrate from a pVM to a regular instance
• not covered by the Compute Engine SLA
Sole-Tenant Node Pricing
A sole-tenant node (single-tenant VM) is a physical Compute Engine server that is dedicated to hosting only your project's VM instances.
When you create sole-tenant nodes, you are billed for all of the vCPU and memory resources on the sole-tenant nodes, plus a sole-tenancy premium, which is 10% of the cost of all of the underlying vCPU and memory resources. Sustained use discounts apply to this premium, but committed use discounts do not.
After you create the node, you can place VMs on that node, and these VMs run for no additional cost.
vCPUs and GB of memory are charged for a minimum of 1 minute; after 1 minute of use, sole-tenant nodes are billed in 1-second increments.
The price of a node type depends on the following:
• Number of vCPUs of the node type
• GBs of memory of the node type
• Region where you create the node

Google Pricing Calculator
The Google Pricing Calculator is a free web-based cost-calculating tool used to roughly estimate the cost of various GCP resources. You do not need a GCP account to use this tool, and you can create a shareable link or email the estimate to your organization or key stakeholders.
Dataproc
What is Hadoop? Hadoop is an open-source framework for distributed processing of large data sets. Hadoop allows you to distribute:
• large datasets across many servers, e.g. HDFS
• computing queries across many servers, e.g. MapReduce
• and to run various open-source big-data, distributed projects as components
Dataproc is a fully managed and highly scalable service for running Apache Spark, Apache Flink, Presto, and 30+ open source tools and frameworks. Dataproc is fully managed Hadoop as a Service. Use Dataproc for data lake modernization, ETL, and secure data science, at planet scale, fully integrated with Google Cloud, at a fraction of the cost.

Dataflow
Dataflow is a unified stream and batch data processing service that's serverless, fast, and cost-effective.
• Stream analytics — ingest, process, and analyze fluctuating volumes of real-time data for real-time business insights
• Real-time AI — stream events to Google Cloud's Vertex AI and TensorFlow Extended (TFX); ML use cases include predictive analytics, fraud detection, real-time personalization, and anomaly detection; supported with CI/CD for ML through Kubeflow pipelines
• IoT streaming — sensor and log data processing
Dataflow features:
• Dataflow SQL — use your SQL skills to develop streaming Dataflow pipelines right from the BigQuery web UI
• Flexible Resource Scheduling (FlexRS) — advanced scheduling techniques to reduce batch processing costs
• Dataflow templates — easily share your pipelines across your organization and team
• Vertex AI Notebook integration
• Private IPs — disable public IPs and operate within the GCP network for added security
• Horizontal scaling — automatically scales
• Apache Beam — integrates with Apache Beam

What is Apache Beam?
An open source, unified model for defining both batch and streaming data-parallel processing pipelines.

Dataflow Prime — a serverless, no-ops, auto-tuning architecture:
• Vertical Autoscaling — you don't have to spend days determining the optimum configuration of resources for your pipeline
• Right Fitting — custom resource configuration for each stage of the data pipeline, reducing waste
• New diagnostics tools
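To make the Apache Beam model concrete, here is a minimal batch pipeline sketch. It runs locally with the default runner; pointing it at Dataflow would only require switching to the DataflowRunner and supplying GCP pipeline options (project, region, staging bucket).

```python
# Minimal Apache Beam pipeline: count occurrences of each event type.
import apache_beam as beam

with beam.Pipeline() as pipeline:  # DirectRunner locally; DataflowRunner on Google Cloud
    (
        pipeline
        | "Create events" >> beam.Create(["checkout", "checkout", "refund"])
        | "Pair with 1" >> beam.Map(lambda event: (event, 1))
        | "Count per key" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```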
