Slides 5
Beginner
• I have no experience with GCP or cloud.
• I have never written code or held a tech role.
20 hours study time

Experienced
• I have practical experience working with GCP.
• I have equivalent experience in other CSPs e.g. Azure, AWS.
• I have a strong background in technology.
5 hours study time

Google Cloud certifications: Cloud Digital Leader, Cloud Engineer, Cloud Architect, Cloud Developer, Data Engineer, Collaboration Engineer, Cloud Security Engineer, Machine Learning Engineer, Cloud DevOps Engineer, Cloud Network Engineer, Cloud Database Engineer

12 hours (average study time)
• 50% lecture and labs
• 50% practice exams
Recommended to study 1-2 hours a day for 14 days.
What does it take to pass the exam?
It is very hard to pass the GCP-CDL without practice exams!
Each domain has its own weighting, which determines how many questions from that domain will show up.

Where do you take the exam?
A "proctor" is a supervisor, a person who monitors students during an examination.

Cheat sheets, Practice Exams and Flash cards: www.exampro.co/gcp-cdl
Exam Guide – Response Types

Exam Guide – Duration
Exam Guide – Valid Until

Real Talk About Certifications and Goals
Valid for 36 months (3 years before recertification).

Cloud Certifications expect you to already have a foundation of technical skills in:
• Programming and Scripting, SQL
• IT Networking
• Linux and Windows servers
• Project Management
• Developer Tools
• Application Development Skills
• CompSci Algorithms
• and more…

You will need to add 250-500 hours of work to achieve full Developer Knowledge. Certifications assume you have obtained these fundamental skills elsewhere.

To fill technical gaps, leverage freeCodeCamp's large catalogue of general technical content.

GCP itself does not care about GCP Certifications when hiring for their own technical roles. Certifications serve as a structured way of learning with a goal post.
What is Cloud Computing?

cloud com·put·ing
noun
the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer.

On-Premises
• You own the servers
• You hire the IT people
• You pay or rent the real estate
• You take all the risk

Cloud Providers
• Someone else owns the servers
• Someone else hires the IT people
• Someone else pays or rents the real estate
• You are responsible for configuring cloud services and code; someone else takes care of the rest.

The Evolution of Cloud Hosting

Dedicated Server
One physical machine dedicated to a single business.
Runs a single web app/site.
Very Expensive, High Maintenance, High Security*

Virtual Private Server
One physical machine dedicated to a single business.
The physical machine is virtualized into sub-machines.
Runs multiple web apps/sites.

Shared Hosting
One physical machine shared by hundreds of businesses.
Relies on most tenants under-utilizing their resources.
Very Cheap, Very Limited.

Cloud Hosting
Multiple physical machines act as one system.
The system is abstracted into multiple cloud services.
Flexible, Scalable, Secure, Cost-Effective, High Configurability
Google is an American multinational technology corporation headquartered in Mountain View, California. Google was founded in 1998, and its claim to fame was the Google Search Engine.

The name of the Google search engine was a play on the word "googol". A googol is a very large number, precisely 10^100 (a 1 followed by 100 zeros).

A Cloud Service Provider (CSP) is a company that provides multiple Cloud Services, and those Cloud Services can be chained together to create cloud architectures.
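Since a googol is just arithmetic, it can be checked with Python's arbitrary-precision integers (a throwaway illustration, not part of any GCP tooling):

```python
# A googol is 10^100: a 1 followed by 100 zeros.
googol = 10 ** 100

# Python ints are arbitrary-precision, so this value is exact.
print(googol)
print(len(str(googol)))  # 101 digits: the leading 1 plus 100 zeros
```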
What is Google Cloud Platform?
Google calls their cloud provider service offering Google Cloud Platform, commonly referred to as GCP.

What is Google Workspace?
Google Workspace is a bundled offering of SaaS products for team communication and collaboration for an organization, formerly known as G Suite.

Gmail: A cloud-based email client
Google Calendar: A cloud-based team calendar
Google Drive: Cloud storage for documents and files
Google Sheets: Real-time collaborative spreadsheets
Google Slides: Real-time collaborative presentations
Cost-effective: You pay for what you consume, no up-front cost. On-demand pricing or Pay-as-you-go (PAYG), with thousands of customers sharing the cost of the resources.
Global: Launch workloads anywhere in the world. Just choose a region.
Secure: The cloud provider takes care of physical security. Cloud services can be secure by default, or you have the ability to configure access down to a granular level.
Reliable: Data backup, disaster recovery, data replication, and fault tolerance.
Scalable: Increase or decrease resources and services based on demand.
Elastic: Automate scaling during spikes and drops in demand.
Current: The underlying hardware and managed software is patched, upgraded, and replaced by the cloud provider without interruption to you.

A cloud provider can have hundreds of cloud services that are grouped into various types of services. The four most common types of cloud services for Infrastructure as a Service (IaaS) would be:

Compute: Imagine having a virtual computer that can run applications, programs, and code.
Networking: Imagine having a virtual network where you can define internet connections or network isolations.
Storage: Imagine having a virtual hard drive that can store files.
Databases: Imagine a virtual database for storing reporting data, or a database for a general-purpose web application.

GCP has over 60 cloud services. The term "Cloud Computing" can be used to refer to all categories, even though it has "compute" in the name.
Types of Cloud Computing

IaaS: Infrastructure as a Service

PaaS: Platform as a Service (For Developers)
Focus on the deployment and management of your apps.
Don't worry about provisioning, configuring, or understanding the hardware or OS.

SaaS: Software as a Service (For Customers)
A product that is run and managed by the service provider.
Don't worry about how the service is maintained. It just works and remains available.

Google's Shared Responsibility Model

The Shared Responsibility Model is a simple visualization that helps determine what the customer is responsible for and what Google is responsible for related to GCP.

The customer is responsible for the data and the configuration of access controls that reside in GCP, for the configuration of cloud services, and for granting access to users via permissions. Google is generally responsible for the underlying infrastructure.

Customer-side layers in the model include:
• Content
• Access Policies
• Usage
• Deployment
• Web application security
• Identity
• Operations
• Access and authentication
• Network security
• Guest OS, data & content
• Audit logging
• Network
Public Cloud
Everything is built on the Cloud Provider. Also known as Cloud-Native.
Your responsibility: Implementation, Configuration, Training.
Used by: Startups, SaaS offerings, new projects and companies.

Private Cloud
Everything is built on the company's data centers. Also known as On-Premise. The cloud could be OpenStack.
Your responsibility: Implementation, Configuration, Training, Physical Security, Hardware, IT Personnel, Maintenance.
Used by: Public Sector e.g. Government, super sensitive data e.g. Hospitals, large enterprises with heavy regulation.

Cross-Cloud
Using multiple Cloud Providers. Aka multi-cloud, "hybrid-cloud".
Used by: Banks, FinTech, Investment Management, large Professional Service providers, legacy on-premise workloads e.g. Insurance Companies.

75% Savings
Horizontal Scaling
Scaling Out: add more servers of the same size.
Scaling In: remove servers of the same size.
Vertical Scaling is generally hard for traditional architecture, so you'll usually only see horizontal scaling described with Elasticity.

A common example of failover is having a copy (secondary) of your database where all ongoing changes are synced. The secondary system is not in use until a failover occurs and it becomes the primary database.
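The scale-out and scale-in vocabulary above can be sketched in plain Python. This is a toy model (the `Fleet` class and its method names are illustrative, not any real GCP API): a fleet is just a list of identically sized servers that grows and shrinks with demand.

```python
# Toy model of horizontal scaling: a fleet of identical servers.
# Names (Fleet, scale_out, scale_in) are illustrative only.

class Fleet:
    def __init__(self, size=1):
        self.servers = [f"server-{i}" for i in range(size)]

    def scale_out(self, n):
        """Scaling Out: add n more servers of the same size."""
        start = len(self.servers)
        self.servers += [f"server-{i}" for i in range(start, start + n)]

    def scale_in(self, n):
        """Scaling In: remove n servers of the same size."""
        del self.servers[-n:]

fleet = Fleet(size=2)
fleet.scale_out(3)          # demand spike
print(len(fleet.servers))   # 5
fleet.scale_in(2)           # demand drops
print(len(fleet.servers))   # 3
```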
High Durability

The Evolution of Computing
Virtual Machines
• You can run multiple Virtual Machines on one machine.
• A Hypervisor is the software layer that lets you run the VMs.
• A physical server is shared by multiple customers.
• You pay for only a fraction of the server.
• You'll overpay for an underutilized Virtual Machine.
• You are limited by your Guest Operating System.
• Multiple apps on a single Virtual Machine can result in conflicts in resource sharing.
• Easy to export or import images for migration.
• Easy to vertically or horizontally scale.

Containers
• Virtual Machines (VMs) can host container runtimes, such as Docker, which manage and run containers.
• The Docker Daemon is the software layer that manages containers.
• Containers maximize resource utilization and are more cost-effective.
• Containers share the host OS kernel, making them more efficient than running multiple VMs with separate OS instances.
• Containers provide process and resource isolation, allowing multiple applications to run side by side without conflicts.
Global Infrastructure
What is global infrastructure?
Global infrastructure refers to the global presence of datacenters, networking and cloud resources available to the customer.
• 40 Regions
• 121 Zones
• 187 Network Edge Locations
• 200+ Countries

The Evolution of Computing
Dedicated → VMs → Containers → Functions

Functions
• Managed VMs running managed containers.
• Known as Serverless Compute.
• You upload a piece of code and choose the amount of memory and duration.
• You are only responsible for code and data, nothing else.
• Very cost-effective: you only pay for the time code is running; VMs only run when there is code to be executed.
• Cold Starts are a side-effect of this setup.
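The "you upload a piece of code" workflow boils down to writing a single handler function. The sketch below mimics the shape of an HTTP-triggered serverless function in Python; the `handle` name and dict-based request are hypothetical stand-ins (a real Cloud Function would use Google's functions-framework and an HTTP request object).

```python
# Minimal serverless-style handler: the platform provisions everything;
# you supply only this function. (Illustrative shape, not a real GCP API.)

def handle(request: dict) -> dict:
    """Invoked per request; billed only while this code runs."""
    name = request.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}

# Locally we can invoke it directly; in the cloud, the platform does this.
print(handle({"name": "GCP"}))  # {'status': 200, 'body': 'Hello, GCP!'}
```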
Edge networking is the practice of having compute and data storage resources as close as possible to the end user in order to deliver the lowest latency and to save bandwidth.

A Point of Presence (PoP) is an intermediate location between a GCP Region and the end user. This location could be a third-party datacenter or a collection of hardware.

A Zone is a physical location made up of one or more datacenters.

A datacenter is a secured building that contains hundreds of thousands of computers.
Global Infrastructure – Google Cloud for Government
Google Cloud has an alternate offering to GovCloud where FedRAMP workloads are authorized in GCP's usual regional datacenters. This scheme mitigates the disadvantages of a GovCloud offering.
Basic Network Infrastructure Terminology

Global Infrastructure – Latency
Data Collection → Data Storage → Data Processing → Data Analysis → Data Sharing → Data Monetization
(Data Security and Governance spans every stage.)
Cloud Shell
Cloud Shell is a free online environment, with:
• command-line access for managing your infrastructure
• an online code editor for cloud development

Google Cloud – Projects
A Project in Google Cloud is a logical grouping of resources. A cloud resource must belong to a project.

A project is made up of:
• settings
• permissions
• other metadata

A project can't access another project's resources unless you use Shared VPC or VPC Network Peering.

Resources within a single project can work together easily, for example by communicating through an internal network, subject to the regions-and-zones rules.

Each Google Cloud project has the following:
• A project name, which you provide.
• A project ID, which you can provide or Google Cloud can provide for you.
• A project number, which Google Cloud provides.
Google Cloud Adoption Framework

The Google Cloud Adoption Framework (GCAF) is a whitepaper that:
• determines an organization's readiness to adopt Google Cloud
• gives steps to fill in the knowledge gaps
• helps develop new competencies

What is a whitepaper?
A report or guide that informs readers concisely about a complex issue. It is intended to help readers understand an issue, solve a problem, or make a decision. Whitepapers are generally PDFs but can be in HTML format as well.

The GCAF is composed of:
• 4 themes — Learn, Lead, Scale, Secure
• 3 maturity phases — Tactical, Strategic, Transformational
• Cloud Maturity Scale — Matrix of Themes and Phases
• Epics — Workstreams to scope and structure cloud adoption
• Programs — Logical grouping of Epics

GCAF — Themes

Learn: The quality and scale of the learning programs you have in place to upskill your technical teams, and your ability to augment your IT staff with experienced partners.
• Who is engaged?
• How widespread is that engagement?
• How concerted is the effort?
• How effective are the results?

Lead: The extent to which IT teams are supported by a mandate from leadership to migrate to cloud, and the degree to which the teams themselves are cross-functional, collaborative, and self-motivated.
• How are the teams structured?
• Have they got executive sponsorship?
• How are cloud projects budgeted, governed, assessed?

Scale: The extent to which you use cloud-native services that reduce operational overhead and automate manual processes and policies.
• How are cloud-based services provisioned?
• How is capacity for workloads allocated?
• How are application updates managed?

Secure: The capability to protect your services from unauthorized and inappropriate access with a multilayered, identity-centric security model. Dependent also on the advanced maturity of the other three themes.
• What controls are in place?
• What technologies are used?
• What strategies govern the whole?
Strategic: (Mid Term) A broader vision governs individual workloads, which are designed and developed with an eye to future needs and scale. The organization has begun to embrace change, and people and processes are now involved in the adoption strategy. IT teams are both efficient and effective, increasing the value of harnessing the cloud for your business operations.

Transformational: (Long Term) Cloud operations are functioning smoothly, and the focus is on integrating the data and insights working in the cloud. Existing data is transparently shared. New data is collected and analyzed. Predictive and prescriptive analytics via Machine Learning (ML) is used. People and processes are being transformed, which further supports technological changes. IT is no longer a cost center, but has become instead a partner to the business.

Cloud Maturity Scale (Themes x Phases):

Tactical (short-term)
• Learn: Self-taught; short-term 3rd party reliance
• Lead: Teams by function, heroic project manager
• Scale: Change is slow and risky; ops heavy; manual review
• Secure: Fear of public internet; trust in private network

Strategic (mid-term)
• Learn: Organized training; 3rd party assisted
• Lead: New cross-functional cloud team
• Scale: Templates ensure good governance without manual review
• Secure: Central identity; hybrid network

Transformational (long-term)
• Learn: Peer learning and sharing; 3rd party staff augmentation
• Lead: Cross-functional feature teams; greater autonomy
• Scale: All change is constant, low risk and quickly fixed
• Secure: Trust only the right people, devices and services
GCAF — Epics

When you've determined where your organization is in the adoption process using the Cloud Maturity Scale, you then need to define Epics.

Epics are workstreams to scope and structure cloud adoption:
• epics are defined so that they do not overlap
• they are aligned to manageable groups of stakeholders
• they can be further broken down into individual user stories

GCAF — Programs

A program is a logical grouping of epics that correlate to themes, allowing you to focus specific adoption efforts.

Programs from the GCAF diagram include: Training Program, Upskills, External Experience, Team-work, Communication, Change Management, Sponsorship, People Operations, Identity & Access, Cost Control, Data Mgmt., Networking, Tech Architecture, Cloud Operation Model, Secure Account Setup, Incident Management, Resource Management, Infra as Code, CI/CD, and Instrumentation, spanning People, Behaviours, Process, and Technology across the Learn, Lead, Scale, and Secure themes.

If you are limited in time and resources, focus on the epics in the coloured segments, since these align with Learn, Lead, Scale, and Secure.
Compute

Compute Engine (Virtual Machines)
Create and deploy scalable, high-performance VMs.

App Engine (Platform as a Service)
Build and deploy apps on a fully managed, highly scalable platform without having to manage the underlying infrastructure.

Google Kubernetes Engine (GKE)
Reliably, efficiently, and securely deploy and scale containerized applications on Kubernetes.
The advantage of Kubernetes over Docker is the ability to run containers distributed across multiple VMs.
A unique component of Kubernetes is the Pod. A pod is a group of one or more containers with shared storage, network resources and other shared settings.
Kubernetes is ideal for micro-service architectures where a company has tens to hundreds of services they need to manage.

Bare Metal Solution
Provides hardware to run specialized workloads with low latency on Google Cloud.

Cloud GPUs
Add GPUs to your workloads for machine learning, scientific computing, and 3D visualization.

Sole-tenant nodes (Dedicated Virtual Machines)
Help meet compliance, licensing, and management needs by keeping your instances physically separated with dedicated hardware.

App Engine

App Engine is a Platform as a Service (PaaS) for your application. Quickly deploy and scale web applications without worrying about the underlying infrastructure. Think of it like the Heroku of GCP.

Use your favourite programming language:
• Node.js, Java, Ruby, C#, Go, Python, or PHP
• Bring-Your-Own-Language-Runtime via a custom Docker container

Firestore (NoSQL Document database)
A NoSQL document database designed for mobile and web applications. It stores data in semi-structured documents within collections, making it easy to manage and query data.

Firestore Realtime Database
Extends Firestore's capabilities by providing real-time data synchronization. It's perfect for applications that require live data updates and collaboration features.

Memorystore (In-Memory)
A managed in-memory data store service that provides exceptionally high-performance caching. It's designed to accelerate data retrieval and reduce latency.

Database Migration Service (DMS)
A serverless service that facilitates the seamless migration of databases to Google Cloud SQL with minimal downtime. It simplifies the migration process for businesses.
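The pod concept described earlier ("a group of one or more containers with shared storage and network") is usually declared as a manifest. The sketch below builds a minimal pod spec as a Python dict and serializes it; the container image names are hypothetical, and real manifests are typically written in YAML and applied with kubectl.

```python
import json

# Minimal Kubernetes Pod manifest as a dict (image names are hypothetical).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-pod"},
    "spec": {
        "containers": [
            # Two containers in one pod share the network (localhost)
            # and any declared volumes.
            {"name": "app", "image": "example/app:1.0"},
            {"name": "log-agent", "image": "example/log-agent:1.0"},
        ]
    },
}

print(json.dumps(pod, indent=2))
```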
Databases can be generally categorized as either:
• Relational databases
  • Structured data that strongly represents tabular data (tables, rows and columns)
  • Row-oriented or Column-oriented
• Non-relational databases
  • Semi-structured data that may or may not distantly resemble tabular data

Databases have a rich set of functionality:
• a specialized language to query (retrieve) data
• specialized modeling strategies to optimize retrieval for different use cases
• more fine-tuned control over the transformation of the data into useful data structures or reports

Normally the word "database" implies a relational, row-oriented data store.

Data warehouses generally perform aggregation:
• aggregation is grouping data, e.g. finding a total or average
• data warehouses are optimized around columns, since they need to quickly aggregate column data

Data warehouses are generally designed to be HOT. Hot means they can return queries very fast even though they have vast amounts of data.

Data warehouses are infrequently accessed, meaning they aren't intended for real-time reporting, but are queried maybe once or twice a day or once a week to generate business and user reports.

A data warehouse needs to consume data from relational databases on a regular basis.
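The row-oriented vs column-oriented distinction is easy to see in plain Python (a toy layout comparison, not a real storage engine): aggregating a column over row-oriented records means visiting every record and picking out one field, while a columnar layout already has the values contiguous.

```python
# Same data, two physical layouts (toy example).

# Row-oriented: each record stored together; good for transactional lookups.
rows = [
    {"id": 1, "region": "us", "sales": 100},
    {"id": 2, "region": "eu", "sales": 250},
    {"id": 3, "region": "us", "sales": 175},
]

# Column-oriented: each column stored together; good for aggregation.
columns = {
    "id": [1, 2, 3],
    "region": ["us", "eu", "us"],
    "sales": [100, 250, 175],
}

# Aggregation over rows must visit every record and extract one field...
total_from_rows = sum(r["sales"] for r in rows)

# ...while a columnar store just scans one contiguous array.
total_from_columns = sum(columns["sales"])

print(total_from_rows, total_from_columns)  # 525 525
```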
What is a Key/Value store?
A key-value database is a type of non-relational database (NoSQL) that uses a simple key-value method to store data. A key/value store keeps a unique key alongside a value, e.g.:

Key        Value
Data       1010101000101011001010010101001
Worf       0110101100010101010101011100010
Ro Laren   0010101001010110010101010101010

Key/value stores are dumb and fast. They generally lack features like:
• Relationships
• Indexes
• Aggregation

What is a Document store?
A document store is a NoSQL database that stores documents as its primary data structure. A document could be XML but more commonly is JSON or JSON-like. Document stores are a sub-class of Key/Value stores.

The components of a document store map onto relational database components, e.g. a Database (RDBMS) corresponds to a Database (Document).
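Both NoSQL shapes can be sketched in plain Python (toy in-memory stands-ins, not Firestore or any real service): a key/value store is essentially a dictionary of opaque values, and a document store is a key/value store whose values are structured, JSON-like documents grouped into collections. The crew names and fields below are illustrative.

```python
# Key/Value store: "dumb and fast" - one unique key, one opaque value.
kv_store = {}
kv_store["Worf"] = b"\x01\x00\x01"      # the value is just bytes
kv_store["Ro Laren"] = b"\x00\x01\x00"

# Document store: a sub-class of key/value where values are documents
# (commonly JSON), organized into collections.
document_store = {
    "crew": {  # collection
        "worf": {"name": "Worf", "rank": "Lieutenant", "posts": ["tactical"]},
        "ro": {"name": "Ro Laren", "rank": "Ensign"},
    }
}

print(kv_store["Worf"])
print(document_store["crew"]["worf"]["rank"])  # Lieutenant
```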
Networking
Virtual Private Cloud (VPC) is a logically isolated section of the Google Cloud network where you can launch Google Cloud resources.

You choose a range of IPs using a CIDR range, e.g. a CIDR range of 10.0.0.0/16 = 65,536 IP addresses.

Subnets are a logical partition of an IP network into multiple smaller network segments. You are breaking up the IP range of your VPC into smaller networks. Subnets need to have a smaller CIDR range than the VPC's, representing their portion of it, e.g. a subnet CIDR range of 10.0.0.0/24 = 256 IP addresses.

A Public Subnet is one that can reach the internet.
A Private Subnet is one that cannot reach the internet.

Cloud Armor: Help protect your services against DoS and web attacks.
Cloud Load Balancing: Scale and distribute app access with high-performance load balancing.
Cloud CDN: Cache your content close to your users using Google's global network.
Cloud NAT: Provision application instances without public IP addresses while allowing them to access the internet.
Cloud DNS: Publish and manage your domain names using Google's reliable, resilient, low-latency DNS serving.
Traffic Director: Deploy global load balancing across clusters and configure sophisticated traffic control policies for open service mesh.
Cloud Interconnect: Connect your infrastructure to Google Cloud on your terms, from anywhere.
Cloud VPN: Securely extend your on-premises network to Google's network through an IPsec VPN tunnel.
Cloud Router: Dynamically exchange routes between your Google Cloud Virtual Private Cloud (VPC) network and your on-premises networks using Border Gateway Protocol (BGP).
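The CIDR arithmetic above can be checked with Python's standard-library ipaddress module: a /16 yields 65,536 addresses, and a /24 subnet carved out of it yields 256.

```python
import ipaddress

# VPC range and a subnet carved out of it, matching the slide's example.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.0.0/24")

print(vpc.num_addresses)      # 65536
print(subnet.num_addresses)   # 256

# A subnet must be a smaller slice of the VPC's range.
print(subnet.subnet_of(vpc))  # True
```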
BigQuery

Serverless & Managed: No need for infrastructure management; Google handles scaling and updates.
Real-Time & Flexible: Supports continuous data streaming and handles structured and unstructured data.
Multicloud Analytics: Analyze data across multiple clouds using open formats like Apache Iceberg.
Efficient Storage & Querying: Columnar storage for fast queries, with built-in analytics and machine learning.
Security & Governance: Centralized access control, encryption, and compliance tools.
Flexible Pricing: On-demand (BigQuery operates on a pay-per-query model) or reserved pricing to optimize costs.

A typical BigQuery pipeline:
• Data Ingestion: Google Sheets, databases, applications, and Cloud Storage feed raw data into BigQuery.
• Data Prep and Storage: BigQuery and Cloud Storage hold the raw and refined data.
• Data Analysis and ML: BigQuery ML, Looker, Google Data Studio, and partner BI services consume the refined data.

Looker

Looker is a business intelligence (BI) tool that allows users to explore, visualize, and share their company's data to make informed business decisions. Looker lets non-technical users analyze data through easy-to-use dashboards and drag-and-drop tools, allowing them to create custom reports and insights without advanced data skills.

Data Import: Import data in various formats like CSV, JSON, Avro, Parquet, and Google Sheets for analysis (Excel files aren't supported directly).
Unified Data Platform: Real-time access to multiple data sources for consistent, up-to-date insights across teams.
Personalization & Customization: Quickly create custom dashboards and reports (Looks) for personalized insights.
Collaboration & Sharing: Easily share data and reports via email, links, or integrated tools.
Development & Integration: Use LookML and APIs to customize data models and embed insights into other apps.

Colossus

Colossus is a cluster-level file system, successor to the Google File System (GFS). It provides the underlying infrastructure for all Google Cloud storage services, from Firestore to Cloud SQL to Filestore, and Cloud Storage.
What is Apigee?

Apigee Corp. was an API management and predictive analytics software provider before its merger into Google Cloud.

Apigee is a founding member of the OpenAPI Initiative. The OpenAPI 3.0 Specification was originally known as the Swagger Specification.

The OpenAPI Specification is an open-source standard for writing the declarative structure of an Application Programming Interface (API). It can be written in either JSON or YAML format.

Cloud Service Providers (CSPs) will have a fully-managed API service offering known as an API Gateway. These API Gateways generally support the OpenAPI standard so you can quickly import and export your APIs.

API Management

Apigee API Platform: Develop, secure, deploy, and monitor your APIs everywhere. Expensive, but has many advanced features.
API Gateway / Cloud Endpoints: Develop, deploy, and manage APIs on Google Cloud. Cheap and simple, with good integrations with App Engine.
API Analytics: Get insight into operational and business metrics for your APIs.
Developer Portal: Create a lightweight portal that enables developers and API teams, using a turnkey self-service platform.
API Monetization: Realize value from your APIs with a flexible, easy-to-use solution.
Apigee Sense: Add intelligent behavior detection to protect APIs from attacks.
Apigee Hybrid: Manage APIs on-premises, on Google Cloud, or in a hybrid environment.
Cloud Healthcare API: Help secure APIs that power actionable healthcare insights.
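Since an OpenAPI document is just declarative JSON or YAML, a minimal spec can be built and serialized with nothing but the standard library. The API title and `/status` endpoint below are hypothetical placeholders.

```python
import json

# Minimal OpenAPI 3.0 document for a single hypothetical endpoint.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {
        "/status": {
            "get": {
                "summary": "Health check",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

# Emitted as JSON here; the same structure could be written as YAML instead.
print(json.dumps(spec, indent=2))
```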
Artifact Registry: Store, manage, and secure container images and language packages.
Container Registry: Store, manage, and secure your Docker container images.
Cloud SDK: Install a command-line interface to script and manage Google Cloud products from your own computer.
Cloud Code: Extend your IDE with tools to write, debug, and deploy Kubernetes applications.
Cloud Code for IntelliJ: Debug production cloud apps inside IntelliJ.
Cloud Build: Continuously build, test, and deploy containers, Java archives, and more using the Google Cloud infrastructure.
Cloud Source Repositories: Manage code and extend your Git workflow by connecting to Cloud Build, App Engine, Cloud Logging, Cloud Monitoring, Pub/Sub, and more.
Cloud Scheduler: Schedule batch jobs, big data jobs, and cloud infrastructure operations using a fully managed cron job service.
Cloud Tasks: Asynchronously execute, dispatch, and deliver distributed tasks.
Tools for PowerShell: Use PowerShell to script, automate, and manage Windows workloads running on Google Cloud.
Tools for Visual Studio: Develop ASP.NET apps in Visual Studio on Google Cloud.
Tools for Eclipse: Develop apps in the Eclipse IDE for Google Cloud.
Gradle App Engine Plugin: Build your App Engine projects using Gradle.
Maven App Engine Plugin: Build and deploy your App Engine projects using Maven.
Firebase Test Lab: Test your mobile apps across a wide variety of devices and device configurations.
Firebase Crashlytics: Get clear, actionable insight into app issues.
Tekton: Create CI/CD-style pipelines using Kubernetes-native building blocks.
Workflows: Orchestrate and automate Google Cloud and HTTP-based API services with serverless workflows.
Eventarc: Build event-driven solutions by asynchronously delivering events from Google services, SaaS, and your own apps.
OpenCue: Manage complex media rendering tasks using an open source render manager.
Transcoder API: Convert video files and package them for optimized delivery to web, mobile and connected TVs.
Google's Operations Suite allows you to monitor, log, trace, and profile your apps and services.

Cloud Monitoring: Provides visibility into the performance, availability, and overall health of cloud-powered applications.
Service Level Monitoring: Define and measure availability, performance and other service levels for cloud-powered applications.
Cloud Logging: Store, search, analyze, monitor, and alert on log data and events from Google Cloud and AWS.
Error Reporting: Identify and understand application errors.

Application Performance Management (APM):
Cloud Trace: Find performance bottlenecks in production.
Cloud Debugger: Investigate code behavior in production.
Cloud Profiler: Continuously gather performance information using a low-impact CPU and heap profiling service.

Google Maps Platform: Integrate static and dynamic maps into your apps.
Chrome Enterprise: Use Chrome management policies to meet productivity and security needs.
Firebase

Firebase is Google's fully-managed platform for rapidly developing and deploying web and mobile applications. It is a Platform as a Service utilizing Serverless technology.

Firebase offers the following services and features:
• Cloud Firestore
• Machine Learning
• Cloud Functions
• Authentication
• Hosting
• Cloud Storage
• Realtime Database
• Crashlytics
• Performance Monitoring
• Test Lab
• App Distribution
• Google Analytics
• In-App Messaging
• Predictions
• A/B Testing
• Cloud Messaging
• Remote Config
• Dynamic Links

Firebase is an alternative to Google Cloud for users who want to focus on building and deploying their application in a highly opinionated framework.

Pub/Sub

Google Cloud Pub/Sub is a messaging service that lets different applications communicate by sending and receiving messages in real-time.

Streaming & Real-Time Processing: Pub/Sub supports low-latency messaging for real-time data pipelines, enabling fast event-driven workflows.
Publisher & Subscriber Model: Publishers send events to a topic, and subscribers receive them asynchronously, separating event creation from processing.
Seamless Integrations: Works with tools like Dataflow for streaming, BigQuery for analytics, and Cloud Storage for distribution.
Use Cases: Ideal for real-time application data, IoT streams, cache refreshing, load balancing, and database replication.

There are two service options:
• Standard Pub/Sub: Fully managed, reliable, and auto-scalable.
• Pub/Sub Lite: Lower-cost, with manual capacity management and zonal storage.
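The publisher/subscriber decoupling behind Pub/Sub can be sketched in-process with a toy broker (illustrative only; real workloads use the google-cloud-pubsub client and managed topics): publishers push to a named topic, and every subscription attached to that topic receives its own copy of the message via its own queue.

```python
from collections import defaultdict, deque

# Toy Pub/Sub broker: topics fan messages out to per-subscription queues.
# (Illustrative sketch, not the google-cloud-pubsub API.)

class Broker:
    def __init__(self):
        self.subscriptions = defaultdict(list)  # topic -> list of queues

    def subscribe(self, topic):
        queue = deque()
        self.subscriptions[topic].append(queue)
        return queue

    def publish(self, topic, message):
        # The publisher never talks to subscribers directly:
        # event creation is fully separated from processing.
        for queue in self.subscriptions[topic]:
            queue.append(message)

broker = Broker()
analytics = broker.subscribe("orders")  # e.g. a pipeline feeding BigQuery
archive = broker.subscribe("orders")    # e.g. a sink writing to Cloud Storage

broker.publish("orders", {"order_id": 42, "total": 9.99})

print(analytics.popleft())  # each subscription receives its own copy
print(archive.popleft())
```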
Dataflow

Dataflow is a unified stream and batch data processing service that's serverless, fast, and cost-effective.
• Dataflow SQL: use your SQL skills to develop streaming Dataflow pipelines right from the BigQuery web UI.
• Flexible Resource Scheduling (FlexRS): advanced scheduling techniques to reduce batch processing costs.
• Dataflow templates: easily share your pipelines across your organization and team.
• Vertex AI Notebook integration.
• Private IPs: disable public IP and operate within the GCP network for added security.
• Horizontal scaling: automatically scales.
• Apache Beam: integrates with Apache Beam.

What is Apache Beam?
An open source, unified model for defining both batch and streaming data-parallel processing pipelines.

Database Migration Service (DMS)
When you're migrating open-source relational databases. Serverless, easy, minimal-downtime migrations to Cloud SQL.

BigQuery Data Transfer Service
Automate scheduled data movement into BigQuery using a fully managed data import service.

Migrate to Virtual Machines (formerly Migrate for Compute Engine)
When you're migrating VMs. Migrate servers and VMs from on-premises or another cloud to Compute Engine.

Migrate for Anthos
When you're migrating containers. Migrate VMs from on-premises or other clouds directly into containers in GKE.
Types of Migration — Rehost (Lift-and-Shift)
Move workloads from a source environment to a target environment with minor or no modifications or refactoring.
Ideal when:
• a workload can operate as-is in the target environment
• there is little or no business need for change
Considerations:
• Requires the least amount of time because the amount of refactoring is kept to a minimum.
• The team can continue to use the same set of tools and skills that they were using before.
• Doesn't take full advantage of cloud platform features: horizontal scalability, fine-grained pricing, highly managed services.

Types of Migration — Replatform (Lift and Optimize)
Move workloads to the cloud with minor optimizations for cloud benefits.
Ideal when:
• The app can run in the cloud but needs slight optimization.
• Small changes improve cloud performance or cost-efficiency.
Considerations:
• Requires more effort than lift-and-shift.
• Optimizations might need code or configuration changes.
• Learning new cloud features or tools may be necessary.
Types of Migration — Refactor (Move and Improve)
Modify and modernize the workload while migrating to take advantage of cloud-native capabilities.
Ideal when
• Current app architecture isn't cloud-ready
• Major updates or improvements are needed for performance
Considerations
• Takes more time than basic migration
• Requires refactoring the code during migration
• Demands extra effort and new skills for optimizing in the cloud

Types of Migration — Rebuild (Remove and Replace; sometimes called Rip and Replace)
Decommission an existing app and completely redesign and rewrite it as a cloud-native app.
Ideal when
• Current app isn't meeting your goals
• You want to remove legacy technical debt
Considerations
• Requires the most amount of time to develop
• Requires the most amount of learning
Migration Path
There are four phases of your migration.
• Assess — perform a thorough assessment and discovery of your existing environment in order to understand your app and environment inventory, identify app dependencies and requirements, perform total cost of ownership calculations, and establish app performance benchmarks.
• Deploy — design, implement and execute a deployment process to move workloads to Google Cloud. You might also have to refine your cloud infrastructure to deal with new needs.

Migration Path — Phase 1
In the assessment phase, you gather information about the workloads you want to migrate and their current runtime environment.
• Take inventory — build a list of all of your machines, hardware specifications, operating systems, and licenses.
• Catalog apps — build a catalog matrix to help you organize apps into categories based on their complexity and risk in moving to Google Cloud.
• Educate your organization about Google Cloud — train and certify your software and network engineers on how the cloud works and what Google Cloud products are available.
Anthos
Anthos is a modern application management platform used for managing hybrid architectures that span from Google Cloud to other clouds such as AWS or on-premises datacenters running VMware.
Anthos is a single control plane to manage Kubernetes compute in hybrid scenarios.
Core components of Anthos
• Infrastructure, container, and cluster management
• Managed service mesh
• Multicluster management
• Configuration management
• Migration
• Service management
• Serverless
• Secure software supply chain
• Logging and monitoring
• Marketplace

Migrate for Anthos
Migrate for Anthos and Google Kubernetes Engine (GKE) is a tool to move and automatically convert workloads directly into containers in Google Kubernetes Engine (GKE) and Anthos.
With Migrate for Anthos, you can migrate your VMs from supported source platforms to:
• Google Kubernetes Engine (GKE)
• Anthos
• Anthos clusters on VMware
• Anthos clusters on AWS
Use auto-generated container artifacts including container images, Dockerfiles, deployment YAMLs and persistent data volumes to deploy migrated workloads and integrate with services such as Anthos Service Mesh, Anthos Config Management, Stackdriver, and Cloud Build for maintenance using CI/CD pipelines.
Migrate for Anthos is offered at no charge and no Anthos subscription is required when migrating to GKE. Charges for other GCP services (e.g. compute, storage, network) still apply.
Storage Transfer Service
Storage Transfer Service allows you to quickly import online data into Cloud Storage.
Set up a repeating schedule for transferring data, as well as transfer data within Cloud Storage from one bucket to another.
Enables you to:
• Move or back up data to a Cloud Storage bucket either from other cloud storage providers or from your on-premises storage.
• Move data from one Cloud Storage bucket to another, so that it is available to different groups of users or applications.
• Periodically move data as part of a data processing pipeline or analytical workflow.

Transfer Appliance
Transfer Appliance is a hardware appliance you can use to securely migrate large volumes of data. Migrate hundreds of terabytes, up to 1 petabyte.
Comes in two types: Rackable (7 TB, 40 TB, 300 TB) and Freestanding (40 TB, 300 TB).
Security features (safe to connect)
• Tamper resistant — cannot be easily opened; tamper-evident tags are applied to the shipping case
• Ruggedized
• Trusted Platform Module (TPM) chip — immutable root filesystem; verifies software components haven't been tampered with
• Hardware attestation — validate the appliance before you can connect it to your device and copy data to it
Security features (safe in transit)
• AES 256 encryption
• Customer-managed encryption keys
• NIST 800-88 compliant data erasure
Performance features
• All SSD drives — no moving parts, very fast IOPS
• Multiple network connectivity options — 10 Gbps or 40 Gbps transfer speed
• Scalability with multiple appliances — use multiple appliances to increase transfer speed
• Globally distributed processing — ships quickly to and from the datacenter to Google Cloud
• Minimal software — use common software already on your Linux, Mac, or Windows system

What is Artificial Intelligence (AI)?
Machines that perform jobs that mimic human behavior.
What is Machine Learning (ML)?
Machines that get better at a task without explicit programming.
What is Deep Learning (DL)?
Machines that have an artificial neural network inspired by the human brain to solve complex problems.
What is GenAI?
Generative AI is a specialized subset of AI that generates content, e.g. images, video, text, audio.
AI vs Machine Learning
• Functionality — AI focuses on understanding and decision-making; ML learns from data to make predictions or automate decision-making.
• Data handling — AI analyzes and makes decisions based on existing data; ML uses data to train models and make predictions.
• Applications — AI spans various sectors, including data analysis, automation, natural language processing, and healthcare; ML is used in recommendation systems, fraud detection, and predictive analytics.

AI/ML vs Data Analytics/BI
• Functionality — AI/ML uses models to predict outcomes and automate decision-making; Data Analytics/BI analyzes historical data for insights.
• Data handling — AI/ML learns from large datasets to make predictions and automate tasks; Data Analytics/BI processes and visualizes existing data to find patterns.
• Applications — AI/ML is used in automation, predictive analytics, personalization, and innovation; Data Analytics/BI is used in reporting, dashboards, and decision-making based on past trends.
• Outcome — AI/ML creates automated processes and continuous learning from data; Data Analytics/BI provides descriptive and diagnostic insights for decision-making.
Text-to-Speech
Convert text to natural-sounding speech using ML.
Speech-to-Text
Convert speech to text using the power of ML.
BigQuery ML
BigQuery ML lets you build machine learning models directly in BigQuery using SQL, making ML accessible to data analysts without coding expertise.
• User-friendly — empowers SQL users to build ML models without Python or Java.
• Faster development — ML models are created within BigQuery, avoiding the need for data movement.
• Multiple interfaces — accessible via Google Cloud Console, BigQuery API, and more.
• Vertex AI integration — manage and deploy BigQuery ML models with Vertex AI.
• Supported models — offers built-in models like linear regression and integrates externally trained models from Vertex AI.
• Pretrained models — import models from ONNX, TensorFlow, and others for prediction.
Types of models BigQuery ML can create:
• Linear Regression
• Logistic Regression
• K-means Clustering
• Time Series Forecasting (ARIMA/Auto-ARIMA)
• Deep Neural Networks (DNN)
• Matrix Factorization
• Principal Component Analysis (PCA)
• And more
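Because model creation is just SQL, the whole workflow fits in two statements. Below is a hedged sketch: the dataset, table, and column names (`mydata.trips`, `fare`, etc.) are invented for illustration, while the statement shape follows BigQuery ML's standard CREATE MODEL and ML.PREDICT syntax. In practice you would submit these via the console, the bq CLI, or the BigQuery client library.

```python
# Hypothetical table/column names; the SQL shape is standard BigQuery ML.
create_model_sql = """
CREATE OR REPLACE MODEL `mydata.fare_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['fare']) AS
SELECT trip_miles, trip_minutes, fare
FROM `mydata.trips`
"""

predict_sql = """
SELECT *
FROM ML.PREDICT(MODEL `mydata.fare_model`,
                (SELECT 3.2 AS trip_miles, 11.0 AS trip_minutes))
"""

# These strings would be submitted to BigQuery; shown here for illustration.
print(create_model_sql.strip().splitlines()[0])
```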
166
Internet communication Hardware infrastructure From the physical premises to the purpose-
reCAPTCHA Enterprise
• Communications over the internet to our public cloud services built servers, networking equipment, and custom security chips to the
Help protect your website from fraudulent low-level software stack running on every machine, our entire
are encrypted in transit.
activity, spam, and abuse. • network and infrastructure have multiple layers of protection hardware infrastructure is Google-controlled, -secured, and -hardened.
to defend our customers against denial-of-service attacks.
Web Risk Data centers Google data centers feature layered security with custom-
Detect malicious URLs on your website and in Identity designed electronic access cards, alarms, vehicle access barriers,
client applications. • Identities, users, and services are strongly authenticated. perimeter fencing, metal detectors, biometrics, and laser beam
• Access to sensitive data is protected by advanced tools like intrusion detection. They are monitored 24/7 by high-resolution
phishing-resistant security keys. cameras that can detect and track intruders. Only approved employees
with specific roles may enter.
Storage services
• Data stored on our infrastructure is automatically encrypted at
Continuous availability Infrastructure underpins how Google Cloud
rest and distributed for availability and reliability.
delivers services that meet our high standards for performance,
• guards against unauthorized access and service interruptions.
resilience, availability, correctness, and security. Design, operation, and
169 170
delivery all play a role in making services continuously available.
Compliance Reports
Downloadable PDFs that prove that GCP is compliant with various compliance and security standards.
• Payment Card Industry Data Security Standard (PCI DSS) — a set of security standards designed to ensure that ALL companies that accept, process, store or transmit credit card information maintain a secure environment.
• Personal Health Information Protection Act (PHIPA) — an Ontario provincial law (Canada) that regulates patient Protected Health Information.
• Health Insurance Portability and Accountability Act (HIPAA) — US federal law that regulates patient Protected Health Information.
• Federal Risk and Authorization Management Program (FedRAMP) — US government standardized approach to security authorizations for Cloud Service Offerings.
• Criminal Justice Information Services (CJIS) — any US state or local agency that wants to access the FBI's CJIS database is required to adhere to the CJIS Security Policy.
Google provides resources on privacy regulations such as the LGPD, GDPR, CCPA, the Australian Privacy Act, My Number Act, and PIPEDA, among others.
Cloud Armor
Cloud Armor is a DDoS protection and Web Application Firewall (WAF) service.
Google Cloud Data Loss Prevention
Cloud Data Loss Prevention (DLP) detects and protects sensitive information within GCP storage repositories.
What is Personally Identifiable Information (PII)?
Any data that can be used to identify a specific individual: birthday, government ID, full name, email address, mailing address, etc.
What is Personal/Protected Health Information (PHI)?
Any data that can be used to identify health information about a patient.
• Provides tools to classify, mask, tokenize, and transform sensitive data
• Supports structured and unstructured data
• Create dashboards and audit reports
• Automate tagging, remediation, or policy based on findings
• Connect DLP results into Security Command Center and Data Catalog, or export to your own Security Information and Event Management (SIEM) or governance tool
• Schedule inspection jobs directly in the console UI
• Over 120 built-in information types (infoTypes) — infoTypes define what sensitive information to scan for

BeyondCorp
The Zero Trust model operates on the principle of "trust no one, verify everything."
Malicious actors being able to bypass conventional access controls demonstrates that traditional security measures are no longer sufficient.
BeyondCorp is Google's implementation of the zero trust model.
BeyondCorp allows for:
• single sign-on
• access control policies
• access proxy
• user-based authentication
• device-based authentication
• authorization
By shifting access controls from the network perimeter to individual users, BeyondCorp enables secure work from virtually any location without the need for a traditional VPN.
The BeyondCorp principles:
• Access to services must not be determined by the network from which you connect
• Access to services is granted based on contextual factors from the user and their device
• Access to services must be authenticated, authorized, and encrypted
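To make the masking idea from the DLP feature list concrete, here is a minimal sketch. This is not the Cloud DLP API: the regex below merely stands in for DLP's managed EMAIL_ADDRESS infoType detector, and the record text is invented.

```python
import re

# Illustrative stand-in for a DLP infoType detector (EMAIL_ADDRESS).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_emails(text: str, char: str = "#") -> str:
    """Replace every detected email address with a fixed-length mask."""
    return EMAIL.sub(lambda m: char * len(m.group()), text)

record = "Contact jane.doe@example.com for details."
print(mask_emails(record))
```

Real DLP inspections work the same way at a high level (detect an infoType, then transform the finding), but use managed detectors and transformations such as tokenization instead of a local regex.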
VPC Service Controls
VPC Service Controls allows you to create a service perimeter.
VPC service perimeters function like a firewall for GCP APIs, and you can apply access levels.
Access policies are automatically created for you when you create an access level, service perimeter, or turn on IAP. They cannot be directly managed by the customer.

Cloud Identity-Aware Proxy (IAP)
Cloud Identity-Aware Proxy (IAP) lets you establish a central authorization layer for applications accessed by HTTPS, so you can use an application-level access control model instead of relying on network-level firewalls.
You can define access policies centrally and apply them to all of your applications and resources.
Use IAP when you want to enforce access control policies for applications and resources.
IAP lets you manage who has access to services hosted on App Engine, Compute Engine, or an HTTPS Load Balancer.
When IAP is turned on, you add members and their roles in the side panel.
• Identity and context-aware access control — policies based on user identity, device health, and contextual factors
• Integrated threat and data protection — prevent data loss, stop common threats; real-time alerts and detailed reporting
• Easy adoption with an agentless approach — a non-disruptive overlay to your existing architecture; no need to install additional agents
• Rely on Google Cloud's global infrastructure — the scale, reliability, and security of Google's network; 144 edge locations in over 200 countries and territories
• Support for your environment: cloud, on-premises, or hybrid — access SaaS apps, web apps, and cloud resources from anywhere

Business Benefits of SecOps
• Reduced risk of data breaches — fix vulnerabilities to lower data breach risks.
• Increased uptime — swift response minimizes downtime and keeps services running.
• Improved compliance — helps meet security regulations like GDPR (General Data Protection Regulation).
• Enhanced employee productivity — training reduces human error and promotes a secure work environment.
Data Sovereignty and Data Residency
Data sovereignty ensures that data is subject to the laws of the country where it is stored, protecting individual rights, such as GDPR in the EU.
Data residency refers to the physical location of data storage, with some countries mandating that data be stored within their borders for compliance.
How Google Cloud helps control data location
• Google Cloud offers options to select regions for data storage, ensuring compliance with local regulations (e.g., EU-based regions).
• Features like VPC Service Controls and Google Cloud Armor restrict data access and traffic location, ensuring compliance with data residency and sovereignty requirements.

Directory Service
What is a directory service?
A directory service maps the names of network resources to their network addresses.
A directory service is a shared information infrastructure for locating, managing, administering and organizing resources:
• Volumes
• Folders
• Files
• Printers
• Users
• Groups
• Devices
• Telephone numbers
• other objects
A directory service is a critical component of a network operating system.
A directory server (name server) is a server which provides a directory service.
Each resource on the network is considered an object by the directory server. Information about a particular resource is stored as a collection of attributes associated with that resource or object.
Well-known directory services:
• Domain Name System (DNS) — the directory service for the internet
• Microsoft Entra ID — formerly Azure Active Directory
• Apache Directory Server
• Oracle Internet Directory (OID)
• OpenLDAP
• Cloud Identity
• JumpCloud
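The name-to-address mapping a directory service performs, with each resource stored as an object with a collection of attributes, can be illustrated with a toy lookup table. All names and addresses below are invented.

```python
# A toy directory: each resource is an object (dict) with attributes,
# and lookup resolves a name to a network address, like a name server.
directory = {
    "printer-3rd-floor": {"address": "10.0.4.21", "type": "printer"},
    "hr-share":          {"address": "10.0.7.5",  "type": "volume"},
    "jdoe":              {"address": "10.0.9.14", "type": "user"},
}

def lookup(name: str) -> str:
    """Resolve a resource name to its network address."""
    return directory[name]["address"]

print(lookup("printer-3rd-floor"))
```

Real directory services (DNS, LDAP-based servers) add hierarchy, replication, and access control on top of this same basic idea.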
OpenID
OpenID is an open standard and decentralized authentication protocol, e.g. being able to log in to a different social media platform using a Google or Facebook account.
OpenID is about providing who you are.

LDAP
LDAP enables same-sign-on. Same-sign-on allows users to use a single ID and password, but they have to enter it every time they want to log in.
Service Level Agreements
What is a Service Level Agreement (SLA)?
An SLA is a formal commitment about the expected level of service between a customer and provider.
When a service level is not met and the customer meets its obligations under the SLA, the customer is eligible to receive compensation, e.g. financial or service credits.
What is a Service Level Indicator (SLI)?
A metric/measurement that indicates what measure of performance a customer is receiving at a given time.
An SLI metric could be uptime, performance, availability, throughput, latency, error rate, durability, or correctness.
What is a Service Level Objective (SLO)?
The objective that the provider has agreed to meet.
SLOs are represented as a specific target percentage over a period of time, e.g. an availability SLO of 99.99% over a period of 3 months.

GCP — Service Level Agreements
Compute Engine (Monthly Uptime)
• Instances in multiple zones — >= 99.99%
• A single instance — >= 99.5%
• Load balancing — >= 99.99%
Cloud SQL, Cloud Functions — Monthly Uptime Percentage to Customer of at least 99.95%
Cloud Storage (Monthly Uptime)
• Standard storage class in a multi-region or dual-region location — >= 99.95%
• Standard storage class in a regional location; Nearline or Coldline storage class in a multi-region or dual-region location — >= 99.9%
• Nearline or Coldline storage class in a regional location; Durable Reduced Availability storage class in any location — >= 99.0%
Third-Party Technology Support is available to Customer Care customers with Enhanced or Premium Support.
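The uptime percentages above translate directly into allowed downtime. A quick back-of-the-envelope calculation, assuming a 30-day month for simplicity, shows why the difference between "three nines" and "four nines" matters:

```python
def allowed_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Maximum downtime per period that still meets the uptime target."""
    total_minutes = days * 24 * 60          # 43,200 minutes in a 30-day month
    return (1 - uptime_pct / 100) * total_minutes

# 99.95% over a 30-day month allows about 21.6 minutes of downtime
print(round(allowed_downtime_minutes(99.95), 1))
# 99.99% allows only about 4.3 minutes
print(round(allowed_downtime_minutes(99.99), 2))
```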
Premium Support — Assured Support
Assured Support enables you to secure your regulated workloads and accelerate your path to running compliant workloads on Google Cloud.
To help you meet your compliance requirements, Assured Support ensures that your workloads are handled by Google support personnel that possess certain attributes. The supported personnel attributes include geographical access location (United States only), background checks, and "US Person" status.
Regulated workloads
• FedRAMP Moderate Technical Support Services
• US Regions and Support Technical Support Services
• IL4 Technical Support Services
• CJIS Technical Support Services
• FedRAMP High Technical Support Services

Premium Support — Mission Critical Services
Mission Critical Services assess and mitigate potential service disruptions for environments that are essential to an organization and cause significant impact to operations when disrupted. To prepare you for this service, Google Cloud analyzes your current operations and onboards you to Mission Critical Operations mode, a mode standardized by Google.
The onboarding process includes the following:
• Assessing key elements of your mission critical environment, including architecture, observability, measurement, and control.
• Delivering a gap analysis to help you prepare for mission critical operations.
• Bringing your organization into Mission Critical Operations mode to drive continuous improvement of your environment through proactive and preventative engagement.
After you've onboarded, you receive the following services:
• Drills, testing, and training for your mission critical environments
• Customer-centric incident reporting
• Proactive monitoring and case generation
• Priority 0 (P0) support case filing privileges with 5-minute response time
• War room incident management
• Impact prevention follow-ups
Premium Support — Customer Aware Support
Customer Aware Support is a service that provides you with a jump start to resolving technical issues and improving your Premium Support experience.

Premium Support — Operational Health Reviews
Operational Health Reviews help you measure your progress and proactively address blockers to your goals with Google Cloud.
Premium Support — Event Management Service
Premium Support's Event Management Service is for planned peak events, such as a product launch or major sales event. With this service, Customer Care partners with your team to create a plan and provide guidance throughout the event.
With Event Management Service, your team is supported with the following tasks:
• Preparing your systems for key moments and heavy workloads.
• Running disaster tests to proactively resolve potential issues.
• Developing and implementing a faster path to resolution to reduce the impact of any issues that might occur.
After the event, your TAM works with you to review the outcomes and make recommendations for future events. To initiate the Event Management Service for an upcoming event, contact your TAM.

Premium Support — Training Credits
With Premium Support, you receive training credits for the Google Cloud Qwiklabs that you can distribute to users in your organization. Your TAM identifies learning opportunities and indicates which training resources can be most beneficial to your organization. With this training, your developers have the resources to find answers quickly and test out ideas in safe environments.
For each 1-year contract with Premium Support, you receive 6,250 credits.
Premium Support — New Product Previews
As a Premium Support customer, you have access to Previews of new Google Cloud products. By previewing a product, you have the opportunity to prepare your architecture for a new solution before it becomes more broadly available to the market.
With your organization's goals in mind, your TAM analyzes your Google Cloud projects and usage to identify opportunities to test and use new products and solutions. When your TAM identifies an opportunity, they introduce you to the product team and help you gain access to the Preview. As you test the product, your TAM also shares your feedback with the product team.
In addition to working with your TAM, you can request and manage access to Previews via the Cloud Console. In the Cloud Console, you can check the status of your requests and manage which users in your organization have access to Previews.

Premium Support — Technical Account Manager
As a Premium Support customer, you are assigned a named Technical Account Manager (TAM). Technical Account Managers are trusted technical advisors that focus on operational rigor, platform health, and architectural stability for your organization.
Your Technical Account Manager supports and guides you in the following ways:
• Assists you with onboarding to Premium Support.
• Assesses your cloud maturity and works with you to create an adoption roadmap and operating model.
• Advises on best practices for using Google Cloud.
• Delivers frequent Operational Health Reviews.
• Connects you with Google technical experts, such as Product Managers and Support Engineers.
• Works with you on support cases and case escalations. For high-priority cases, your TAM analyzes the incident and identifies root causes.
Billing Account
A Cloud Billing account is used to define who pays for a given set of Google Cloud resources and is connected to a Google payments profile.

Cloud Billing Account vs Payments Profile
Cloud Billing Account
• Is a cloud-level resource managed in the Cloud Console.
• Tracks all of the costs (charges and usage credits) incurred by your Google Cloud usage.
• Can be linked to one or more projects. Project usage is charged to the linked Cloud Billing account.
• Results in a single invoice per Cloud Billing account.
• Operates in a single currency.
• Defines who pays for a given set of resources.
• Is connected to a Google Payments Profile, which includes a payment instrument, defining how you pay for your charges.
• Has billing-specific roles and permissions to control accessing and modifying billing-related functions (established by IAM roles).
Payments Profile
• Is a Google-level resource managed at payments.google.com.
• Connects to ALL of your Google services (such as Google Ads, Google Cloud, and Fi phone service).
• Processes payments for ALL Google services (not just Google Cloud).
• Stores information like name, address, and tax ID (when required legally) of who is responsible for the profile.
• Stores your various payment instruments (credit cards, debit cards, bank accounts, and other payment methods you've used to buy through Google in the past).
• Functions as a document center, where you can view invoices, payment history, and so on.
• Controls who can view and receive invoices for your various Cloud Billing accounts and products.

A billing account includes one or more billing contacts defined on the payments profile.
Billing accounts can have sub-accounts for resellers, so you can bill resources to be paid by your customer.
There are 2 types of Cloud Billing accounts: self-serve (online) and invoiced (offline).
There are 2 types of payments profiles: individual and business.
Charging Cycle
For self-serve Cloud Billing accounts, your Google Cloud costs are charged automatically in one of two ways:
• Monthly billing — costs are charged on a regular monthly cycle.
• Threshold billing — costs are charged when your account has accrued a specific amount.
For self-serve Cloud Billing accounts, your charging cycle is automatically assigned when you create the account. You do not get to choose your charging cycle and you cannot change the charging cycle.
For invoiced Cloud Billing accounts, you typically receive one invoice per month, and the amount of time you have to pay your invoice (your payment terms) is determined by the agreement you made with Google.

Cloud Billing IAM Roles
Cloud Billing lets you control which users have administrative and cost-viewing permissions for specified resources by setting Identity and Access Management (IAM) policies on the resources.
To grant or limit access to Cloud Billing, you can set an IAM policy at the organization level, the Cloud Billing account level, and/or the project level.
Cloud Billing roles in IAM
• Billing Account Creator — create new self-serve (online) billing accounts
• Billing Account Administrator — manage billing accounts (but not create them)
• Billing Account User — link projects to billing accounts
• Billing Account Viewer — view billing account cost information and transactions
• Billing Account Costs Manager — view and export cost information of billing accounts
• Project Billing Manager — link/unlink the project to/from a billing account
Budget Alerts
Set multiple alert thresholds to reduce spending surprises and unexpected cost overruns. You can set multiple thresholds that preemptively warn you when you approach your budget's limit.
Notification options
• Email alerts to billing admins and users
• Link Monitoring email notification channels to this budget
• Connect a Pub/Sub topic to this budget
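The threshold logic can be sketched as a small function. The threshold values below are illustrative; Cloud Billing budgets let you choose your own percentages.

```python
def crossed_thresholds(budget: float, spend: float,
                       thresholds=(0.5, 0.9, 1.0)):
    """Return the alert thresholds (fractions of budget) already reached."""
    return [t for t in thresholds if spend >= budget * t]

# With a $1,000 budget and $920 spent, the 50% and 90% alerts have fired:
print(crossed_thresholds(1000, 920))
```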
Billing Reports
This report shows a summarized view of monthly charges and credits.

Cost Table Report
Use the cost table report to access and analyze the details of your invoices and statements.
This report shows the following pricing information:
• Displays SKU prices specific to the selected Cloud Billing account.
• If your Cloud Billing account has negotiated contract pricing, each SKU displays the list price, your contract price, and your effective discount.
• If a SKU is subject to tiered pricing, each pricing tier for a SKU is listed as a separate row.
• All the prices are shown in the currency of the selected billing account.
• The report view is customizable and downloadable to CSV for offline analysis.

Pricing models
• Free Trial — a risk-free trial period, with specific limitations
• Free Tier — services that have minimum monthly limits of free use
• On-Demand — the standard price paid by hour, minute, second or millisecond (varies per service)
• Committed Use Discounts — a lower price than on-demand for agreeing to a 1-year or 3-year contract
• Sustained Use Discounts — passive savings when using resources past a period of continuous use
• Preemptible VM instances — instances with deep savings but at the cost of being interrupted
• Flat-Rate Pricing — prefer a stable cost for queries rather than paying on-demand (BigQuery only)
• Sole-Tenant Node Pricing — dedicated compute, e.g. single-tenant virtual machines
Free-Tier
• App Engine — 28 hours per day of "F" instances; 9 hours per day of "B" instances; 1 GB of egress per day. The Google Cloud Free Tier is available only for the Standard Environment.
• Artifact Registry — 0.5 GB storage per month
• AutoML Natural Language — 5,000 units of prediction per month
• AutoML Tables — 6 node hours for training and prediction
• AutoML Translation — 500,000 translated characters per month
• AutoML Video Intelligence — 40 node hours for training; 5 node hours for prediction
• AutoML Vision — 40 node hours for training and online prediction; 1 node hour for batch classification prediction; 15 node hours for Edge training
• BigQuery — 1 TB of querying per month; 10 GB of storage each month
• Cloud Build — 120 build-minutes per day
• Cloud Functions — 2 million invocations per month (includes both background and HTTP invocations); 400,000 GB-seconds and 200,000 GHz-seconds of compute time; 5 GB network egress per month
• Cloud Logging and Cloud Monitoring — free monthly logging allotment; free monthly metrics allotment
• Cloud Natural Language API — 5,000 units per month
• Cloud Run — 2 million requests per month; 360,000 GB-seconds of memory and 180,000 vCPU-seconds of compute time; 1 GB network egress from North America per month. The Free Tier is available only for Cloud Run.
• Cloud Shell — free access to Cloud Shell, including 5 GB of persistent disk storage
• Cloud Source Repositories — up to 5 users; 50 GB of storage; 50 GB egress per billing account
• Cloud Storage — 5 GB-months of regional storage (US regions only); 5,000 Class A operations per month; 50,000 Class B operations per month; 1 GB network egress from North America to all region destinations (excluding China and Australia) per month. Free Tier is only available in us-east1, us-west1, and us-central1 regions; usage calculations are combined across those regions.
• Cloud Vision — 1,000 units per month
• Firestore — 1 GB storage; 50,000 reads, 20,000 writes, 20,000 deletes per day
• Google Kubernetes Engine — no cluster management fee for one Autopilot or Zonal cluster. For clusters created in Autopilot mode, pods are billed per second for vCPU, memory and disk resource requests. For clusters created in Standard mode, each user node is charged at standard Compute Engine pricing.
Free-Tier

Compute Engine
• 1 non-preemptible f1-micro VM instance per month within: us-west1, us-central1, us-east1
• 30 GB-months HDD
• 5 GB-month snapshot storage in the following regions: us-west1, us-central1, us-east1, asia-east1, europe-west1
• 1 GB network egress from North America to all region destinations (excluding China and Australia) per month
• Your Free Tier f1-micro instance limit is by time, not by instance. Each month, eligible use of all of your f1-micro instances is free until you have used a number of hours equal to the total hours in the current month. Usage calculations are combined across the supported regions.
• Google Cloud Free Tier does not include external IP addresses.
• Compute Engine offers discounts for sustained use of virtual machines. Your Free Tier use doesn't factor into sustained use.
• GPUs and TPUs are not included in the Free Tier offer. You are always charged for GPUs and TPUs that you add to VM instances.

Google Maps Platform
• For more information, see the Pricing page.

Pub/Sub
• 10 GB of messages per month

Speech-to-Text
• 60 minutes per month

Video Intelligence API
• 1,000 units per month

Workflows
• 5,000 internal steps per month
• 2,000 external HTTP calls per month

On-Demand

On-demand pricing is when you pay for a Google Cloud resource based on a consumption-based model.

A consumption-based model means you only pay for what you use, based on a consumption metric:
• By time: hourly, minutes, seconds, milliseconds
• Can be multiplied by configuration variables: vCPUs and memory
• By API calls: e.g. $1 per 1,000 transactions

On-Demand is ideal for:
• low cost and flexibility
• paying only per hour
• short-term, spiky, unpredictable workloads
• workloads that cannot be interrupted
• first-time apps
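The consumption metrics above can be sketched as a toy bill calculator. Every rate here (the vCPU, memory, and per-call prices) is made up for illustration and is not a real Google Cloud price:

```python
# Illustrative sketch of a consumption-based (on-demand) bill.
# All rates are placeholder assumptions, not real GCP prices.

def vm_cost(hours, vcpus, mem_gb, vcpu_rate=0.02, mem_rate=0.003):
    """Time-based metric, multiplied by configuration (vCPUs and memory)."""
    return hours * (vcpus * vcpu_rate + mem_gb * mem_rate)

def api_cost(calls, rate_per_1000=1.00):
    """API-call-based metric: e.g. $1 per 1,000 transactions."""
    return (calls / 1000) * rate_per_1000

# 100 hours of a 2 vCPU / 8 GB machine, plus 250,000 API calls
total = vm_cost(100, 2, 8) + api_cost(250_000)
print(f"${total:.2f}")  # $256.40
```

The point is the shape of the model: a time metric scaled by machine configuration, plus a per-transaction metric, with no upfront commitment.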
Committed Use discounts (CUD)

Committed Use Discounts (CUD) let you commit to a contract for deeply discounted virtual machines on Google Compute Engine
• simple and flexible, and require no upfront costs
• ideal for workloads with predictable resource needs
• you purchase compute resources (vCPUs, memory, GPUs, and local SSDs)
• discounts apply to the aggregate number of vCPUs, memory, GPUs, and local SSDs within a region
• not affected by changes to your instance's machine setup
• you commit to payment terms of 1 year or 3 years
• purchase a committed use contract for a single project, or purchase multiple contracts shared across many projects by enabling Shared Discounts
• you are billed monthly for the resources you purchased for the duration of the term, whether or not you use the services
• 57% discount — most machine types and GPUs
• 70% discount — memory-optimized machine types

Sustained Use discounts (SUD)

Sustained use discounts are automatic discounts for running specific Compute Engine resources for a significant portion of the billing month

Sustained use discounts apply to the following resources:
• The vCPUs and memory for:
  • general-purpose custom and predefined machine types
  • compute-optimized machine types
  • memory-optimized machine types
  • sole-tenant nodes
    • including the 10% premium cost, even if the vCPUs and memory in those nodes are covered by committed use discounts
• GPU devices

Applied on incremental use after you reach certain usage thresholds:
• you pay only for the number of minutes that you use an instance
• Compute Engine automatically gives you the best price
• there's no reason to run an instance for longer than you need it
• discounts automatically apply to VMs created by both Google Kubernetes Engine and Compute Engine
• discounts do not apply to:
  • VMs created using the App Engine flexible environment and Dataflow
  • E2 and A2 machine types
• Sustained use discounts range up to 30% for some machine types and up to 20% for others
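The incremental model above can be sketched with the published N1 tier rates, where each successive quarter of the month is billed at 100%, 80%, 60%, then 40% of the base rate; a full month averages out to a 30% discount. The tiers below are for illustration — check current GCP pricing for exact rates per machine type:

```python
# Simplified sustained use discount model (classic N1 incremental rates).
# Each successive 25% of the billing month is billed at a lower rate.
TIERS = [1.00, 0.80, 0.60, 0.40]  # price multiplier per quarter of the month

def effective_multiplier(fraction_of_month):
    """Average price multiplier for running a VM this fraction of the month."""
    billed = 0.0
    remaining = fraction_of_month
    for rate in TIERS:
        used = min(remaining, 0.25)
        billed += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return billed / fraction_of_month

print(effective_multiplier(1.0))   # full month    -> 0.70 (a 30% discount)
print(effective_multiplier(0.25))  # quarter month -> 1.00 (no discount yet)
```

This is why the discount is called incremental: the first quarter of the month is always full price, and the discount only kicks in on usage past each threshold.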
Flat-Rate Pricing

BigQuery offers flat-rate pricing for high-volume or enterprise customers who prefer a stable monthly cost for queries rather than paying the on-demand price per TB of data processed.

When you enroll in flat-rate pricing, you purchase dedicated query processing capacity, measured in BigQuery slots.

Your queries consume this capacity, and you are not billed for bytes processed. If your capacity demands exceed your committed capacity, BigQuery will queue up slots, and you will not be charged additional fees.

To enable flat-rate pricing, use BigQuery Reservations.

Preemptible VM instances

Preemptible Virtual Machines (pVMs) are instances that run at a much lower price than normal instances but can be stopped at any time (preempted) to make room for customers who will pay the normal price.

GCP has idle virtual machines, and it offers discounts to ensure they are in use, similar to:
• a hotel that offers rooms at a discount to avoid vacant rooms
• an airline that offers seats at a discount to fill vacant seats

Preemptible VMs are good for:
• apps that are fault tolerant
• workloads that are not time- or availability-sensitive
• workloads that can resume or are okay restarting
• batch and scientific processing (a common use)

Preemptible VM conditions:
• Compute Engine might stop preemptible instances at any time due to system events
• the probability of a VM being stopped is low, but varies by time of day and region
• Compute Engine always stops preemptible instances after they run for 24 hours
• pVMs are a finite resource and might not always be available
• you cannot live-migrate from a pVM to a regular instance
• not covered by the Compute Engine SLA
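The trade-off between the two BigQuery models can be sketched as a toy break-even check. Both prices below are placeholder assumptions, not current BigQuery list prices:

```python
# Illustrative break-even between BigQuery on-demand and flat-rate pricing.
# Both figures are placeholder assumptions, not real BigQuery prices.
ON_DEMAND_PER_TB = 5.00        # assumed $ per TB of data processed
FLAT_RATE_MONTHLY = 10_000.00  # assumed $ per month for a slot commitment

def cheaper_plan(tb_scanned_per_month):
    on_demand = tb_scanned_per_month * ON_DEMAND_PER_TB
    return "flat-rate" if FLAT_RATE_MONTHLY < on_demand else "on-demand"

print(cheaper_plan(500))   # light usage  -> on-demand ($2,500 < $10,000)
print(cheaper_plan(5000))  # heavy usage  -> flat-rate ($25,000 > $10,000)
```

This is the "high-volume or enterprise" threshold in miniature: once monthly bytes processed would cost more on-demand than the commitment, flat-rate capacity wins and also makes the bill predictable.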
You can create a shareable link or email the estimate to your organization or key stakeholders
When you create sole-tenant nodes, you are billed for all of the vCPU and memory
resources on the sole-tenant nodes, plus a sole-tenancy premium, which is 10% of the
cost of all of the underlying vCPU and memory resources
Sustained use discounts apply to this premium, but committed use discounts do not.
After you create the node, you can place VMs on that node, and these VMs run for no additional cost.
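The sole-tenant premium rule can be sketched as a toy calculation. The discount rates and the $1,000 base cost are placeholders, and the way discounts combine is simplified (this sketch assumes a committed use discount, when present, replaces the sustained use discount on the base resources):

```python
# Sketch of sole-tenant billing: all vCPU/memory on the node plus a 10%
# sole-tenancy premium. Sustained use discounts (sud) apply to the
# premium; committed use discounts (cud) never do. All rates illustrative.
PREMIUM = 0.10

def sole_tenant_bill(base_cost, sud=0.0, cud=0.0):
    # Base resources: assume a CUD takes precedence if present,
    # otherwise any SUD applies.
    base = base_cost * (1 - cud if cud else 1 - sud)
    # The 10% premium: only the sustained use discount ever applies here.
    premium = base_cost * PREMIUM * (1 - sud)
    return base + premium

print(round(sole_tenant_bill(1000, sud=0.30), 2))            # 770.0
print(round(sole_tenant_bill(1000, sud=0.30, cud=0.57), 2))  # 500.0
```

Note in the second case the premium is still $70, not $43: the committed use discount covered the node's resources but not the sole-tenancy premium.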
What is Hadoop?

Hadoop is an open-source framework for distributed processing of large data sets

Hadoop allows you to distribute:
• large datasets across many servers e.g. HDFS
• computing queries across many servers e.g. MapReduce
• various open-source big-data, distributed projects, run as components

Dataproc is a fully managed and highly scalable service for running Apache Spark, Apache Flink, Presto, and 30+ open source tools and frameworks.

Dataproc is fully-managed Hadoop as a Service

Use Dataproc for data lake modernization, ETL, and secure data science, at planet scale, fully integrated with Google Cloud, at a fraction of the cost.

Dataflow is a unified stream and batch data processing service that's serverless, fast, and cost-effective

• Stream analytics — ingest, process, and analyze fluctuating volumes of real-time data for real-time business insights
• Real-time AI — streaming events to Google Cloud's Vertex AI and TensorFlow Extended (TFX)
  • ML use cases: predictive analytics, fraud detection, real-time personalization, anomaly detection
  • supported with CI/CD for ML through Kubeflow pipelines
• IoT streaming — sensor and log data processing
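The HDFS + MapReduce split above can be illustrated with a single-machine word count. The function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not Hadoop APIs — Hadoop runs the same pattern across many servers over data stored in HDFS:

```python
# The MapReduce idea in miniature: map emits (word, 1) pairs,
# shuffle groups them by key, reduce sums each group.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big queries", "data across many servers"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"], counts["data"])  # 2 2
```

Because map and reduce operate on independent keys, each phase can be split across many machines — which is exactly what Dataproc manages for you when it runs Spark or Hadoop jobs.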
Dataflow
• Dataflow SQL — use your SQL skills to develop streaming Dataflow pipelines right from the BigQuery web UI
• Flexible Resource Scheduling (FlexRS) — advanced scheduling techniques to reduce batch processing costs
• Dataflow templates — easily share your pipelines across your organization and team
• Vertex AI Notebook integration
• Private IPs — disable public IPs and operate within the GCP network for added security
• Horizontal scaling — scales automatically
• Apache Beam — pipelines are built with the open-source Apache Beam SDK