Large Scale Deep Learning with TensorFlow (Jen Aman)
Large-scale deep learning with TensorFlow allows storing and performing computation on large datasets to develop computer systems that can understand data. Deep learning models like neural networks are loosely based on what is known about the brain and become more powerful with more data, larger models, and more computation. At Google, deep learning is being applied across many products and areas, from speech recognition to image understanding to machine translation. TensorFlow provides an open-source software library for machine learning that has been widely adopted both internally at Google and externally.
When it comes to large-scale data processing and machine learning, Apache Spark is without doubt one of the top battle-tested frameworks for handling batch or streaming workloads. Its ease of use, built-in machine learning modules, and multi-language support make it a very attractive choice for data practitioners. However, bootstrapping and getting off the ground can be difficult for many teams unless they use a Spark cluster that is already pre-provisioned and offered as a managed service in the cloud. While that is a very attractive way to get going, in the long run it can be a very expensive option if it is not well managed.
As an alternative to this approach, our team has been exploring running Spark and all our machine learning workloads and pipelines as containerized Docker packages on Kubernetes. This provides an infrastructure-agnostic abstraction layer for us, and as a result it improves our operational efficiency and reduces our overall compute cost. Most importantly, we can easily target our Spark workload deployment to run on any major cloud or on-prem infrastructure (with Kubernetes as the common denominator) by just modifying a few configurations.
In this talk, we will walk you through the process our team follows to make it easy to run a production deployment of our machine learning workloads and pipelines on Kubernetes, which seamlessly allows us to port our implementation from a local Kubernetes setup on a laptop during development to either an on-prem or cloud Kubernetes environment.
An introduction to the new TensorFlow 2.x and the Coral AI Edge TPU hardware. The presentation introduces TensorFlow's main features, such as the Sequential and Functional APIs, mobile support with TensorFlow Lite, web support with TensorFlow.js, and Google Cloud support with TFX.
In addition, the presentation introduces the new Edge TPU architecture from Coral AI, including its main hardware features and a description of the compilation flow.
Lock in your calendars for October because GDSC GTBIT is about to drop the Google Cloud Jams bomb, and guess what? You're on the VIP list! 🌟🥳 Join us for an electrifying session hosted by tech virtuoso Mukal himself, and upskill your learning journey using the Google Skill Boost Portal to conquer those certifications!
[Giovanni Galloro] How to use machine learning on Google Cloud Platform (Meetup Data Science Roma)
This document provides an overview of machine learning capabilities on Google Cloud Platform. It discusses how machine learning is used across Google products to improve search ranking and more. It then summarizes the main machine learning capabilities available on GCP, including calling pre-trained models through APIs, building and training custom models on Cloud ML Engine, and using AutoML to build models with little machine learning expertise. The document also briefly introduces upcoming capabilities like Kubeflow for portable machine learning pipelines and AI Hub for discovering and sharing pre-built machine learning solutions.
These slides are made for the 2013 DevFest talks. It covers the main blocks of Google cloud platform: App engine, Compute Engine, storage options and more.
This document provides information about Marwa Ayad Mohamed and her presentation on machine learning with Google tools. It discusses artificial intelligence, machine learning, deep learning, and how these concepts are used for applications like image recognition, object recognition, smart email reply, voice recognition, self-driving cars, and more. It then describes TensorFlow, a popular machine learning library developed by Google, how the programming model works, and provides steps for installing TensorFlow and running demos like image recognition on Windows systems. Contact information is also included at the end.
This document provides an overview of using TensorFlow and Quarkus to build intelligent applications that serve machine learning models. It begins with an introduction and agenda. It then discusses TensorFlow and how it can be used to build and train machine learning models. It demonstrates how a TensorFlow model can be served using Quarkus and consumed via HTTP requests. The technical benefits of serving models with Quarkus are described. Finally, use cases, additional resources, and a Q&A section are outlined.
The PPT contains the following content:
1. What is Google Cloud Study Jam
2. What is Cloud Computing
3. Fundamentals of cloud computing
4. What is Generative AI
5. Fundamentals of Generative AI
6. Brief overview of Google Cloud Study Jam.
7. Networking Session.
Scale with a smile with Google Cloud Platform at DevConTLV (June 2014) (Ido Green)
What is new and hot on Google Cloud?
How can you work like a pro with some (or all) of the new APIs and services? Here are some good starting points to follow.
2 day Deep Learning Workshop at Karunya - Session 2 (Rajagopal A)
The document discusses strategies for succeeding in an AI journey using Keras deep learning on cloud. It recommends starting with TensorFlow 2.0 and Keras in Google Colab for easy prototyping without setup. The strategy includes iterating on models, performing deep distributed training using cloud GPUs/TPUs, and serving models online/offline. Keras supports multiple backends like TensorFlow, CNTK, and MXNet to avoid lock-in to a single ecosystem.
The document provides an overview of machine learning and artificial intelligence concepts. It discusses:
1. The machine learning pipeline, including data collection, preprocessing, model training and validation, and deployment. Common machine learning algorithms like decision trees, neural networks, and clustering are also introduced.
2. How artificial intelligence has been adopted across different business domains to automate tasks, gain insights from data, and improve customer experiences. Some challenges to AI adoption are also outlined.
3. The impact of AI on society and the workplace. While AI is predicted to help humans solve problems, some people remain wary of technologies like home health diagnostics or AI-powered education. Responsible development of explainable AI is important.
The content was modified from Google Content Group
Eric ShangKuan([email protected])
---
TensorFlow Lite guide (for mobile & IoT)
TensorFlow Lite is a set of tools to help developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and small binary size.
TensorFlow Lite consists of two main components:
The TensorFlow Lite interpreter:
- which runs specially optimized models on many different hardware types, like mobile phones, embedded Linux devices, and microcontrollers.
The TensorFlow Lite converter:
- which converts TensorFlow models into an efficient form for use by the interpreter, and can introduce optimizations to improve binary size and performance.
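As a rough sketch of the converter step described above (assuming TensorFlow 2.x is installed; the model here is a trivial stand-in, not a real trained network):

```python
import tensorflow as tf

# A trivial Keras model standing in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# The TensorFlow Lite converter produces a compact flatbuffer
# that the TF Lite interpreter can execute on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimizations
tflite_model = converter.convert()

# The resulting bytes are what you ship to the mobile or embedded device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The interpreter then loads `model.tflite` on the device and runs inference with low latency and a small binary footprint.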
---
Event: PyLadies TensorFlow All-Around
Date: Sep 25, 2019
Event link: https://www.meetup.com/PyLadies-Berlin/events/264205538/
LinkedIn: http://linkedin.com/in/mia-chang/
Machine learning at scale with Google Cloud Platform (Matthias Feys)
Machine Learning typically involves big datasets and lots of model iterations. This presentation shows how to use GCP to speed up that process with ML Engine and Dataflow. The focus of the presentation is on tooling not on models or business cases.
Machine learning in the wild deployment (Birger Moell)
This document provides an overview of machine learning concepts and deployment strategies. It discusses using Flask to host trained machine learning models, building an application with a trained Keras model, and testing the model using CURL requests. It also mentions Docker for deploying models as standalone applications and Kubernetes for automating container deployment and management. The document covers TensorFlow.js for loading pre-trained models in the browser and training models, as well as cloud platforms like AWS, Google, and Azure.
ML Platform Q1 Meetup: Airbnb's End-to-End Machine Learning Infrastructure (Fei Chen)
ML platform meetups are quarterly meetups, where we discuss and share advanced technology on machine learning infrastructure. Companies involved include Airbnb, Databricks, Facebook, Google, LinkedIn, Netflix, Pinterest, Twitter, and Uber.
Hadoop training in Mumbai at Asterix Solution is designed to scale from single servers to thousands of machines, each offering local computation and storage. While storage costs have fallen steadily, processing speeds have not kept pace, so loading and processing very large datasets remains a major challenge, and Hadoop emerged as a solution to this problem.
http://www.asterixsolution.com/big-data-hadoop-training-in-mumbai.html
Today we’re seeing revolutionary changes in hardware and software that are democratizing machine learning (ML) and making it accessible to any developer or data scientist. Whether you’re new to ML or you’re already an expert, Google Cloud has a variety of tools to help you. Learn the options available and how they support the full machine learning lifecycle for both realtime and batch data.
Cloud Machine Learning can help make sense of unstructured data, which accounts for 90% of enterprise data. It provides a fully managed machine learning service to train models using TensorFlow and automatically maximize predictive accuracy with hyperparameter tuning. Key benefits include scalable training and prediction infrastructure, integrated tools like Cloud Datalab for exploring data and developing models, and pay-as-you-go pricing.
MongoDB World 2018: Building Intelligent Apps with MongoDB & Google Cloud (MongoDB)
Building intelligent apps involves combining real-time analytics, machine learning, and artificial intelligence to provide personalized recommendations and automate tasks for customers. Developers can use MongoDB and Google Cloud to build intelligent apps in 3 steps: 1) create a base ecommerce app, 2) add a recommendation engine using machine learning, and 3) enable shopping via chat with artificial intelligence. This brings data scientists and developers together to create applications that understand and assist customers.
This document introduces Google Cloud Platform and its products and services. It provides an overview of compute, storage, database, machine learning, and other tools available in GCP. It also describes resources for learning about GCP, including hands-on labs, online courses, certifications, grants for education and startups, and free trials. The presentation aims to explain what GCP is and how users can leverage its scalable infrastructure and machine learning capabilities.
This document introduces Google Cloud Platform and provides an overview of its products and services. It describes how GCP allows users to build and host applications, store and analyze data, and leverage Google's computing infrastructure. Key products highlighted include Compute Engine, App Engine, Kubernetes Engine, Cloud Storage, Cloud Firestore, and Google Cloud's Machine Learning APIs. The document also lists various educational resources for learning GCP, such as Qwiklabs, Coursera courses, certifications, study jams, and startup programs.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-google-keynote
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jeff Dean, Senior Fellow at Google, presents the "Large-Scale Deep Learning for Building Intelligent Computer Systems" keynote at the May 2016 Embedded Vision Summit.
Over the past few years, Google has built two generations of large-scale computer systems for training neural networks, and then applied these systems to a wide variety of research problems that have traditionally been very difficult for computers. Google has released its second generation system, TensorFlow, as an open source project, and is now collaborating with a growing community on improving and extending its functionality. Using TensorFlow, Google's research group has made significant improvements in the state-of-the-art in many areas, and dozens of different groups at Google use it to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, language translation, and many other tasks.
In this talk, Jeff highlights some of the ways that Google trains large models quickly on large datasets, and discusses different approaches for deploying machine learning models in environments ranging from large datacenters to mobile devices. He then discusses ways in which Google has applied this work to a variety of problems in Google's products, usually in close collaboration with other teams. This talk describes joint work with many people at Google.
Kubeflow: portable and scalable machine learning using Jupyterhub and Kuberne... (Akash Tandon)
ML solutions in production start from data ingestion and extend up to the actual deployment step. We want this workflow to be scalable, portable, and simple. Containers and Kubernetes are great at the former two, but not the latter if you aren't a DevOps practitioner. We'll explore how you can leverage the Kubeflow project to deploy best-of-breed open-source systems for ML to diverse infrastructures.
A workshop to demonstrate how we can apply agile and continuous delivery principles to continuously deliver value in machine learning and data science projects.
Code: https://github.com/davified/ci-workshop-app
5. What is AI?
● AI stands for Artificial Intelligence.
● Teaching computers to think like us seemed like a good idea at the time. Spoiler alert: They're getting pretty good at it!
● The term "Artificial Intelligence" was coined in 1956, long before the internet.
6. What is TensorFlow?
● Imagine teaching your computer to think (or at least pretend to).
● It's the secret sauce behind everything cool Google does (like suggesting videos of cats).
● TensorFlow has over 160,000 stars on GitHub. That's more stars than your last science project!
7. What is TensorFlow?
1. Open-source machine learning framework developed by Google.
2. Supports deep learning, neural networks, and a wide range of machine learning tasks.
3. Extensive community and industry support.
4. Flexibility to run on various platforms (CPUs, GPUs, TPUs).
5. Comprehensive ecosystem with tools for model building, training, and deployment.
9. Why Google Cloud for TensorFlow?
● Easily scale from a single machine to distributed systems.
● Seamless integration with Google Cloud services like BigQuery, Cloud Storage, and AI platforms. (Lego blocks)
● Pay-as-you-go model with options to reduce costs.
● Fully managed services for TensorFlow simplify deployment and scaling.
12. TensorFlow Architecture for Model Building
1. Data preprocessing: prepare the data that will be fed to the model you are about to build.
2. Model building: create your model using the appropriate algorithms.
3. Model training & evaluation: train the model and evaluate it to check whether it generates accurate results.
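The three steps above are framework-agnostic. As an illustrative sketch (plain Python, no TensorFlow required; the tiny dataset is made up), here is the same preprocess / build / train-and-evaluate cycle for a one-feature linear model:

```python
# Minimal preprocess -> build -> train & evaluate cycle for a
# one-feature linear model, fit by gradient descent.

# 1. Data preprocessing: standardize the feature (zero mean, unit variance).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.1, 4.9, 7.2, 8.8, 11.1]   # roughly y = 2x + 1
mean = sum(xs) / len(xs)
std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
xn = [(x - mean) / std for x in xs]

# 2. Model building: the model is y_hat = w * x + b.
w, b = 0.0, 0.0

# 3. Model training: gradient descent on mean squared error.
lr = 0.1
for _ in range(500):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xn, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xn, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# Evaluation: mean squared error on the data (small if the fit is good).
mse = sum((w * x + b - y) ** 2 for x, y in zip(xn, ys)) / len(xs)
```

A real TensorFlow workflow replaces each step with library machinery (tf.data pipelines, Keras layers, `model.fit`/`model.evaluate`), but the cycle is the same.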
14. Using TensorFlow: Local vs. Cloud

Aspect               | TensorFlow Locally                          | TensorFlow on Google Cloud
Setup & Installation | Install via pip on your local machine.      | Set up using Google Cloud AI Platform or Vertex AI.
Data Handling        | Load and preprocess datasets locally.       | Store and preprocess data using Google Cloud Storage.
Model Training       | Train models using local CPU/GPU resources. | Utilize Google Cloud GPUs/TPUs for faster, scalable training.
Scalability          | Limited to local hardware capabilities.     | Scalable with cloud resources, including distributed training.
Flexibility          | Ideal for small projects or development.    | Best for large-scale, production-level models.
Cost                 | Free for local tasks; no cloud costs.       | Pay-as-you-go based on cloud resource usage.
15. Setting Up TensorFlow on Google Cloud
● Create a Google Cloud Account.
● Set up a new project
● Install Google Cloud SDK.
● Launch a virtual machine (VM).
It can also be used with Python and JavaScript.
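A hypothetical CLI walk-through of the setup steps above (the project ID, VM name, zone, and machine type are made-up placeholders; exact flags may vary by SDK version):

```shell
# Authenticate and create/select a project (IDs are placeholders).
gcloud auth login
gcloud projects create my-tf-project
gcloud config set project my-tf-project

# Launch a virtual machine to run TensorFlow workloads.
gcloud compute instances create tf-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-4
```
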
16. Training Models on Google Cloud AI Platform
● AI Platform
○ It's like a gym for your AI models. Let's make them strong and smart!
● Steps to Train a Model
○ Upload your dataset—think of it as feeding your AI.
○ Write a training script—this is like planning your AI's workout.
○ Configure the training job—select your machine (CPU? GPU? It's like choosing your AI's treadmill).
○ Start the training job and cheer on your AI—go, little buddy, go!
17. Deploying TensorFlow Models with AI Platform
Because keeping your AI to yourself is like making an amazing cake and not sharing it. Deploy it so the world can enjoy!
● Steps to Deploy
○ Export your trained model—it's like baking that cake.
○ Upload it to Google Cloud Storage—put it on display.
○ Deploy on AI Platform—now everyone can have a slice!
○ Use REST or gRPC APIs to interact with the deployed model.
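For the last step, TensorFlow Serving-style REST endpoints (including AI Platform's predict API) accept a JSON body with an `instances` list. A minimal sketch of building such a request payload with the standard library (the endpoint URL in the comment is a generic placeholder, not a real project):

```python
import json

# Each entry in "instances" is one input example for the deployed model.
payload = {"instances": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]}
body = json.dumps(payload)

# A real call would POST `body` with Content-Type: application/json to
# the model's predict endpoint, e.g.:
#   https://ml.googleapis.com/v1/projects/<project>/models/<model>:predict
# and the response carries a matching "predictions" list.
```
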
18. Using TensorFlow Serving for Deployment
● TensorFlow Serving
○ A flexible, high-performance serving system for machine learning models.
Imagine your AI model is a chef. TensorFlow Serving is the kitchen where the chef works 24/7.
● Deploying on Compute Engine
○ Set up TensorFlow Serving on a Compute Engine VM - make sure your kitchen is ready.
○ Serve the model via REST or gRPC APIs for real-time predictions—let the chef cook and serve dishes to hungry customers!
20. Integrating TensorFlow with Google Cloud Services
● BigQuery ML:
○ Train models directly in BigQuery on large datasets without data transfer.
○ It's like taking your AI to a buffet—so much data to chew on!
● Cloud Storage:
○ Store and manage training data and model artifacts in Google Cloud Storage.
○ Think of this as the fridge where you store all your ingredients (datasets and models).
● TensorFlow Extended (TFX):
○ TFX is the conveyor belt that automates your AI kitchen—keeping everything running smoothly.
21. Best Practices for TensorFlow on Google Cloud
● Optimize Model Performance:
○ Use TPUs for faster training on large models.
○ Implement hyperparameter tuning with AI Platform.
○ Use TPUs—they’re like the AI equivalent of a turbo boost!
● Cost Management:
○ Monitor resource usage and costs with Google Cloud Monitoring.
○ Use preemptible VMs for cost-effective training jobs.
○ Monitor usage to avoid a “Cloud Hangover” (aka bill shock).
● Security:
○ Keep your AI safe—like not letting your pet robot wander off on its own.
23. Real-World Use Cases
Healthcare:
AI doctors—because even doctors need a break (and a second opinion).
Finance:
AI money managers—ensuring you don’t spend all your money on AI puns.
Retail:
AI shopping assistants—because shopping is more fun with a helpful robot.
#9: Scalability:
Your models can go from 0 to 100 real quick! (No, seriously, it scales automatically.)
Integration:
Think of it like Lego blocks, but for AI. Everything fits together perfectly!
Cost-Effective:
Pay only for what you use—like buying just one slice of pizza instead of the whole pie.
Managed Services:
Google does the heavy lifting. You just sit back, sip your coffee, and watch the magic happen.
#12: removing duplicate values, feature scaling, standardization, and many other tasks.
#16: Interactive Element:
Audience Guess: "What do you think happens if you skip leg day in AI training? (Hint: Your model might trip over numbers.)"
#17: Fun Analogy:
“Deploying is like opening a food truck—let the world taste your AI creation!”
#21: Interactive Element:
"Who here has ever accidentally left something important outside in the rain? (Hint: That’s why we need security!)"
#23: Interactive Element:
"Imagine your AI is running your life. What’s the first task you’d give it? (I’d start with making breakfast.)"