PyTorch Python Tutorial | Deep Learning Using PyTorch | Image Classifier Usin...Edureka!
( ** Deep Learning Training: https://www.edureka.co/ai-deep-learning-with-tensorflow ** )
This Edureka PyTorch Tutorial (Blog: https://goo.gl/4zxMfU) will help you understand the important basics of PyTorch. It also includes a use case in which we create an image classifier with PyTorch, train it on an image dataset, and measure its accuracy.
Below are the topics covered in this tutorial:
1. What is Deep Learning?
2. What are Neural Networks?
3. Libraries available in Python
4. What is PyTorch?
5. Use-Case of PyTorch
6. Summary
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
The release of TensorFlow 2.0 comes with a significant number of improvements over its 1.x version, all with a focus on ease of usability and a better user experience. We will give an overview of what TensorFlow 2.0 is and discuss how to get started building models from scratch using TensorFlow 2.0’s high-level API, Keras. We will walk through an example step-by-step in Python of how to build an image classifier. We will then showcase how to leverage transfer learning to make building a model even easier! With transfer learning, we can use models pretrained on datasets such as ImageNet to drastically reduce the training time of our model. TensorFlow 2.0 makes this incredibly simple to do.
This document provides an introduction to machine learning, including:
- It discusses how the human brain learns to classify images and how machine learning systems are programmed to perform similar tasks.
- It provides an example of image classification using machine learning and discusses how machines are trained on sample data and then used to classify new queries.
- It outlines some common applications of machine learning in areas like banking, biomedicine, and computer/internet applications. It also discusses popular machine learning algorithms like Bayes networks, artificial neural networks, PCA, SVM classification, and K-means clustering.
Automated machine learning (AutoML) systems can find the optimal machine learning algorithm and hyperparameters for a given dataset without human intervention. AutoML addresses the skills gap in data science by allowing data scientists to build more models in less time. On average, tuning hyperparameters results in a 5-10% improvement in accuracy over default parameters. However, the best parameters vary across problems. AutoML tools like Auto-sklearn use techniques like Bayesian optimization and meta-learning to efficiently search the hyperparameter space. Auto-sklearn has won several AutoML challenges due to its ability to effectively optimize over 100 hyperparameters.
PyTorch is an open source machine learning library that provides two main features: tensor computing with strong GPU acceleration and built-in support for deep neural networks through an autodiff tape-based system. It includes packages for optimization algorithms, neural networks, multiprocessing, utilities, and computer vision tasks. PyTorch uses an imperative programming style and defines computation graphs at runtime, compared to TensorFlow which uses both static and dynamic graphs.
In this slide I answer the basic questions about machine learning like:
What is Machine Learning?
What are the types of machine learning?
How to deal with data?
How to test model performance?
Interest in Deep Learning has been growing in the past few years. With advances in software and hardware technologies, Neural Networks are making a resurgence. With interest in AI based applications growing, and companies like IBM, Google, Microsoft, NVidia investing heavily in computing and software applications, it is time to understand Deep Learning better!
In this workshop, we will discuss the basics of Neural Networks and how Deep Learning neural networks differ from conventional neural network architectures. We will review a bit of the mathematics that goes into building neural networks and understand the role of GPUs in Deep Learning. We will also get an introduction to Autoencoders, Convolutional Neural Networks, and Recurrent Neural Networks, and understand the state of the art in hardware and software architectures. Functional demos will be presented in Keras, a popular Python package with backends in Theano and TensorFlow.
This is a basic introduction to the pandas library; you can use it to teach the library as part of a machine learning introduction. These slides will help students with no coding background understand the basics of pandas.
Using AI to build AI is a promising way to give the power of AI to those who cannot afford it the way multinational corporations can. The technology is also known as Automated Machine Learning (AutoML). OneClick.ai is the first deep learning AutoML platform that makes the latest AI technology accessible to anyone, with or without an AI background. The deck gives a 30-minute overview of the recent history of AutoML and how OneClick.ai innovates on it. Check out our platform at http://www.oneclick.ai
Machine Learning: Applications, Process and Techniques - Rui Pedro Paiva
Machine learning can be applied across many domains such as business, entertainment, medicine, and software engineering. The document outlines the machine learning process which includes data collection, feature extraction, model learning, and evaluation. It also provides examples of machine learning applications in various domains, such as using decision trees to make credit decisions in business, classifying emotions in music for playlist generation in entertainment, and detecting heart murmurs from audio data in medicine.
Machine learning and its applications was a gentle introduction to machine learning presented by Dr. Ganesh Neelakanta Iyer. The presentation covered an introduction to machine learning, different types of machine learning problems including classification, regression, and clustering. It also provided examples of applications of machine learning at companies like Facebook, Google, and McDonald's. The presentation concluded with discussing the general machine learning framework and steps involved in working with machine learning problems.
Machine learning and linear regression programming - Soumya Mukherjee
Overview of AI and ML
Terminology awareness
Applications in real world
Use cases within Nokia
Types of Learning
Regression
Classification
Clustering
Linear Regression Single Variable with python
NVIDIA compute GPUs and software toolkits are key drivers behind major advancements in machine learning. Of particular interest is a technique called "deep learning", which utilizes what are known as Convolutional Neural Networks (CNNs), which have seen landslide success in computer vision and widespread adoption in a variety of fields such as autonomous vehicles, cyber security, and healthcare. This talk presents a high-level introduction to deep learning, discussing core concepts, success stories, and relevant use cases. Additionally, we provide an overview of essential frameworks and workflows for deep learning. Finally, we explore emerging domains for GPU computing such as large-scale graph analytics and in-memory databases.
https://tech.rakuten.co.jp/
This document is intended to be used with the video session I recorded, which walks through the execution. This is document no. 2 of the course "Introduction of Data Science using Python", which is a prerequisite of the Artificial Intelligence course at Ethans Tech.
Disclaimer: Some of the Images and content have been taken from Multiple online sources and this presentation is intended only for Knowledge Sharing
Here are the key calculations:
1) The probability that persons p and q are at the same hotel on a given day d is 1/100 × 1/100 × 10^-5 = 10^-9, since each person is at some hotel on a given day with probability 1/100 and, given that both are, the chance they pick the same one of the 10^5 hotels is 10^-5.
2) The probability that p and q are at the same hotel on given days d1 and d2 is (10^-9) × (10^-9) = 10^-18, since the events are independent.
NumPy is a Python library used for working with multi-dimensional arrays and matrices for scientific computing. It allows fast operations on large data sets and arrays. NumPy arrays can be created from lists or ranges of values and support element-wise operations via universal functions. NumPy is the foundation of the Python scientific computing stack and provides key features like broadcasting for efficient computations.
"Automated machine learning (AutoML) is the process of automating the end-to-end process of applying machine learning to real-world problems. In a typical machine learning application, practitioners must apply the appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods that make the dataset amenable for machine learning. Following those preprocessing steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their final machine learning model. As many of these steps are often beyond the abilities of non-experts, AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning. Automating the end-to-end process of applying machine learning offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform models that were designed by hand."
In this talk we will discuss how QuSandbox and the Model Analytics Studio can be used in the selection of machine learning models. We will also illustrate AutoML frameworks through demos and examples and show you how to get started
TensorFlow and Keras are popular deep learning frameworks. TensorFlow is an open source library for numerical computation using data flow graphs. It was developed by Google and is widely used for machine learning and deep learning. Keras is a higher-level neural network API that can run on top of TensorFlow. It focuses on user-friendliness, modularization and extensibility. Both frameworks make building and training neural networks easier through modular layers and built-in optimization algorithms.
Module 4: Model Selection and Evaluation - Sara Hooker
Delta Analytics is a 501(c)3 non-profit in the Bay Area. We believe that data is powerful, and that anybody should be able to harness it for change. Our teaching fellows partner with schools and organizations worldwide to work with students excited about the power of data to do good.
Welcome to the course! These modules will teach you the fundamental building blocks and the theory necessary to be a responsible machine learning practitioner in your own community. Each module focuses on accessible examples designed to teach you about good practices and the powerful (yet surprisingly simple) algorithms we use to model data.
To learn more about our mission or provide feedback, take a look at www.deltanalytics.org.
The document discusses setting up and using Keras and TensorFlow libraries for machine learning. It provides instructions on installing the libraries, preparing data, defining a model with sequential layers, compiling the model to configure the learning process, training the model on data, and evaluating the trained model on test data. A sample program is included that uses a fashion MNIST dataset to classify images into 10 categories using a simple sequential model.
The document discusses choosing machine learning algorithms for classification problems. It recommends first visualizing the dataset using a pair plot to understand the data structure. If there is high overlap between classes, logistic regression and decision trees may not be suitable due to high error rates. For highly overlapped data, K-nearest neighbors (KNN) is recommended as it uses Euclidean distance to find similarities between data points based on their neighborhoods. Other options for highly overlapped data include random forests or deeper decision trees, but they increase computational costs. The key is to understand the dataset nature and properties before selecting an algorithm.
Introduction to Matlab
Lecture 1:
Introduction: What is Matlab, History of Matlab, strengths, weakness
Getting familiar with the interface: Layout, Pull down menus
Creating and manipulating objects: Variables (scalars, vectors, matrices, text strings), Operators (arithmetic, relational, logical) and built-in functions
It is roughly 30 years since AI was not only a topic for science-fiction writers but also a major research field surrounded by huge hopes and investments. But the over-inflated expectations ended in a crash, followed by a period of absent funding and interest – the so-called AI winter. However, the last 3 years changed everything – again. Deep learning, a machine learning technique inspired by the human brain, successfully crushed one benchmark after another, and tech companies like Google, Facebook and Microsoft started to invest billions in AI research. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk – CEO Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking – Physicist).
What sparked this new Hype? How is Deep Learning different from previous approaches? Are the advancing AI technologies really a threat for humanity? Let’s look behind the curtain and unravel the reality. This talk will explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why "Deep Learning is probably one of the most exciting things that is happening in the computer industry” (Jen-Hsun Huang – CEO NVIDIA).
Either a new AI “winter is coming” (Ned Stark – House Stark) or this new wave of innovation might turn out to be the “last invention humans ever need to make” (Nick Bostrom – AI Philosopher). Or maybe it’s just another great technology helping humans achieve more.
Basic concepts of Deep Learning, explaining its structure and the backpropagation method, and understanding autograd in PyTorch (plus data parallelism in PyTorch).
Introduction to Pandas and Time Series Analysis [PyCon DE] - Alexander Hendorf
Most data is allocated to a period or to some point in time. We can gain a lot of insight by analyzing what happened when. The better the quality and accuracy of our data, the better our predictions can become.
Unfortunately, the data we have to deal with is often aggregated, for example on a monthly basis, but not all months are the same: they may have 28 or 31 days, four or five weekends, and so on. The data is made to fit our calendar, which was made to fit the Earth's orbit around the sun, not to please data scientists.
Dealing with periodical data can be a challenge. This talk will show how you can deal with it using Pandas.
Python is often the choice for development work in data analysis, or for data scientists whose work should be integrated into web applications or the production environment. This is particularly true for machine learning: the combination of Python's ease of learning and its libraries makes it especially well suited to building models and forecasts that connect directly to the production process.
Automated machine learning lectures given at the Advanced Course on Data Science & Machine Learning. AutoML, hyperparameter optimization, Bayesian optimization, Neural Architecture Search, Meta-learning, MAML
Distilled PyTorch tutorial. Also in text at my blog - https://towardsdatascience.com/pytorch-tutorial-distilled-95ce8781a89c
PyTorch is one of the most widely used deep learning libraries in the Python community. In this talk I will cover a basic-to-advanced guide to implementing deep learning models using PyTorch. My goal is to introduce PyTorch and show how to use it for deep learning projects.
PyTorch constructs dynamic computational graphs that allow for maximum flexibility and speed for deep learning research. Dynamic graphs are useful when the computation cannot be fully determined ahead of time, as they allow the graph to change on each iteration based on variable data. This makes PyTorch well-suited for problems with dynamic or variable sized inputs. While static graphs can optimize computation, dynamic graphs are easier to debug and create extensions for. PyTorch aims to be a simple and intuitive platform for neural network programming and research.
Introduction of PyTorch
Explains PyTorch usages by a CNN example.
Describes the PyTorch modules (torch, torch.nn, torch.optim, etc) and the usages of multi-GPU processing.
Also gives examples for Recurrent Neural Network and Transfer Learning.
This document provides an overview and tutorial for PyTorch, a popular deep learning framework developed by Facebook. It discusses what PyTorch is, how to install it, its core packages and concepts like tensors, variables, neural network modules, and optimization. The tutorial also outlines how to define neural network modules in PyTorch, build a network, and describes common layer types like convolution and linear layers. It explains key PyTorch concepts such as defining modules, building networks, and how tensors and variables are used to represent data and enable automatic differentiation for training models.
PyTorch is an open-source machine learning framework popular for flexibility and ease-of-use. It is built on Python and supports neural networks using tensors as the primary data structure. Key features include tensor computation, automatic differentiation for training networks, and dynamic graph computation. PyTorch is used for applications like computer vision, natural language processing, and research due to its flexibility and Python integration. Major companies like Facebook, Uber, and Salesforce use PyTorch for machine learning tasks.
PyTorch is an open source machine learning library based on Python and Tensors. It allows for easy GPU acceleration and automatic differentiation. PyTorch uses dynamic computational graphs which are built during execution, unlike static graphs in other frameworks. Tensors in PyTorch are similar to NumPy ndarrays and support operations like concatenation and reshaping. The autograd package allows for automatic differentiation to calculate gradients for training models. Datasets and DataLoaders provide easy batching of data for training. Modules are used to define neural network layers and store weights. Models are trained by calculating loss on mini-batches and calling backward to update weights through gradient descent.
The document provides an overview and agenda for an introduction to running AI workloads on PowerAI. It discusses PowerAI and how it combines popular deep learning frameworks, development tools, and accelerated IBM Power servers. It then demonstrates AI workloads using TensorFlow and PyTorch, including running an MNIST workload to classify handwritten digits using basic linear regression and convolutional neural networks in TensorFlow, and an introduction to PyTorch concepts like tensors, modules, and softmax cross entropy loss.
This document provides an agenda for an introduction to running AI workloads on PowerAI. It includes:
- An overview of IBM PowerAI and demos of AI workloads using TensorFlow and PyTorch hands-on labs.
- A demonstration of running the MNIST workload using TensorFlow to classify handwritten digits, including downloading the workload, training a basic model, and predicting classes of new images.
- An introduction to PyTorch, describing it as a flexible deep learning framework that supports dynamic computation graphs, native Python packages, and automatic differentiation.
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B... - Databricks
We all know what they say – the bigger the data, the better. But when the data gets really big, how do you mine it, and which deep learning framework should you use? This talk will survey, from a developer’s perspective, three of the most popular deep learning frameworks—TensorFlow, Keras, and PyTorch—as well as when to use their distributed implementations.
We’ll compare code samples from each framework and discuss their integration with distributed computing engines such as Apache Spark (which can handle massive amounts of data) as well as help you answer questions such as:
As a developer how do I pick the right deep learning framework?
Do I want to develop my own model or should I employ an existing one?
How do I strike a trade-off between productivity and control through low-level APIs?
What language should I choose?
In this session, we will explore how to build a deep learning application with Tensorflow, Keras, or PyTorch in under 30 minutes. After this session, you will walk away with the confidence to evaluate which framework is best for you.
Reproducible AI using MLflow and PyTorch - Databricks
Model reproducibility is becoming the next frontier for successful AI models building and deployments for both Research and Production scenarios. In this talk, we will show you how to build reproducible AI models and workflows using PyTorch and MLflow that can be shared across your teams, with traceability and speed up collaboration for AI projects.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/dec-2019-alliance-vitf-facebook
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Joseph Spisak, Product Manager at Facebook, delivers the presentation "PyTorch Deep Learning Framework: Status and Directions" at the Embedded Vision Alliance's December 2019 Vision Industry and Technology Forum. Spisak gives an update on the PyTorch deep learning framework and where it’s heading.
5. A scientific computing package that replaces NumPy to use the power of GPUs.
A deep learning research platform that provides maximum flexibility and speed.
6. A complete Python rewrite of the machine learning library Torch, which was written in Lua.
Chainer — a deep learning library, huge in the NLP community and a big inspiration to the PyTorch team.
HIPS Autograd - an automatic differentiation library that became one of the big features of PyTorch.
In need of dynamic execution.
7. January 2017
PyTorch was born 🍼
July 2017
Kaggle Data Science Bowl won using PyTorch 🎉
August 2017
PyTorch 0.2 🚢
September 2017
fast.ai switch to PyTorch 🚀
October 2017
Salesforce releases QRNN 🖖
November 2017
Uber releases Pyro 🚗
December 2017
PyTorch 0.3 release! 🛳
2017 in review
9. Killer Features
Just Python, on steroids
Dynamic computation allows flexible inputs
Best suited for research and prototyping
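To illustrate the dynamic-computation point, here is a minimal added sketch (not from the slides) of a network whose graph depends on the input at runtime; the layer size and loop condition are arbitrary choices for the example:

import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super(DynamicNet, self).__init__()
        self.layer = nn.Linear(10, 10)

    def forward(self, x):
        # The graph is rebuilt on every call, so ordinary Python
        # control flow can depend on the data itself.
        steps = int(x.abs().sum().item()) % 3 + 1
        for _ in range(steps):
            x = torch.relu(self.layer(x))
        return x

net = DynamicNet()
print(net(torch.randn(1, 10)).shape)  # torch.Size([1, 10])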
10. Summary
• PyTorch is a Python machine learning library focused on research
• Released in January 2017, used by tech companies and universities
• A dynamic and Pythonic way to do machine learning
21. Operators
import torch
x = torch.Tensor(5, 3)
# Random tensor
y = torch.rand(5, 3)
# Add
print(x + y) # or
print(torch.add(x, y))
# Matrix Multiplication
a = torch.randn(2, 3)
b = torch.randn(3, 3)
print(torch.mm(a, b))
https://pytorch.org/docs/stable/tensors.html
22. Working With NumPy
import torch
a = torch.ones(5)
print(a)  # tensor([1., 1., 1., 1., 1.])
b = a.numpy()  # Tensor to NumPy array (shares memory on CPU)
print(b)  # [1. 1. 1. 1. 1.]

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)  # NumPy array to Tensor (shares memory)
np.add(a, 1, out=a)
print(a)  # [2. 2. 2. 2. 2.]
print(b)  # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
23. Working With GPU
import torch
x = torch.Tensor(5, 3)
y = torch.rand(5, 3)
# Move the tensors to the GPU when one is available
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
x + y
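On more recent PyTorch releases, the device-agnostic torch.device / .to() idiom is the common pattern; this is an added sketch, not part of the original slides:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(5, 3).to(device)        # move an existing tensor to the chosen device
y = torch.rand(5, 3, device=device)    # or create it there directly
print((x + y).device)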
24. Summary
• A tensor is like a Rubik's cube: a multidimensional array
• Scalars, vectors, matrices and tensors are the same idea with different numbers of dimensions
• We can use torch.Tensor() to create a tensor
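As an added illustration (not from the slides), the same idea in code: tensors of increasing dimension.

import torch

scalar = torch.tensor(3.14)          # 0 dimensions
vector = torch.tensor([1.0, 2.0])    # 1 dimension
matrix = torch.rand(2, 3)            # 2 dimensions
cube   = torch.rand(2, 3, 4)         # 3 dimensions, like a Rubik's cube
print(scalar.dim(), vector.dim(), matrix.dim(), cube.dim())  # 0 1 2 3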
26. Differentiation Refresher
If y = f(x) = 2x, then dy/dx = 2.
If y = f(x1, x2, …, xn), then [dy/dx1, dy/dx2, …, dy/dxn] is the gradient of y w.r.t. [x1, x2, …, xn].
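A small autograd check of the refresher (illustrative, using the Variable API from the later slides): for y = 2*x1 + x2**2 the gradient should be [2, 2*x2].
import torch
from torch.autograd import Variable
x1 = Variable(torch.FloatTensor([3.0]), requires_grad=True)
x2 = Variable(torch.FloatTensor([4.0]), requires_grad=True)
y = 2 * x1 + x2 ** 2
y.backward()
print(x1.grad)   # dy/dx1 = 2
print(x2.grad)   # dy/dx2 = 2 * x2 = 8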
27. Autograd
• Calculus chain rule on steroids
• Derivative of a function within a function
• Complex functions can be written as many compositions of simple functions
• Provides auto differentiation on all tensor operations
• Lives in the torch.autograd module
28. Variable
• Crucial data structure, needed for automatic differentiation
• A wrapper around Tensor
• Records a reference to the creator function
29. Variable
import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([11.2]),
requires_grad=True)
y = 2 * x
print(x)
# tensor([11.2000], requires_grad=True)
print(y)
# tensor([22.4000], grad_fn=<MulBackward>)
print(x.data) # tensor([11.2000])
print(y.data) # tensor([22.4000])
print(x.grad_fn) # None
print(y.grad_fn)
# <MulBackward object at 0x10ae58e48>
y.backward() # Calculates the gradients
print(x.grad) # tensor([2.])
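Since PyTorch 0.4 the Variable wrapper has been merged into Tensor, so the same example can be written without the import; a minimal equivalent:
import torch
x = torch.tensor([11.2], requires_grad=True)
y = 2 * x
y.backward()
print(x.grad)   # tensor([2.])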
30. Summary
• Autograd provides auto differentiation on all tensor operations, inside the torch.autograd module
• Variable is a wrapper around Tensor that records a reference to the creator function
41. Access Dataset
Iterate data and train the model
for i, data in enumerate(trainloader):
data, labels = data
print(type(data)) # <class 'torch.Tensor'>
print(data.size()) # torch.Size([10, 3, 32, 32])
print(type(labels)) # <class 'torch.Tensor'>
print(labels.size()) # torch.Size([10])
# Model training happens here ...
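The loop above assumes trainloader already exists. A minimal sketch of how such a loader could be built from torchvision's CIFAR10 dataset (batch_size=10 matches the sizes printed above; the root path is illustrative):
import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.ToTensor()
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)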
43. Summary
• We can use existing datasets provided by torch and torchvision, such as CIFAR10
• A dataset is the set of training examples, an epoch is one pass through the whole dataset, a batch is a subset of the training data, and an iteration is a single pass over one batch (see the worked numbers below)
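A concrete reading of those terms (the dataset size is the standard CIFAR10 training split; the batch size matches the loader above):
num_examples = 50000            # CIFAR10 training images
batch_size = 10
iterations_per_epoch = num_examples // batch_size
print(iterations_per_epoch)     # 5000 iterations make up one epoch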
50. Summary
• A neural net is a collection of neurons related to each other, each consisting of inputs, weights, a bias and an output (see the sketch below)
• To generate an output we need to activate it using an activation function such as Sigmoid, tanh or ReLU
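A single neuron in code (illustrative, not from the slides): a weighted sum of the inputs plus a bias, passed through an activation function.
import torch

x = torch.rand(4)            # inputs
w = torch.rand(4)            # weights
b = torch.rand(1)            # bias
z = torch.dot(w, x) + b      # weighted sum plus bias
output = torch.sigmoid(z)    # activation (tanh or ReLU would work too)
print(output)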
55. The Iris
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable
from data import iris
Import things
56. The Iris
class IrisNet(nn.Module):
def "__init"__(self, input_size,
hidden1_size, hidden2_size, num_classes):
super(IrisNet, self)."__init"__()
self.layer1 = nn.Linear(input_size, hidden1_size)
self.act1 = nn.ReLU()
self.layer2 = nn.Linear(hidden1_size, hidden2_size)
self.act2 = nn.ReLU()
self.layer3 = nn.Linear(hidden2_size, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.act1(out)
out = self.layer2(out)
out = self.act2(out)
out = self.layer3(out)
return out
model = IrisNet(4, 100, 50, 3)
print(model)
Create Module and Instance
59. Loss Function
In PyTorch
• L1Loss
• MSELoss
• CrossEntropyLoss
• BCELoss
• SoftMarginLoss
• More: https://pytorch.org/docs/stable/nn.html?#loss-functions
60. Loss Function
CrossEntropyLoss
Measures the performance of a classification model
whose output is a probability value between 0 and 1.
Prediction: 🍎 0.02  🍌 0.88  🍍 0.1
Actual: 🍌
Loss score: 🍎 0.98  🍌 0.12  🍍 0.9
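One practical note: PyTorch's nn.CrossEntropyLoss takes raw scores (logits) and applies log-softmax internally, so the model output is passed in unnormalized. A minimal example with three classes (the numbers are illustrative):
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[0.5, 3.0, 0.8]])   # one sample, three classes
target = torch.tensor([1])                 # correct class is index 1 (🍌)
print(criterion(logits, target))           # small loss: confident and correct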
63. Back to Iris
Let’s Train The Neural Network
net = IrisNet(4, 100, 50, 3)
# Loss Function
criterion = nn.CrossEntropyLoss()
# Optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(net.parameters(),
lr=learning_rate,
nesterov=True,
momentum=0.9,
dampening=0)
64. Back to Iris
Let’s Train The Neural Network
num_epochs = 500
for epoch in range(num_epochs):
train_correct = 0
train_total = 0
for i, (items, classes) in enumerate(train_loader):
# Convert torch tensor to Variable
items = Variable(items)
classes = Variable(classes)
65. Back to Iris
Let’s Train The Neural Network
net.train() # Training mode
optimizer.zero_grad() # Reset gradients from past operation
outputs = net(items) # Forward pass
loss = criterion(outputs, classes) # Calculate the loss
loss.backward() # Calculate the gradient
optimizer.step() # Adjust weight based on gradients
train_total += classes.size(0)
_, predicted = torch.max(outputs.data, 1)
train_correct += (predicted == classes.data).sum()
print('Epoch %d/%d, Iteration %d/%d, Loss: %.4f'
%(epoch+1, num_epochs, i+1,
len(train_ds) // batch_size, loss.data[0]))
66. Back to Iris
Let’s Train The Neural Network
net.eval() # Put the network into evaluation mode
train_loss.append(loss.data[0])
train_accuracy.append((100 * train_correct / train_total))
# Record the testing loss
test_items = torch.FloatTensor(test_ds.data.values[:, 0:4])
test_classes = torch.LongTensor(test_ds.data.values[:, 4])
outputs = net(Variable(test_items))
loss = criterion(outputs, Variable(test_classes))
test_loss.append(loss.data[0])
# Record the testing accuracy
_, predicted = torch.max(outputs.data, 1)
total = test_classes.size(0)
correct = (predicted == test_classes).sum()
test_accuracy.append((100 * correct / total))
67. Back to Iris
Let’s Train The Neural Network
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable
from data import iris
# Create the module
class IrisNet(nn.Module):
def "__init"__(self, input_size, hidden1_size, hidden2_size, num_classes):
super(IrisNet, self)."__init"__()
self.layer1 = nn.Linear(input_size, hidden1_size)
self.act1 = nn.ReLU()
self.layer2 = nn.Linear(hidden1_size, hidden2_size)
self.act2 = nn.ReLU()
self.layer3 = nn.Linear(hidden2_size, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.act1(out)
out = self.layer2(out)
out = self.act2(out)
out = self.layer3(out)
return out
# Create a model instance
model = IrisNet(4, 100, 50, 3)
print(model)
# Create the DataLoader
batch_size = 60
iris_data_file = 'data/iris.data.txt'
train_ds, test_ds = iris.get_datasets(iris_data_file)
print('# instances in training set: ', len(train_ds))
print('# instances in testing/validation set: ', len(test_ds))
train_loader = torch.utils.data.DataLoader(dataset=train_ds, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_ds, batch_size=batch_size, shuffle=True)
# Model
net = IrisNet(4, 100, 50, 3)
# Loss Function
criterion = nn.CrossEntropyLoss()
# Optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(net.parameters(),
lr=learning_rate,
nesterov=True,
momentum=0.9,
dampening=0)
# Training iteration
num_epochs = 500
train_loss = []
test_loss = []
train_accuracy = []
test_accuracy = []
for epoch in range(num_epochs):
train_correct = 0
train_total = 0
for i, (items, classes) in enumerate(train_loader):
# Convert torch tensor to Variable
items = Variable(items)
classes = Variable(classes)
net.train() # Training mode
optimizer.zero_grad() # Reset gradients from past operation
outputs = net(items) # Forward pass
loss = criterion(outputs, classes) # Calculate the loss
loss.backward() # Calculate the gradient
optimizer.step() # Adjust weight/parameter based on gradients
train_total += classes.size(0)
_, predicted = torch.max(outputs.data, 1)
train_correct += (predicted == classes.data).sum()
print('Epoch %d/%d, Iteration %d/%d, Loss: %.4f'
%(epoch+1, num_epochs, i+1, len(train_ds) // batch_size, loss.data[0]))
net.eval() # Put the network into evaluation mode
train_loss.append(loss.data[0])
train_accuracy.append((100 * train_correct / train_total))
# Record the testing loss
test_items = torch.FloatTensor(test_ds.data.values[:, 0:4])
test_classes = torch.LongTensor(test_ds.data.values[:, 4])
outputs = net(Variable(test_items))
loss = criterion(outputs, Variable(test_classes))
test_loss.append(loss.data[0])
# Record the testing accuracy
_, predicted = torch.max(outputs.data, 1)
total = test_classes.size(0)
correct = (predicted == test_classes).sum()
test_accuracy.append((100 * correct / total))
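Once training is finished, the trained net can be used for a single prediction; a possible sketch (the measurements are illustrative and assume the usual Iris column order):
net.eval()
sample = torch.FloatTensor([[5.1, 3.5, 1.4, 0.2]])   # one flower's four features
outputs = net(Variable(sample))
_, predicted = torch.max(outputs.data, 1)
print(predicted)   # index of the predicted class (0, 1 or 2)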
69. Summary
• Created a feed-forward neural network to predict the type of a flower
• Started by reading the dataset and building the DataLoader
• Chose a loss function and an optimizer
• Trained and evaluated the model