The First Artificial Neuron

The first Artificial Neuron: In 1943, the artificial neuron was introduced by neurophysiologist Warren McCulloch and mathematician Walter Pitts. They published their work as McCulloch, W.S., Pitts, W. "A logical calculus of the ideas immanent in nervous activity." Bulletin of Mathematical Biophysics 5, 115–133 (1943). https://doi.org/10.1007/BF02478259. Read the full paper at (https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf). They showed that these simple neurons can perform small logical operations such as the OR, NOT, and AND gates. Configured as an AND gate, such a neuron fires only when both of its inputs are active.
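A minimal sketch of a McCulloch-Pitts unit (the weights and thresholds below are illustrative choices, not taken from the paper) shows how the OR, NOT, and AND gates fall out of a simple threshold rule:

In [ ]: # A McCulloch-Pitts neuron: fires (outputs 1) when the weighted
# sum of its inputs reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND: fires only when both inputs are active (threshold 2)
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=2)
# OR: fires when at least one input is active (threshold 1)
OR = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=1)
# NOT: an inhibitory (negative) weight, fires only on an inactive input
NOT = lambda x: mp_neuron([x], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), " NOT 1:", NOT(1))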
Deep Learning classification:

(1) ANN --> Artificial Neural Network. ANNs work on all kinds of tabular datasets (spreadsheet-style data). Any regression or classification task of machine learning can be solved with an ANN.

(2) CNN --> Convolutional Neural Network. CNNs work on computer-vision data such as images and videos, and the CNN architecture performs well on these kinds of data. Some applications of CNNs are:
(a) Image classification --> CNN, transfer learning
(b) Object detection --> RCNN, Fast RCNN, Faster RCNN, SSD, YOLO, Detectron
(c) Segmentation
(d) Tracking
(e) GAN --> Generative Adversarial Network

(3) RNN --> Recurrent Neural Network. RNNs work whenever the input is text, a time series, or other sequential data such as audio. Some techniques used in the RNN family are: RNN, LSTM RNN, Bidirectional LSTM RNN, Encoder-Decoder, Transformers, BERT, GPT-1, GPT-2, GPT-3.

NLP (Natural Language Processing) is a major application area of RNNs. NLP --> Text --> Vector: Bag of Words (BOW), TF-IDF, word2vec, word embeddings; a small sketch of the first two follows below.
Why Deep Learning?

Computers have long had techniques for recognizing features inside images, but the results weren't always great. Computer vision has been a main beneficiary of deep learning: computer vision using deep learning now rivals humans on many image-recognition tasks. Facebook has had great success with identifying faces in photographs by using deep learning. It's not just a marginal improvement, but a game changer: "Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera."

Speech recognition is another area that has felt deep learning's impact, since spoken languages are vast and ambiguous. Baidu, one of the leading search engines in China, has developed a voice-recognition system that is faster and more accurate than humans at producing text on a mobile phone, in both English and Mandarin. What is particularly fascinating is that generalizing across the two languages didn't require much additional design effort: "Historically, people viewed Chinese and English as two vastly different languages, and so there was a need to design very different features," says Andrew Ng, chief scientist at Baidu. "The learning algorithms are now so general that you can just learn."

Google is now using deep learning to manage the energy at the company's data centers. They've cut their energy needs for cooling by 40%, which translates to about a 15% improvement in power-usage efficiency for the company and hundreds of millions of dollars in savings.
To understand more about ANNs (Artificial Neural Networks), please visit the TensorFlow Playground (https://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=regplane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.41926&showTestData) and play with the neural network.

Installation of TensorFlow, Keras, PyTorch & Basics of Google Colab
TensorFlow: If you are using Google Colab, you don't have to install TensorFlow separately; Colab already has TensorFlow pre-installed, and you only have to import it: import tensorflow as tf. But if you are using your local system, for example a Jupyter notebook, then you have to install it: first create a virtual environment, activate it, and run pip install tensorflow in your Anaconda Prompt; this installs the latest version of TensorFlow for you. To check the version of TensorFlow, use tf.__version__, and for Keras, tf.keras.__version__.

PyTorch: For installing PyTorch, please visit the PyTorch website (https://pytorch.org).

Activate GPU on Colab: To activate the GPU on Colab, go to Runtime above and change the runtime type to GPU. To check your GPU, just run !nvidia-smi.
## Deep Learning Frameworks

Deep learning is made accessible by a number of open-source projects. Some of the most popular technologies include, but are not limited to, Deeplearning4j (DL4J), Theano, Torch, TensorFlow, and Caffe. The deciding factors on which one to use are the tech stack it targets and whether it is low-level, academic, or application focused. Here's an overview of each:

DL4J: JVM-based; distributed; integrates with Hadoop and Spark
Theano: Very popular in academia; fairly low-level; interfaced with via Python and NumPy
Torch: Lua-based; in-house versions used by Facebook and Twitter; contains pretrained models
TensorFlow: Google-written successor to Theano; interfaced with via Python and NumPy; highly parallel; can be somewhat slow for certain problem sets
Caffe: Not general-purpose; focuses on machine-vision problems; implemented in C++ and very fast; not easily extensible; has a Python interface

pip install tensorflow==2.0.0 (to run from the Anaconda Prompt)
!pip install tensorflow==2.0.0 (to run from a Jupyter notebook)
Both TensorFlow and Keras had been out for about four years at that point (Keras was released in March 2015, and TensorFlow in November of the same year). With the rapid development of deep learning over that period, some problems of TensorFlow 1.x and Keras became apparent:

Using TensorFlow means programming static graphs, which is difficult and inconvenient for programmers who are used to imperative programming.
The TensorFlow API is powerful and flexible, but it is complex, confusing, and difficult to use.
The Keras API is productive and easy to use, but lacks flexibility for research.

See the official docs for DETAILED INSTALLATION STEPS for TensorFlow 2: (https://www.tensorflow.org/install).
In [ ]: # Verify installation
import tensorflow as tf

In [ ]: print(f"Tensorflow Version: {tf.__version__}")
print(f"Keras Version: {tf.keras.__version__}")

Tensorflow Version: 2.5.0
Keras Version: 2.5.0

TensorFlow 2.0 is a combined design of TensorFlow 1.x and Keras. Considering user feedback and framework development over the past four years, it largely solves the above problems and will become the machine-learning platform of the future. TensorFlow 2.0 is built on the following core ideas:

The coding is more pythonic, so that users get results immediately, as if they were programming in NumPy.
It retains the characteristics of static graphs (for performance, distributed execution, and production deployment), which makes TensorFlow fast, scalable, and ready for production.
It uses Keras as the high-level API for deep learning, making TensorFlow easy to use and efficient.
The framework offers both high-level features (easy to use and efficient, but not flexible) and low-level features (powerful and scalable, not easy to use, but very flexible).

Eager execution is on by default in TensorFlow 2.0 and needs no special setup.
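As a quick demonstration of eager execution (a minimal sketch; the tensor values are arbitrary), operations return concrete results immediately instead of first building a static graph:

In [ ]: # With eager execution, ops run immediately and return concrete tensors
import tensorflow as tf  # already imported above; repeated so the cell stands alone

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
print(tf.matmul(a, b))           # the product is available right away
print(tf.executing_eagerly())    # True by default in TensorFlow 2.x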
whether a CPU or GPU is in use GPU/CPU Check In [ ]: tf.config.list_physical_devices('GPU') In [ ]:
tf.config.list_physical_devices('CPU') In [ ]: CheckList = ["GPU", "CPU"] for device in CheckList: out_ =
tf.config.list_physical_devices(device) if len(out_) > 0: print(f"{device} is available") print("details\
n",out_) else: print(f"{device} not available") In [ ]: Out[3]:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] Out[4]:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')] GPU is available details
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] CPU is available details
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')] ANN: Artificial Neural Networks
ANNs are inspired by the biological neurons found in the cerebral cortex of our brain. The cerebral cortex (plural: cortices), also known as the cerebral mantle, is the outer layer of neural tissue of the cerebrum of the brain in humans and other mammals. In the diagram above we can see the neurons of the human brain; it is these biological neurons that the units of an ANN resemble.
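To make the connection concrete, here is a minimal sketch of an ANN built with the Keras API for a tabular binary-classification task (the architecture and the synthetic data are illustrative, not from the notes):

In [ ]: # A tiny fully connected ANN on synthetic tabular data
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4)               # 100 rows, 4 feature columns
y = (X.sum(axis=1) > 2.0).astype(int)    # synthetic binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of class 1
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3]))              # predicted probabilities for 3 rows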
