LeNet Architecture
medium.com/@siddheshb008/lenet-5-architecture-explained-3b559cb2d52b
What is LeNet 5?
LeNet is a convolutional neural network that Yann LeCun introduced in 1989. LeNet is a
common term for LeNet-5, a simple convolutional neural network.
LeNet-5 marks the emergence of CNNs and outlines their core components. However, it
was not popular at the time because of a lack of hardware, especially GPUs (Graphics
Processing Units, specialised electronic circuits designed to rapidly manipulate memory
to accelerate the creation of images in a frame buffer intended for output to a display
device), and because alternative algorithms, such as SVMs, could achieve results similar
to or even better than those of LeNet.
Features of LeNet-5
Every convolutional layer includes three parts: convolution, pooling, and a nonlinear
activation function
Convolution is used to extract spatial features (convolution was originally called
"receptive fields")
Average pooling is used for subsampling
tanh is used as the activation function
An MLP or RBF layer is used as the last classifier
Sparse connections between layers reduce the complexity of computation
Architecture
The LeNet-5 CNN architecture has seven layers. Three convolutional layers, two
subsampling layers, and two fully connected layers make up the layer composition.
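The layer-by-layer shapes described in the sections that follow can be traced with a short sketch. This is not the network itself, only the size arithmetic, assuming valid (no-padding) convolutions and non-overlapping pooling, which is how LeNet-5 is defined:

```python
# Sketch: trace the spatial size of the activations through LeNet-5.
# Assumes valid (no-padding) convolutions and 2x2 non-overlapping pooling.

def conv_out(size, kernel, stride=1):
    """Spatial output size of a valid convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of average pooling."""
    return (size - kernel) // stride + 1

h = 32                # input: 32x32x1 grayscale image
h = conv_out(h, 5)    # C1: six 5x5 filters      -> 28x28x6
h = pool_out(h)       # S2: 2x2 average pooling  -> 14x14x6
h = conv_out(h, 5)    # C3: sixteen 5x5 filters  -> 10x10x16
h = pool_out(h)       # S4: 2x2 average pooling  -> 5x5x16
h = conv_out(h, 5)    # C5: 120 5x5 filters      -> 1x1x120
print(h)              # -> 1
```

Each convolution shrinks the spatial size by 4 (a 5x5 kernel with no padding), and each pooling step halves it.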
LeNet-5 Architecture
First Layer
A 32x32 grayscale image serves as the input for LeNet-5 and is processed by the first
convolutional layer, comprising six feature maps (filters) of size 5x5 with a stride of
one. The image's dimensions shift from 32x32x1 to 28x28x6.
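The 32x32 to 28x28 reduction comes from sliding a 5x5 window over the image with no padding. A minimal sketch of one such valid convolution (one of C1's six filters; the kernel values here are random placeholders, not learned weights):

```python
import numpy as np

# Sketch: a single valid 2D convolution, as one C1 filter would compute.
def conv2d_valid(image, kernel, stride=1):
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)  # dot product of window and filter
    return out

image = np.random.rand(32, 32)            # 32x32 grayscale input
kernel = np.random.rand(5, 5)             # one 5x5 filter (arbitrary values)
print(conv2d_valid(image, kernel).shape)  # -> (28, 28)
```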
Second Layer
Then, using a filter size of 2x2 and a stride of 2, LeNet-5 adds an average pooling
(sub-sampling) layer. The image is reduced to 14x14x6.
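Average pooling replaces each non-overlapping 2x2 block with its mean, halving the spatial resolution. A minimal sketch over one feature map:

```python
import numpy as np

# Sketch: 2x2 average pooling with stride 2, as in S2.
def avg_pool(fmap, k=2, stride=2):
    oh, ow = fmap.shape[0] // stride, fmap.shape[1] // stride
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i*stride:i*stride+k, j*stride:j*stride+k].mean()
    return out

fmap = np.random.rand(28, 28)    # one C1 feature map
print(avg_pool(fmap).shape)      # -> (14, 14)
```

(In the original paper the pooled value is additionally scaled by a trainable coefficient and passed through the activation; the plain mean above is the simplified form most descriptions use.)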
Third Layer
A second convolutional layer with 16 feature maps of size 5x5 and a stride of 1 comes
next. In this layer, each of the 16 feature maps is connected to only a subset of the six
feature maps in the layer below, as can be seen in the illustration below.
The primary goal is to break the network's symmetry while keeping the number of
connections manageable. Because of this, this layer has 1516 trainable parameters instead
of 2400, and similarly 151600 connections instead of 240000.
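These counts can be reproduced from the connection table in the original paper (Table I of LeCun et al., 1998): six C3 maps each see 3 of the S2 maps, six see 4 contiguous maps, three see 4 non-contiguous maps, and one sees all 6. Each map also has one bias:

```python
# Parameter and connection count for C3 under the sparse connection scheme.
# Connection table from the original paper: 6 maps x 3 inputs, 6 x 4,
# 3 x 4 (non-contiguous), 1 x 6.
inputs_per_map = [3]*6 + [4]*6 + [4]*3 + [6]*1

# Each input contributes a 5x5 kernel; each map has one bias.
params = sum(n_in * 5 * 5 + 1 for n_in in inputs_per_map)
connections = params * 10 * 10   # each C3 map is 10x10 spatially
print(params, connections)       # -> 1516 151600

# For comparison, a fully connected scheme (biases omitted, as in the text):
full = 16 * 6 * 5 * 5
print(full, full * 10 * 10)      # -> 2400 240000
```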
Fourth Layer
The fourth layer (S4) is once more an average pooling layer, with a filter size of 2x2
and a stride of 2. This layer works like the second layer (S2) but has 16 feature maps,
so the output will be reduced to 5x5x16.
Fifth Layer
The fifth layer (C5) is a fully connected convolutional layer with 120 feature maps,
each measuring 1x1. Each of C5's 120 units is connected to all 400 nodes (5x5x16) in
the fourth layer, S4.
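Because every C5 unit sees all 400 S4 nodes (plus a bias), its parameter count follows directly; the figure below matches the 48,120 reported in the original paper:

```python
# C5: 120 units, each connected to all 400 = 5*5*16 S4 nodes, plus one bias.
params_c5 = 120 * (5 * 5 * 16 + 1)
print(params_c5)  # -> 48120
```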
Sixth Layer
A fully connected layer (F6) with 84 units makes up the sixth layer.
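F6's parameter count follows the same fully connected pattern, with each of the 84 units connected to all 120 C5 units plus a bias:

```python
# F6: 84 units fully connected to the 120 C5 units, plus one bias per unit.
params_f6 = 84 * (120 + 1)
print(params_f6)  # -> 10164
```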
Output Layer
The last layer is the softmax output layer, with 10 possible values corresponding to the
digits 0 to 9.
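The softmax step turns the final layer's 10 scores into class probabilities; the digit with the highest probability is the prediction. A minimal sketch (the logit values here are made up for illustration):

```python
import math

# Sketch: softmax over 10 logits, one per digit 0-9.
def softmax(logits):
    m = max(logits)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [0.5, 1.2, -0.3, 2.0, 0.0, 0.1, -1.0, 0.7, 0.3, 1.5]  # made-up scores
probs = softmax(logits)
print(max(range(10), key=lambda d: probs[d]))  # predicted digit -> 3
```

(The 1998 paper actually used Euclidean RBF output units; softmax, as described here, is what virtually all modern reimplementations use.)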
Summary of LeNet-5 Architecture