
UNIVERSIDAD DE COLIMA

FACULTAD DE INGENIERÍA MECÁNICA Y ELÉCTRICA

Mechatronics Engineering

Intelligent Control

Multilayer Perceptron

8th semester, Group I
Jesús Apolinar Chávez Trujillo
Exercise 1
Exercise 2
P = [0 0 1 1;
     0 1 0 1];   % XOR input patterns (one per column)
T = [-1 1 1 -1]; % bipolar XOR targets
Q = size(P,2);   % number of training patterns

n1 = 30; % Number of neurons in the hidden layer


ep = 1; % Range of the random initial values
% Initial weights and biases, uniform in [-ep, ep]
W1 = ep*(2*rand(n1,2)-1);
b1 = ep*(2*rand(n1,1)-1);
W2 = ep*(2*rand(1,n1)-1);
b2 = ep*(2*rand-1);
alfa = 0.001; % learning rate
for Epocas = 1:10000
    sse = 0;
    for q = 1:Q
        % q = randi(Q); % uncomment for random-order (stochastic) updates
        % Forward propagation from input to output
        a1 = tansig(W1*P(:,q) + b1);
        a2(q) = tansig(W2*a1 + b2);
        % Backpropagation of the sensitivities
        e = T(q) - a2(q);
        s2 = -2*(1 - a2(q)^2)*e;
        s1 = diag(1 - a1.^2)*W2'*s2;
        % Update of synaptic weights and biases
        W2 = W2 - alfa*s2*a1';
        b2 = b2 - alfa*s2;
        W1 = W1 - alfa*s1*P(:,q)';
        b1 = b1 - alfa*s1;
        % Accumulate the squared error
        sse = sse + e^2;
    end
    % Mean squared error per epoch
    emedio(Epocas) = sse/Q;
end
figure, subplot(1,2,1), plot(emedio) % learning curve

% Check the response of the trained network on the training patterns
for q = 1:Q
    a(q) = tansig(W2*tansig(W1*P(:,q) + b1) + b2);
end
a % should be close to T = [-1 1 1 -1]

% Decision boundary
u = linspace(-2, 2, 100);
v = linspace(-2, 2, 100);
for i = 1:length(u)
    for j = 1:length(v)
        z(i,j) = tansig(W2*tansig(W1*[u(i); v(j)] + b1) + b2);
    end
end
% z must be transposed because contour(u,v,Z) expects Z(j,i) = f(u(i),v(j)),
% i.e. rows of Z indexed by v and columns by u
subplot(1,2,2), hold on, contour(u, v, z', [-0.9, 0, 0.9], 'LineWidth', 2)
axis([-0.5 1.5 -0.5 1.5])
plot(P(1,[1,4]), P(2,[1,4]), 'ro', P(1,[2,3]), P(2,[2,3]), 'bo') % targets -1 ('ro') vs +1 ('bo')
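For reference, the inner loop above implements the standard two-layer backpropagation recursion in sensitivity form (the variable names s1 and s2 appear to follow the notation of Hagan, Demuth and Beale's Neural Network Design). A sketch of the equations as reconstructed from the code, using the tansig derivative f'(n) = 1 - a^2:

\begin{aligned}
a^1 &= \tanh(W^1 p + b^1), \qquad a^2 = \tanh(W^2 a^1 + b^2) \\
s^2 &= -2\,\bigl(1-(a^2)^2\bigr)\,(t - a^2), \qquad s^1 = \operatorname{diag}\bigl(1 - a^1 \circ a^1\bigr)\,(W^2)^\top s^2 \\
W^m &\leftarrow W^m - \alpha\, s^m (a^{m-1})^\top, \qquad b^m \leftarrow b^m - \alpha\, s^m, \qquad m = 1, 2
\end{aligned}

with a^0 = p the input pattern and t the desired output.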
Exercise 3
clear all, clc
t = 0:0.1:20;        % time vector
y = 5*besselj(1,t);  % target function f(t): scaled Bessel function of the first kind
P = t;               % network input
T = y;               % desired network output
plot(P, T)
grid; xlabel('time (s)'); ylabel('output'); title('5*besselj(1,t)')
hold on;

net1=newff([0 20], [10,1], {'tansig','purelin'},'traingd');


% The first argument [0 20] defines the range of the input and initializes
% the network parameters.
% The second argument defines the structure of the network. There are two layers:
% 10 is the number of neurons in the hidden layer,
% 1 is the number of nodes in the output layer.
% Next the activation functions of the layers are defined.
% In the hidden layer there are 10 tansig functions.
% In the output layer there is 1 linear (purelin) function.
% 'traingd' defines the basic learning scheme - gradient descent.

net1.trainParam.show = 50; % The result is shown at every 50th iteration (epoch)


net1.trainParam.lr = 0.05; % Learning rate used in some gradient schemes
net1.trainParam.epochs =1000; % Max number of iterations
net1.trainParam.goal = 1e-3; % Error tolerance; stopping criterion

net1 = train(net1, P, T); % Trains the network with batch gradient descent


a1 = sim(net1, P); % output of the network trained with gradient-descent BP

net2=newff([0 20], [10,1], {'tansig','purelin'},'trainlm');


% 'trainlm' selects the Levenberg-Marquardt method

net2.trainParam.show = 50; % The result is shown at every 50th iteration (epoch)


net2.trainParam.lr = 0.005; % Learning rate used in some gradient schemes
net2.trainParam.epochs =1000; % Max number of iterations
net2.trainParam.goal = 1e-3; % Error tolerance; stopping criterion

net2 = train(net2, P, T); % Trains the network with Levenberg-Marquardt


a2 = sim(net2, P); % output of the network trained with LM

% Plot results and compare


plot(P,a1,'r',P,a2,'g--o')
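To make the comparison quantitative, one can also report the mean squared error of each trained network on the training data. A minimal sketch (the legend call and the variable names mse_gd and mse_lm are illustrative additions, not part of the original script):

legend('target', 'traingd (gradient descent)', 'trainlm (Levenberg-Marquardt)')
mse_gd = mean((T - a1).^2)   % MSE of the gradient-descent network
mse_lm = mean((T - a2).^2)   % MSE of the Levenberg-Marquardt network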

The network has one hidden layer of 10 neurons and one output layer with a single neuron.
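Note that newff is deprecated in recent MATLAB releases. A minimal sketch of the same architecture using the newer feedforwardnet interface (assuming a current Deep Learning Toolbox; this is an equivalent reconstruction, not part of the original script):

net = feedforwardnet(10, 'trainlm'); % one hidden layer with 10 tansig neurons, linear output
net.trainParam.epochs = 1000;        % max number of iterations
net.trainParam.goal = 1e-3;          % error tolerance; stopping criterion
net = train(net, P, T);              % input ranges are inferred from the data
a = net(P);                          % network output, equivalent to sim(net, P)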
