A detailed look at newff, MATLAB's function for creating a BP neural network

My algorithm uses this function; it takes quite a few arguments, and four of them are supplied in the call below.
----------------------------------- code begins ---------------------------------
%This program uses a BP neural network to recognize the sixteen hexadecimal digits
clear all
nntwarn on;
disp('------------------------i use a BP neural network to identify hexadecimal digits');
disp('------------------------first step: digitalize the numbers and initialize the BP network');
disp('------------------------press any key to digitalize the numbers');
pause
%Digitalize each digit, similar to a 7-segment LED display, but here every segment uses 5 dots
%within every segment, 1 stands for a lit dot and 0 stands for empty
number0 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number1 = [0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0];
number2 = [0 1 1 1 0,1 0 0 0 1,0 0 0 0 1,0 0 0 1 0,0 0 1 0 0,0 1 0 0 0,1 1 1 1 1];
number3 = [0 1 1 1 0,1 0 0 0 1,0 0 0 0 1,0 0 1 1 0,0 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number4 = [1 0 0 1 0,1 0 0 1 0,1 0 0 1 0,1 0 0 1 0,1 1 1 1 1,0 0 0 1 0,0 0 0 1 0];
number5 = [1 1 1 1 1,1 0 0 0 0,1 0 0 0 0,1 1 1 1 0,0 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number6 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 0,1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number7 = [1 1 1 1 1,0 0 0 0 1,0 0 0 1 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0];
number8 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number9 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 1,0 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number10 = [0 0 1 0 0,0 1 0 1 0,0 1 0 1 0,1 0 0 0 1,1 1 1 1 1,1 0 0 0 1,1 0 0 0 1];
number11 = [1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 1 1 1 0];
number12 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 0,1 0 0 0 0,1 0 0 0 0,1 0 0 0 1,0 1 1 1 0];
number13 = [1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 1 1 1 0];
number14 = [1 1 1 1 1,1 0 0 0 0,1 0 0 0 0,1 1 1 1 0,1 0 0 0 0,1 0 0 0 0,1 1 1 1 1];
number15 = [1 1 1 1 1,1 0 0 0 0,1 0 0 0 0,1 1 1 1 0,1 0 0 0 0,1 0 0 0 0,1 0 0 0 0];
number = [number0;number1;number2;number3;number4;number5;number6;number7;...
    number8;number9;number10;number11;number12;number13;number14;number15]'; %transposed so that each column is one 35-element digit
disp('------------------------digitalization finished! (each digit is a 5x7 dot matrix)');
%Create and initialize the BP network
targets = eye(16);%targets is the 16x16 identity matrix: column i is the desired output for digit i-1
P=number;
Q=P;
T=targets;
S1=10;
[R,Q]=size(number); %R=35,Q=16
[S2,Q]=size(targets); % S2=16,Q=16
%create the two-layer BP network
net=newff(minmax(P),[S1,S2],{'logsig','logsig'},'traingdx');
net.LW{2,1}=net.LW{2,1}*0.01;
net.b{2}=net.b{2}*0.01;
disp('------------------------established the BP network, ready to train it');
disp('------------------------the network will be trained both without and with noise');
disp('------------------------press any key to begin training the network');
pause
%Train without noise
P=number;
T=targets;
net.performFcn='sse';
net.trainParam.goal=0.01;
net.trainParam.show=10;
net.trainParam.epochs=5000;
net.trainParam.mc=0.95;
[net,tr]=train(net,P,T);
%Train a copy of the network with noisy inputs
netn=net;
netn.trainParam.goal=0.06;
netn.trainParam.epochs=600;
T=[targets targets targets targets];
%repeat the noisy training 10 times
for pass=1:10
    %assumed reconstruction of the lost loop body: two clean and two noisy copies of every digit
    P=[number, number, number+randn(R,Q)*0.1, number+randn(R,Q)*0.2];
    [netn,tr]=train(netn,P,T);
end
%Train the noise-trained copy once more without noise
%(assumed: as in the standard character-recognition demo, so that netn also stays accurate on clean digits)
P=number;
T=targets;
netn.performFcn='sse';
netn.trainParam.goal=0.01;
netn.trainParam.show=10;
netn.trainParam.epochs=500;
netn.trainParam.mc=0.95;
[netn,tr]=train(netn,P,T);
disp(' ------------------------network training finished, next we will test its fault tolerance');
disp(' ------------------------test result will come out soon, wait..................');
%Test the fault tolerance of both networks under increasing noise
noise_range=0:0.05:0.5;
max_test=100;
T=targets;
errors1=zeros(size(noise_range)); errors2=zeros(size(noise_range));
for i=1:11   %assumed reconstruction of the lost loop body
    for j=1:max_test   %average over max_test random noisy versions of every digit
        P=number+randn(R,Q)*noise_range(i);
        errors1(i)=errors1(i)+sum(sum(abs(compet(sim(net,P))-T)))/2/Q/max_test;
        errors2(i)=errors2(i)+sum(sum(abs(compet(sim(netn,P))-T)))/2/Q/max_test;
    end
end
figure
plot(noise_range,errors1*100,'r--',noise_range,errors2*100);
title('network recognition error for the two training schemes');
xlabel('noise level');
ylabel('recognition error (%): no-noise-trained net (r--) vs noise-trained net (solid)');
disp(' ------------------------FAULT TOLERANCE TEST FINISHED');
disp(' ------------------------PRESS ANY KEY TO BEGIN REAL TEST');
pause
%Recognize an actual noisy digit
for index=8   %assumed reconstruction of the lost loop body: add noise to digit 'index' and classify it
    noisy=number(:,index+1)+randn(R,1)*0.2;
    recognized=find(compet(sim(netn,noisy))==1)-1  %recognized hexadecimal value, 0..15
end;
disp(' ------------------------PROGRAM EXIT');
----------------------------------- code ends ---------------------------------
The arguments of newff are explained one by one below:
The first argument PR is an "R x 2 matrix of min and max values for R input elements": each of its rows holds the minimum and maximum of the corresponding row of the input. Here it is supplied as minmax(P); the minmax function extracts the minimum and maximum of every row of P and returns them as a matrix with the same number of rows as P and exactly two columns, [min max].
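A small sketch of what minmax returns (the 3x4 matrix here is made up purely for illustration):

    P = [0 1 1 0; 2 5 3 4; -1 0 0 1];   %hypothetical 3x4 input matrix
    PR = minmax(P)
    %PR =
    %     0     1
    %     2     5
    %    -1     1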
The second argument [S1 S2 ... SNl] gives the size of each layer; here it is [S1,S2], meaning the first layer has S1 = 10 neurons and the second has S2 = 16. Why these values? S1 = 10 is simply the chosen number of hidden-layer neurons, while S2 is dictated by the target vectors: the targets form the 16x16 identity matrix produced by eye(16), so the output layer needs 16 neurons. See the short sketch below.
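In the script above the two sizes are tied to the data like this (same variable names as in the code):

    S1 = 10;                 %hidden-layer size, chosen freely by the author
    [S2,Q] = size(targets);  %output-layer size must match the number of target rows, so S2 = 16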
The third argument {TF1 TF2 ... TFNl} specifies the transfer function of each layer; here {'logsig','logsig'} is used for both layers.
[Figure from the original post: curve of the logsig (log-sigmoid) transfer function, which calculates a layer's output from its net input.]
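For reference, the logsig curve is easy to reproduce: logsig(n) equals 1/(1+exp(-n)) and squashes any input into the interval (0,1):

    n = -10:0.1:10;
    a = logsig(n);      %same as 1./(1+exp(-n))
    plot(n,a), grid on
    title('logsig transfer function')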
The fourth argument BTF is the backpropagation training function; 'traingdx' is used here, i.e. gradient descent with momentum and adaptive learning rate backpropagation.
traingdx is a network training function that updates weight and bias values according to gradient descent with momentum and an adaptive learning rate.
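These are the main trainParam fields that traingdx exposes; the values shown are, as far as I recall, the toolbox defaults, and the script above overrides some of them:

    net.trainParam.lr     = 0.01;   %initial learning rate
    net.trainParam.lr_inc = 1.05;   %factor by which lr grows while the error keeps decreasing
    net.trainParam.lr_dec = 0.7;    %factor by which lr shrinks when the error grows too much
    net.trainParam.mc     = 0.9;    %momentum constant (the script raises it to 0.95)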
The fifth argument BLF is the backpropagation weight/bias learning function (default = 'learngdm'). The code contains no separate adaption/learning step, so this argument is left at its default and can be ignored.
The sixth argument PF is the performance function (default = 'mse'). It is also left at its default in the newff call, so it can be ignored here.
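Note, though, that the script changes the performance function after the network has been created:

    net.performFcn = 'sse';   %sum-squared error instead of the default mean-squared error 'mse'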
So a BP network can be created with just the first four arguments and then trained directly; no separate self-learning (adaption) step is involved.
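Putting it all together, a minimal self-contained sketch of the same call pattern (old newff syntax of the Neural Network Toolbox; the random matrix P is only a hypothetical stand-in for the number and targets data built in the script):

    P = rand(35,16);                                           %hypothetical 35-element input patterns, one per column
    T = eye(16);                                               %one-of-16 target columns
    net = newff(minmax(P),[10 16],{'logsig','logsig'},'traingdx');
    net.performFcn = 'sse';
    net.trainParam.goal = 0.01;
    net = train(net,P,T);
    Y = sim(net,P);                                            %simulate the trained network
    [dummy,predicted] = max(Y);                                 %winning output neuron per column; predicted-1 is the recognized value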