
A detailed look at newff, MATLAB's function for creating a new BP network

(2013-05-07 15:57:39)
Tags: matlab, BP neural network, newff, IT

Category: matlab

This function is used in the algorithm; it takes quite a few arguments, four of which are used here.

----------------------------------- Code begins ---------------------------------

 
%This program uses a BP neural network to recognize the sixteen hexadecimal digits
clear all
nntwarn on;
disp('------------------------use a BP neural network to identify hex digits');
disp('------------------------first step: digitize the digits and initialize the BP network');
disp('------------------------press any key to digitize the digits');
pause

%Digitize each digit as a dot matrix, similar to a dot-matrix LED display: 7 rows of 5 dots each
%within each row, 1 stands for a lit dot and 0 stands for an empty position
number0 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number1 = [0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0];
number2 = [0 1 1 1 0,1 0 0 0 1,0 0 0 0 1,0 0 0 1 0,0 0 1 0 0,0 1 0 0 0,1 1 1 1 1];
number3 = [0 1 1 1 0,1 0 0 0 1,0 0 0 0 1,0 0 1 1 0,0 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number4 = [1 0 0 1 0,1 0 0 1 0,1 0 0 1 0,1 0 0 1 0,1 1 1 1 1,0 0 0 1 0,0 0 0 1 0];
number5 = [1 1 1 1 1,1 0 0 0 0,1 0 0 0 0,1 1 1 1 0,0 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number6 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 0,1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number7 = [1 1 1 1 1,0 0 0 0 1,0 0 0 1 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0,0 0 1 0 0];
number8 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number9 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 1,0 1 1 1 1,0 0 0 0 1,1 0 0 0 1,0 1 1 1 0];
number10 = [0 0 1 0 0,0 1 0 1 0,0 1 0 1 0,1 0 0 0 1,1 1 1 1 1,1 0 0 0 1,1 0 0 0 1];
number11 = [1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 1 1 1 0];
number12 = [0 1 1 1 0,1 0 0 0 1,1 0 0 0 0,1 0 0 0 0,1 0 0 0 0,1 0 0 0 1,0 1 1 1 0];
number13 = [1 1 1 1 0,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 0 0 0 1,1 1 1 1 0];
number14 = [1 1 1 1 1,1 0 0 0 0,1 0 0 0 0,1 1 1 1 0,1 0 0 0 0,1 0 0 0 0,1 1 1 1 1];
number15 = [1 1 1 1 1,1 0 0 0 0,1 0 0 0 0,1 1 1 1 0,1 0 0 0 0,1 0 0 0 0,1 0 0 0 0];
number = [number0;number1;number2;number3;number4;number5;number6;number7;...
          number8;number9;number10;number11;number12;number13;number14;number15]';
disp('------------------------digitization finished! (each digit is stored as a 5x7 matrix)');

%Create and initialize the BP network
targets = eye(16);  % targets is the 16x16 identity matrix (one column per digit)
P=number;   %input matrix: the digitized digits 0-F
Q=P;
T=targets;  %target vectors
S1=10;      %number of neurons in the hidden layer
[R,Q]=size(number); %R=35,Q=16
[S2,Q]=size(targets); % S2=16,Q=16

%create the BP network
net=newff(minmax(P),[S1,S2],{'logsig','logsig'},'traingdx');
net.LW{2,1}=net.LW{2,1}*0.01;
net.b{2}=net.b{2}*0.01;
disp(' ------------------------BP network established, ready to train');
disp(' ------------------------the network will be trained both without noise and with noise');
disp(' ------------------------press any key to begin training the network');
pause

%Train without noise
P=number;
T=targets;
net.performFcn='sse';
net.trainParam.goal=0.01;
net.trainParam.show=10;
net.trainParam.epochs=5000;
net.trainParam.mc=0.95;
[net,tr]=train(net,P,T);

%Train with noise
netn=net;
netn.trainParam.goal=0.06;
netn.trainParam.epochs=600;
T=[targets targets targets targets];
%repeat the noisy training 10 times
for pass=1:10
      P=[number,number,...
      (number+randn(R,Q)*0.1),...
      (number+randn(R,Q)*0.2)];
      [netn,tr]=train(netn,P,T);
end

%Train again without noise
P=number;
T=targets;
net.performFcn='sse';
net.trainParam.goal=0.01;
net.trainParam.show=10;
net.trainParam.epochs=500;
net.trainParam.mc=0.95;
[net,tr]=train(net,P,T);

disp(' ------------------------network training finished, next we will test its fault tolerance');
disp(' ------------------------test result will come out soon, wait..................');

%Test the fault tolerance of the two networks
noise_range=0:0.05:0.5;
max_test=100;
T=targets;
for i=1:11
      noise_level(i)=noise_range(i);
      errors1(i)=0;
      errors2(i)=0;
      for j=1:max_test
          P=number+randn(35,16)*noise_level(i);
          %test the network trained without noise
          A=sim(net,P);
          AA=compet(A);
          errors1(i)=errors1(i)+sum(sum(abs(AA-T)))/2;
          %test the network trained with noise
          An=sim(netn,P);
          AAn=compet(An);
          errors2(i)=errors2(i)+sum(sum(abs(AAn-T)))/2;
      end;
end;

figure
plot(noise_range,errors1*100,'r--',noise_range,errors2*100);
title('Recognition errors of networks trained with and without noise');
xlabel('Noise level');
ylabel('Network trained without noise -    Network trained with noise ---');

disp(' ------------------------FAULT TOLERANCE TEST FINISHED ');
disp(' ------------------------PRESS ANY KEY TO BEGIN REAL TEST');
pause

%Recognize an actual noisy digit
for index=8                                     %column 8 is the digit 7
      noisyJ=number(:,index)+randn(35,1)*0.2;   %add noise to the digit
      figure;
      plotchar(noisyJ);                         %show the noisy input
      A2=sim(net,noisyJ);
      A2=compet(A2);                            %winner-take-all output
      answer=find(A2==1);                       %index of the recognized digit
      figure;
      plotchar(number(:,answer));               %show the recognized digit
end;
disp(' ------------------------PROGRAM EXIT');

 

----------------------------------- Code ends ---------------------------------

 

 

The newff arguments are explained one by one below:

The first argument, PR, is an "R x 2 matrix of min and max values for R input elements": each row holds the minimum and maximum of the corresponding row of the input. Here it is supplied as minmax(P), where the minmax function extracts the minimum and maximum of every row of P, arranged from smallest to largest, and returns a matrix with the same number of rows as P but only 2 columns. A minimal sketch of its output is shown below.
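As an illustration only, X below is a made-up 2x4 matrix, not data from the program:

X = [3 1 4 1; 5 9 2 6];
PR = minmax(X)
% PR =
%      1     4
%      2     9
% each row of PR is [min max] of the corresponding row of X,
% so PR has as many rows as X and exactly 2 columns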

The second argument, [S1 S2 ... SNl], gives the size of each layer. Here it is [S1,S2]: layer 1 has size 10 and layer 2 has size 16. Why? S1 is assigned directly, giving the hidden layer 10 neurons; S2 is the size of the target vectors, which is 16 because targets is the 16x16 identity matrix produced by eye(16), so the output layer needs one neuron per digit. A hypothetical three-layer variant of the call is sketched below.
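The layer-size vector extends naturally to more layers. The following call is hypothetical; the sizes 20, 10, 16 and the tansig transfer functions are chosen only to illustrate the syntax and are not part of the original program:

% hypothetical: two hidden layers (20 and 10 neurons) plus the 16-neuron output layer
net3 = newff(minmax(P), [20 10 16], {'tansig','tansig','logsig'}, 'traingdx');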

The third argument, {TF1 TF2 ... TFNl}, is the transfer function of each layer; here {'logsig','logsig'} is used. The shape of the logsig transfer function is shown below:

[Figure: the logsig (log-sigmoid) transfer function, which calculates a layer's output from its net input.]
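For reference, logsig(n) = 1/(1 + exp(-n)). A small sketch that reproduces the curve, assuming the Neural Network Toolbox function logsig is available (as it is for the program above):

n = -5:0.1:5;
a = logsig(n);        % same result as 1./(1+exp(-n))
plot(n, a);
title('logsig transfer function');
xlabel('net input n'); ylabel('output a');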

The fourth argument, BTF, is the backpropagation training function of the network; here 'traingdx' is used, which stands for gradient descent with momentum and adaptive learning rate backpropagation. traingdx is a network training function that updates weight and bias values according to gradient descent momentum and an adaptive learning rate.
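Because traingdx combines momentum with an adaptive learning rate, its behaviour is tuned through net.trainParam. The sketch below lists the commonly used fields; the values are only illustrative (they mirror toolbox defaults or the settings used in the code above):

net.trainParam.lr     = 0.01;   % initial learning rate
net.trainParam.lr_inc = 1.05;   % factor for increasing the learning rate
net.trainParam.lr_dec = 0.7;    % factor for decreasing the learning rate
net.trainParam.mc     = 0.95;   % momentum constant (the code above uses 0.95)
net.trainParam.goal   = 0.01;   % performance goal
net.trainParam.epochs = 5000;   % maximum number of training epochs
net.trainParam.show   = 10;     % epochs between progress displays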

The fifth argument, BLF, is the backpropagation weight/bias learning function (default = 'learngdm'). The code has no separate weight-learning step, so this argument is left at its default and can be ignored.

The sixth argument, PF, is the performance function (default = 'mse'). It is also ignored at creation time; the code switches it to 'sse' afterwards through the net.performFcn property.
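Overriding the performance function after creation is enough, exactly as the program above does:

net.performFcn = 'sse';   % sum squared error instead of the default 'mse'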

So four arguments are enough to build a BP network; after that, the network only needs to be trained, and there is no separate adaptive learning step.
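Putting the four arguments together, a minimal create-train-test sketch under the same assumptions as the program above (P is the 35x16 digit matrix and targets is the 16x16 identity matrix) would look like this:

net = newff(minmax(P), [10 16], {'logsig','logsig'}, 'traingdx');
net.trainParam.goal   = 0.01;
net.trainParam.epochs = 5000;
net = train(net, P, targets);     % supervised training
A   = compet(sim(net, P));        % winner-take-all: the single 1 in each column marks the recognized digit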
