
The trainlm function

(2011-06-17 16:06:52)
Tags: Miscellaneous

Category: fangzhen

>> help trainlm
 TRAINLM Levenberg-Marquardt backpropagation.
 
   Syntax
  
     [net,tr] = trainlm(net,tr,trainV,valV,testV)
     info = trainlm('info')
 
   Description
 
     TRAINLM is a network training function that updates weight and
     bias states according to Levenberg-Marquardt optimization.
 
     TRAINLM is often the fastest backpropagation algorithm in the toolbox
     and is highly recommended as a first-choice supervised algorithm,
     although it does require more memory than the other algorithms.
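
Before the formal argument list, here is a minimal usage sketch. TRAINLM is normally invoked indirectly, by leaving it as the network's training function and calling TRAIN; the newff(P,T,hiddenSizes) signature below assumes the toolbox generation this help page comes from, and p, t, net are made-up example names.

    % Minimal sketch: train a small feed-forward net with trainlm via train.
    p = 0:0.1:2*pi;                  % example inputs (1xQ)
    t = sin(p);                      % example targets (1xQ)
    net = newff(p, t, 10);           % one hidden layer of 10 neurons;
                                     % trainlm is the default trainFcn
    net.trainParam.epochs = 100;     % cap training at 100 epochs
    [net, tr] = train(net, p, t);    % train dispatches to trainlm
    y = sim(net, p);                 % simulate the trained network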
 
     TRAINLM(NET,TR,TRAINV,VALV,TESTV) takes these inputs,
       NET - Neural network.
       TR  - Initial training record created by TRAIN.
       TRAINV - Training data created by TRAIN.
       VALV - Validation data created by TRAIN.
       TESTV - Test data created by TRAIN.
     and returns,
       NET - Trained network.
       TR  - Training record of various values over each epoch.
 
     Each argument TRAINV, VALV and TESTV is a structure of these fields:
       X  - NxTS cell array of inputs for N inputs and TS timesteps.
            X{i,ts} is an RixQ matrix for the ith input and ts timestep.
       Xi - NxNid cell array of input delay states for N inputs and Nid delays.
            Xi{i,j} is an RixQ matrix for the ith input and jth state.
       Pd - NxSxNid cell array of delayed input states.
       T  - NoxTS cell array of targets for No outputs and TS timesteps.
            T{i,ts} is an SixQ matrix for the ith output and ts timestep.
       Tl - NlxTS cell array of targets for Nl layers and TS timesteps.
            Tl{i,ts} is an SixQ matrix for the ith layer and ts timestep.
       Ai - NlxTS cell array of layer delay states for Nl layers and TS timesteps.
            Ai{i,j} is an SixQ matrix of delayed outputs for layer i, delay j.
 
     Training occurs according to training parameters, with default values:
       net.trainParam.show             25  Epochs between displays
       net.trainParam.showCommandLine   0  Generate command-line output
       net.trainParam.showWindow        1  Show training GUI
       net.trainParam.epochs          100  Maximum number of epochs to train
       net.trainParam.goal              0  Performance goal
       net.trainParam.max_fail          5  Maximum validation failures
       net.trainParam.mem_reduc         1  Factor for memory/speed trade-off
       net.trainParam.min_grad      1e-10  Minimum performance gradient
       net.trainParam.mu            0.001  Initial Mu
       net.trainParam.mu_dec          0.1  Mu decrease factor
       net.trainParam.mu_inc           10  Mu increase factor
       net.trainParam.mu_max         1e10  Maximum Mu
       net.trainParam.time            inf  Maximum time to train in seconds
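
Continuing the sketch above, these parameters are typically overridden on the network object before calling TRAIN. The field names are exactly those listed; the values here are arbitrary:

    net.trainParam.show   = 10;      % report progress every 10 epochs
    net.trainParam.epochs = 300;     % allow up to 300 epochs
    net.trainParam.goal   = 1e-5;    % stop once performance reaches 1e-5
    net.trainParam.mu     = 0.01;    % larger initial mu
    [net, tr] = train(net, p, t);    % train with the adjusted parameters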
 
 
     TRAINLM is the default training function for several network creation
     functions, including NEWFF, NEWCF, NEWFFTD, NEWDTDNN and NEWNARX.
 
     TRAINLM('info') returns useful information about this function.
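
For example (the exact contents of the returned structure vary by toolbox version):

    info = trainlm('info');          % query the function's metadata
    disp(info)                       % inspect the available fields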
 
   Algorithm
 
     TRAINLM supports training with validation and test vectors if the
     network's NET.divideFcn property is set to a data division function.
     Validation vectors are used to stop training early if the network
     performance on the validation vectors fails to improve or remains
     the same for MAX_FAIL epochs in a row.  Test vectors are used as
     a further check that the network is generalizing well, but do not
     have any effect on training.
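
A sketch of enabling early stopping through data division, continuing the example above; dividerand and the divideParam ratios are the standard toolbox names for this generation, but check your version:

    net.divideFcn = 'dividerand';         % split samples at random
    net.divideParam.trainRatio = 0.70;    % 70% of samples for training
    net.divideParam.valRatio   = 0.15;    % 15% for validation (early stopping)
    net.divideParam.testRatio  = 0.15;    % 15% held out as a generalization check
    net.trainParam.max_fail    = 5;       % stop after 5 straight validation failures
    [net, tr] = train(net, p, t);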
 
     TRAINLM can train any network as long as its weight, net input,
     and transfer functions have derivative functions.
 
     Backpropagation is used to calculate the Jacobian jX of performance
     PERF with respect to the weight and bias variables X.  Each
     variable is adjusted according to Levenberg-Marquardt,
 
       jj = jX' * jX
       je = jX' * E
       dX = -(jj + I*mu) \ je
 
     where E is all errors and I is the identity matrix.
 
     The adaptive value MU is multiplied by MU_INC until the change above
     results in a reduced performance value.  The change is then applied to
     the network and MU is multiplied by MU_DEC.
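
To make the update concrete, here is one hand-rolled Levenberg-Marquardt step on a toy linear least-squares problem. This illustrates the formula above, not the toolbox's internal code; all names are made up:

    A  = [1 2; 3 4; 5 6];            % toy model: predictions = A*x
    t  = [1; 2; 3];                  % targets
    x  = [0.5; -0.3];                % current parameter vector
    mu = 0.001;                      % damping term (trainParam.mu)
    E  = t - A*x;                    % error vector
    jX = -A;                         % Jacobian of E with respect to x
    jj = jX' * jX;                   % Gauss-Newton approximation of the Hessian
    je = jX' * E;                    % gradient of the performance measure
    dX = -(jj + eye(2)*mu) \ je;     % damped step; large mu ~ gradient descent
    xNew = x + dX;                   % candidate step; LM keeps it only if
                                     % performance improves, else mu is increased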
 
     The parameter MEM_REDUC controls the memory/speed trade-off used to
     calculate the Jacobian jX.  If MEM_REDUC is 1, TRAINLM runs fastest
     but can require a lot of memory.  Increasing MEM_REDUC to 2 cuts the
     memory required roughly in half but slows TRAINLM somewhat.  Higher
     values continue to decrease the amount of memory needed and to
     increase training time.
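
For example, on a memory-constrained machine one might trade speed for memory:

    net.trainParam.mem_reduc = 2;    % compute the Jacobian in two pieces,
                                     % roughly halving peak memory use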
 
     Training stops when any of these conditions occurs:
     1) The maximum number of EPOCHS (repetitions) is reached.
     2) The maximum amount of TIME has been exceeded.
     3) Performance has been minimized to the GOAL.
     4) The performance gradient falls below MIN_GRAD.
     5) MU exceeds MU_MAX.
     6) Validation performance has increased more than MAX_FAIL times
        since the last time it decreased (when using validation).
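
The training record TR returned by TRAIN makes it easy to see how a run went; tr.epoch and tr.perf are standard fields in this toolbox generation:

    [net, tr] = train(net, p, t);
    semilogy(tr.epoch, tr.perf)      % training performance per epoch
    xlabel('Epoch'), ylabel('Performance (MSE)')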
 
   See also template_train, newff, newcf, newfftd, newdtdnn, newnarx.

    Reference page in Help browser
       doc trainlm
