3.1.4. Parameters of RBF Neural Network

In the classical RBF neural network, there are three adjustable parameters: the centers and widths of the hidden layer's basis functions and the connection weights between the hidden layer and the output layer. Construction of the classical RBF neural network generally follows the rules below.

(1) Basis Function Centers Selected by Experience. If the distribution of the training samples is representative of the problem, the s centers can be selected according to experience with spacing d; the width of the selected Gaussian function is then σ = d/√(2s). (6)

(2) Basis Function Selection by K-Means Clustering. We use the K-means clustering method to select the basis functions; the center of each cluster is taken as a basis function center. Since the output is a linear unit, its weights can be calculated directly by the LMS method. We use formula (7) to measure the training error, so that we can obtain the optimal neural network: e = ∑_{k=1}^{n} (t_k − y_k)². (7) Here, e is the error function, t_k is the actual value, and y_k is the output of the neural network.
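The construction rules above can be made concrete with a minimal sketch. This is an illustration, not the paper's implementation: the names build_rbf, X, and t are invented here, a single shared width from formula (6) is used for all s Gaussian units, scikit-learn's KMeans stands in for the K-means step, and the linear output weights are obtained by an ordinary least-squares solve rather than an iterative LMS rule.

```python
# Minimal sketch of the classical RBF construction described above.
# Assumes X is a 2-D array (n_samples, n_features) and t a 1-D target vector.
import numpy as np
from sklearn.cluster import KMeans

def build_rbf(X, t, s):
    """Build an RBF network with s hidden neurons.

    Centers come from K-means (rule (2)), the shared width from
    formula (6), sigma = d / sqrt(2 s), where d is the largest
    distance between centers, and the linear output weights from a
    least-squares fit (standing in for the LMS method).
    """
    centers = KMeans(n_clusters=s, n_init=10).fit(X).cluster_centers_
    d = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = d / np.sqrt(2 * s)                       # formula (6)

    # Hidden-layer activations: Gaussian basis functions.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-dists**2 / (2 * sigma**2))

    # Linear output layer: solve for the weights in the least-squares sense.
    w, *_ = np.linalg.lstsq(H, t, rcond=None)

    y = H @ w
    e = np.sum((t - y) ** 2)                         # formula (7)
    return centers, sigma, w, e
```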

3.2. The Basic Steps of the GA-RBF Algorithm

The basic steps of the GA-RBF neural network algorithm are described as follows; a sketch of the whole loop is given after the steps.

Step 1. — Set up the RBF neural network according to the maximum number of neurons in the hidden layer; use the K-means clustering algorithm to obtain the centers of the basis functions; use formula (6) to calculate the widths of the centers.

Step 2. — Set the parameters of the GA: the population size, the crossover rate, the mutation rate, the selection mechanism, the crossover and mutation operators, the objective function error, and the maximum number of iterations.

Step 3. — Initialize the population P randomly; its size is N (the number of RBF neural networks is N); the network corresponding to each individual is encoded by formula (4).

Step 4. — Use the training samples to train the N initially constructed RBF neural networks; use formula (7) to calculate each network's output error E.

Step 5. — According to the training error E and the number of hidden-layer neurons s, use formula (5) to calculate the fitness of the chromosome corresponding to each network.

Step 6. — Sort the chromosomes according to their fitness values; record the best fitness of the population, denoted by Fb; check whether E < Emin or G ≥ Gmax; if so, go to Step 9; otherwise go to Step 7.

Step 7. — Select several of the best individuals to be passed directly to the next generation NewP.

Step 8. — Select a pair of chromosomes for single-point crossover to generate two new individuals as members of the next generation; repeat this procedure until the new generation reaches the maximum population size Ps; the coding is then done separately.

Step 9. —
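The outer GA loop of the steps above can be sketched as follows, reusing build_rbf from the previous sketch. This is a hedged illustration under assumptions the excerpt does not spell out: the encoding of formula (4) and the fitness of formula (5) are not reproduced here, so the chromosome is reduced to the hidden-layer size s and fitness is taken as 1/(E + λs), a stand-in that penalizes both training error and network size; averaging two parents replaces single-point crossover because this reduced integer encoding has no bit string to cut; and the final step is assumed to return the best network found.

```python
# Hedged sketch of the GA-RBF loop (Steps 1-9); build_rbf is from the
# sketch above. Chromosome = hidden-layer size s only (an assumption).
import random
import numpy as np

def ga_rbf(X, t, s_max=20, pop_size=20, G_max=50, E_min=1e-3,
           elite=2, p_mut=0.1, lam=0.01):
    # Steps 1-3: initial population of candidate hidden-layer sizes.
    pop = [random.randint(2, s_max) for _ in range(pop_size)]

    best = None
    for G in range(G_max):
        # Step 4: train each candidate RBF network and get its error E.
        nets = [build_rbf(X, t, s) for s in pop]
        errors = [net[3] for net in nets]

        # Step 5: fitness penalizes both error and size (stand-in for (5)).
        fitness = [1.0 / (E + lam * s) for E, s in zip(errors, pop)]

        # Step 6: sort by fitness, keep the best, check termination.
        order = sorted(range(pop_size), key=lambda i: fitness[i], reverse=True)
        if best is None or errors[order[0]] < best[1]:
            best = (pop[order[0]], errors[order[0]], nets[order[0]])
        if best[1] < E_min:
            break

        # Step 7: elitism -- copy the best individuals unchanged.
        new_pop = [pop[i] for i in order[:elite]]

        # Step 8: fill the rest by crossover (parent averaging here, since
        # the integer encoding has no cut point) plus occasional mutation.
        while len(new_pop) < pop_size:
            a, b = random.choices(order[:pop_size // 2], k=2)
            child = (pop[a] + pop[b]) // 2
            if random.random() < p_mut:
                child = random.randint(2, s_max)
            new_pop.append(max(2, min(s_max, child)))
        pop = new_pop

    # Step 9 (assumed): return the best network found.
    return best

# Toy usage on a 1-D regression problem (illustrative data only).
X = np.linspace(0, 1, 100).reshape(-1, 1)
t = np.sin(2 * np.pi * X).ravel() + 0.05 * np.random.randn(100)
s_best, E_best, _ = ga_rbf(X, t)
print("best hidden size:", s_best, "training error:", E_best)
```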
