## Neural Network Learning and Expert Systems

"Most neural network programs for personal computers simply control a set of fixed, canned network-layer algorithms with pulldown menus. This new tutorial offers hands-on neural network experiments with a different approach. A simple matrix language lets users create their own neural networks and combine networks, and this is the only currently available software permitting combined simulation of neural networks together with other dynamic systems such as robots or physiological models. The enclosed student version of DESIRE/NEUNET differs from the full system only in the size of its data area and includes a screen editor, compiler, color graphics, help screens, and ready-to-run examples. Users can also add their own help screens and interactive menus. The book provides an introduction to neural networks and simulation, a tutorial on the software, and many complete programs, including several backpropagation schemes, creeping random search, competitive learning with and without adaptive-resonance function and "conscience," counterpropagation, nonlinear Grossberg-type neurons, Hopfield-type and bidirectional associative memories, predictors, function learning, biological clocks, system identification, and more. In addition, the book introduces a simple, integrated environment for programming, displays, and report preparation. Even differential equations are entered in ordinary mathematical notation. Users need not learn C or LISP to program nonlinear neuron models. To permit truly interactive experiments, the extra-fast compilation is unnoticeable, and simulations execute faster than PC FORTRAN. The nearly 90 illustrations include block diagrams, computer programs, and simulation-output graphs."


### Popular passages

Page 24 - When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

Page 256 - Expert systems are a class of computer programs that can advise, analyze, categorize, communicate, consult, design, diagnose, explain, explore, forecast, form concepts, identify, interpret, justify, learn, manage, monitor, plan, present, retrieve, schedule, test, and tutor. They address problems normally thought to require human specialists for their solution.

Page 77 - The bad news is that it is hard to know just how long the descriptive material will be correct. Despite the fact that there were major changes in the tax system in 1986, 1990, and 1993, important modifications are under consideration, and it is likely that a number of "reforms

Page 356 - Poggio, T. and Girosi, F., "Networks for approximation and learning," Proceedings of the IEEE, vol. 78, 1990, pp. 1481-1497.

Page 273 - An alternative way to think of dependency networks is from the point of view of connections not present. If u_i is not connected to u_j, then it means u_j can always be computed without direct consideration of u_i, even though u_i might affect other variables that we do need to consider for...

Page 3 - The new activation is then passed along those connections leading to other units. Each connection has a signed number called a weight that determines whether an activation that travels along it influences the receiving cell to produce a similar or a different activation according to the sign (+ or — ) of the weight. The size of the weight determines the magnitude of the influence of a sending cell's activation upon the receiving cell; thus a large positive or negative weight gives the sender's...
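The single-cell model quoted above can be illustrated with a short sketch. The function name and the hard-threshold output rule below are our assumptions for illustration; the passage itself only describes signed weights scaling a sender's activation into a weighted sum:

```python
def cell_output(activations, weights, bias=0.0):
    """Hypothetical sketch of the single-cell model described above:
    each incoming activation is scaled by a signed weight, and the
    weighted sum determines the receiving cell's activation."""
    total = bias + sum(w * a for w, a in zip(weights, activations))
    # A positive weight pushes the receiver toward the sender's sign;
    # a negative weight pushes it toward the opposite sign. The weight's
    # magnitude sets how strongly the sender influences the receiver.
    return 1 if total > 0 else -1
```

For example, a sender with activation +1 through a large positive weight dominates the sum and drives the receiver to +1, matching the passage's description of the weight's sign and size.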

Page 80 - E_k, i.e., {W · E_k > 0 and C_k = +1} or {W · E_k < 0 and C_k = -1}. Then: 2a. If the current run of correct classifications with W is longer than the run of correct classifications for the weight vector W^pocket in the pocket: 2aa.

Page 12 - One of the exercises at the end of the chapter is to extend this program to accept and display answers indicating whether they are correct.

Page 76 - Then: 2a. If the current run of correct classifications with W is longer than the run of correct classifications for the weight vector W^pocket in the pocket: 2aa. Replace the pocket weights W^pocket by W, and remember the length of its correct run.
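The pocket-algorithm steps excerpted above (keep the weight vector with the longest run of consecutive correct classifications) can be sketched as follows. This is a minimal reading of the quoted steps, not the book's own code; the update rule and parameter names are our assumptions:

```python
import random

def pocket_algorithm(examples, labels, n_iters=1000, seed=0):
    """Sketch of the pocket algorithm: run perceptron learning on
    randomly chosen training examples, and keep in the 'pocket' the
    weight vector W^pocket with the longest run of consecutive
    correct classifications seen so far."""
    rng = random.Random(seed)
    w = [0.0] * len(examples[0])  # current perceptron weights W
    pocket = list(w)              # pocket weights W^pocket
    run, pocket_run = 0, 0        # current and best correct-run lengths
    for _ in range(n_iters):
        k = rng.randrange(len(examples))
        e, c = examples[k], labels[k]  # E_k with C_k in {+1, -1}
        s = sum(wi * ei for wi, ei in zip(w, e))  # W . E_k
        if (s > 0 and c == +1) or (s < 0 and c == -1):
            run += 1                       # W classified E_k correctly
            if run > pocket_run:           # step 2a: run beats pocket's run
                pocket, pocket_run = list(w), run  # step 2aa: replace pocket
        else:
            # Perceptron update on a misclassified example; reset the run.
            w = [wi + c * ei for wi, ei in zip(w, e)]
            run = 0
    return pocket
```

Random selection of E_k is what makes the duplication trick discussed on page 299 work: duplicating a training example raises its probability of being drawn.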

Page 299 - The reason for duplicating examples is that the pocket algorithm that we use to generate the knowledge base seeks to minimize the probability that a randomly selected training example will be misclassified. By duplicating training examples, we effectively adjust the frequency of selection of each particular training example so that the learning program will solve the problem at hand. (Of course the examples do not have to be physically...