Neural Information Processing: Research and Development
Jagath Chandana Rajapakse, Lipo Wang

The field of neural information processing has two main objectives: investigating the functioning of biological neural networks and using artificial neural networks to solve real-world problems. Even before the resurgence of artificial neural networks in the mid-1980s, researchers had attempted to engineer aspects of human brain function. Since that resurgence, a large number of neural network models have emerged, along with successful applications to real-world problems. This volume presents a collection of recent research and developments in the field of neural information processing. The book is organized into three parts: (1) architectures, (2) learning algorithms, and (3) applications.

Artificial neural networks consist of simple processing elements, called neurons, connected by weighted links. The number of neurons and the way they are connected define the architecture of a particular network. Part 1 of the book comprises nine chapters demonstrating recent neural network architectures, derived either to mimic aspects of human brain function or to address real-world problems. Muresan presents a simple neural network model, based on spiking neurons that make use of shunting inhibition, which is capable of resisting small-scale changes in the stimulus. Hoshino and Zheng simulate a neural network of the auditory cortex to investigate the neural basis for encoding and perceiving vowel sounds.
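The idea that neurons connected by weights define an architecture can be illustrated with a minimal sketch of one fully connected layer. This is a generic illustration, not a model from the book; the particular weights, biases, and sigmoid activation are arbitrary assumptions chosen for the example.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_matrix, biases):
    """A fully connected layer: one neuron per row of the weight matrix."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Two inputs feeding a layer of three neurons; the 3x2 weight matrix
# encodes the connectivity, i.e., the architecture of this small network.
hidden = layer([0.5, -0.2],
               [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.6]],
               [0.0, 0.1, -0.1])
```

Stacking such layers, and varying how many neurons each contains and which outputs feed which inputs, is precisely what distinguishes one network architecture from another.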
Contents
I | 1 |
II | 19 |
III | 39 |
IV | 56 |
V | 77 |
VI | 94 |
VII | 113 |
VIII | 128 |
XIV | 256 |
XV | 278 |
XVI | 294 |
XVII | 320 |
XVIII | 334 |
XIX | 351 |
XX | 370 |
XXI | 387 |