Circuits of the Mind

In this groundbreaking work, computer scientist Leslie G. Valiant details a promising new computational approach to studying the intricate workings of the human brain. Focusing on the brain's enigmatic ability to quickly access a massive store of accumulated information during reasoning, the author asks how such feats are possible given the extreme constraints imposed by the brain's finite number of neurons, their limited speed of communication, and their restricted interconnectivity. Valiant proposes a "neuroidal model" as a vehicle for exploring these questions. While embracing the now-classical theories of McCulloch and Pitts, the neuroidal model also accommodates state information in the neurons, more flexible timing mechanisms, a variety of assumptions about interconnectivity, and the possibility that different areas perform different functions. Because the model is programmable, a wide range of algorithmic theories can be described and evaluated within it: it provides a concrete computational language and a unified framework in which diverse cognitive phenomena--such as memory, learning, and reasoning--can be systematically and concurrently analyzed. Requiring no specialized knowledge, Circuits of the Mind offers an exciting new approach to brain science for students and researchers in computer science, neurobiology, neuroscience, artificial intelligence, and cognitive science.
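The description above notes that the neuroidal model embraces McCulloch–Pitts threshold units while adding per-neuron state. A minimal sketch of that idea, with illustrative names and an assumed state-transition rule (not the book's exact formalism):

```python
def threshold_fire(weights, inputs, threshold):
    """Classic McCulloch-Pitts rule: the unit fires exactly when the
    weighted sum of its inputs reaches the threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold


class Neuroid:
    """A threshold unit that also carries a discrete state, so its future
    behavior can depend on its history -- the key extension over a plain
    McCulloch-Pitts unit. The state names and transition below are
    hypothetical, chosen only to illustrate the mechanism."""

    def __init__(self, weights, threshold, state="quiescent"):
        self.weights = weights
        self.threshold = threshold
        self.state = state

    def step(self, inputs):
        fired = threshold_fire(self.weights, inputs, self.threshold)
        # Illustrative transition: remember that the unit has fired once.
        if fired and self.state == "quiescent":
            self.state = "committed"
        return fired
```

For example, a unit with weights `[1, 1]` and threshold `2` fires only when both inputs are active, and its state records that the firing occurred; algorithms in the model can then make later updates (to weights, thresholds, or state) conditional on such history.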
Contents
The Approach | 1
Biological Constraints | 9
Computational Laws | 27
Copyright
(16 other sections not shown)