
Computer modeled on the brain?

How the human brain and the way it works can be mimicked
by Karlheinz Meier

Around a thousand billion nerve cells are connected in the small space of our brain to form a network that some scientists consider the most complex structure in the universe. So far, little is known about how the cells of the brain process information - for example, how they learn, how they remember, or how they organize themselves into neural ensembles with specialized tasks. Our brains have a few things in common with today's most powerful computers - but the differences are greater, which makes it difficult for scientists to understand the universe in our heads - and to copy the way it works.

When Sissa ibn Dahir presented the game of chess to his ruler, the prince was so impressed by his subject's invention that he wanted to grant him a wish. What Sissa then asked for seemed modest: he wanted one grain of wheat on the first square of the chessboard and then a doubling of the gift on each square up to square 64. But his wish turned out to be impossible to fulfill: it soon became clear that the chessboard would have to hold piles of wheat grains beyond all imaginable bounds.

The "wheat grain legend" is said to have originated in India in the third or fourth century AD. Something similar is currently taking place in modern information processing, which has been developing rapidly since its inception. Surprisingly, it can be characterized in almost all aspects by the simple mathematical relationship described above: the computing power, the density of transistors and many other parameters double within a relatively short period of about two years. Mathematically, this is exponential growth, known as "Moore's Law" after Gordon Moore, a co-founder of the Intel company. In contrast to the size of a stamp collection, which may have grown linearly over the years, this is explosive growth, whose real dynamics often only become apparent once the numbers actually reached become huge - as in the wheat grain legend, where the real drama only becomes visible on the last squares of the chessboard: each further doubling makes the number already reached ever more staggering.
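The dynamics of repeated doubling are easy to make concrete; a short sketch of the wheat grain legend:

```python
# The wheat grain legend: one grain on the first square, then a
# doubling on each of the 64 squares of the chessboard.
grains_on_square = 1
total = 0
for square in range(1, 65):
    total += grains_on_square
    grains_on_square *= 2   # the gift doubles with every square

print(total)   # → 18446744073709551615, i.e. 2**64 - 1 grains in all
```

The total, about 1.8 × 10^19 grains, dwarfs any conceivable harvest - and the last square alone carries more than all 63 squares before it combined.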

How does the brain learn? In order to be able to answer this question, it is necessary to simulate neural networks of the highest complexity. The idea for this approach is not new: crucial preliminary work for the technical implementation was carried out by Gustav Robert Kirchhoff, namesake of the Heidelberg Kirchhoff Institute for Physics, as early as the middle of the 19th century.
Image: University Archives

The question is: have we reached the last squares of the chessboard in information technology? This is a much discussed topic. Modern technical developments, however, indicate that the end is not yet in sight for the foreseeable future. The topic is not only interesting for computer experts - it directly concerns all of us - because information processing takes place not only on the laptops in our offices, but also in our heads. The questions that arise in this context are fundamental: Could my computer one day be able to do more than my brain? Will I become dependent on computers? Could my computer even develop an intelligence of its own in the future? Some of us would probably already answer the first two questions with a - qualified - yes. The third question is currently the subject of science fiction novels.

Despite all the similarities, there are fundamental differences between a computer and a human being as information processors. The brain does not contain a complete set of predefined software algorithms - it has to adapt to the respective life situation in a process of self-organization, and through learning processes it can attain more or less remarkable capabilities. Our brains enable us to cope with completely new and unexpected situations - a task at which conventional computers regularly fail. The brain can also continue to work very efficiently even when it has been damaged, giving it a remarkable fault tolerance. A single faulty transistor in a microprocessor, on the other hand, can render the entire system useless. Finally, compared to a personal computer, the brain requires surprisingly little energy for its performance.


The differences between biological and electronic information processing become even more striking when one looks at the microscopic structure of the two systems. The artificial computing system designed by engineers consists of different, highly specialized units, such as various types of memory and arithmetic units. These units exchange binary data (i.e. zeros and ones) with one another, representing both information and the instructions for processing it. The computational rules for manipulating this binary data are described by the algebra devised by the mathematician George Boole in 1847. The exchange and processing of the binary data takes place strictly synchronously, at a pace set by a central clock. The speed of these central clocks has increased significantly in recent years and now reaches a few billion cycles per second - a few gigahertz (GHz). Thanks to the synchronous clock, the state of such a computer can be completely saved at any point in time and transferred to other systems. The architecture of modern computers essentially follows the von Neumann machine described by the physicist and mathematician John von Neumann around 1945.
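The rules of Boole's algebra, which a processor's logic gates implement in hardware, can be sketched in a few lines (a toy illustration, not how any real processor is programmed):

```python
# Boolean algebra on single bits, as realized by a processor's logic gates.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return a ^ 1

# A half adder - the elementary building block of binary arithmetic -
# is nothing but a combination of such gates.
def half_adder(a, b):
    return a ^ b, a & b   # (sum bit, carry bit)

print(half_adder(1, 1))   # → (0, 1): one plus one is binary 10
```

Everything a von Neumann machine computes is ultimately reducible to cascades of such bit operations, executed in lockstep with the central clock.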



What is the brain's concept for processing information? The complexity of information processing - that much is certain - is based less on high speed than on large numbers. Computers also generate complexity from an extremely large number of networked transmission elements. For example, very large computers made up of tens of thousands of individual computers predict the weather or analyze economic systems.

The microscopic structure of the human brain differs significantly from the computer in both its architecture and its dynamics. For the cerebral cortex, the "cortex" of humans, spatial maps have long been available which show, for example, where certain sensory impressions or motor functions are processed. Despite this functional specialization, the cortex, viewed microscopically, shows an astonishing spatial uniformity - this alone distinguishes it from a microprocessor with its clearly distinguishable units. The roughly 10^12 (1,000 billion) nerve cells (neurons) of the brain are linked together by about 10^15 (a million billion) synaptic connections. The spatial distance between the nerve cells is bridged by a network of axons (outgoing connections) and dendrites (incoming connections). The cortex is a few millimeters thick and has six layers throughout, each with characteristic cell types and connecting structures.

It is noteworthy that communication between the nerve cells - just as in a computer - takes place at least partially with the help of standardized electrical pulses. These so-called action potentials or "spikes" correspond to relatively low electrical voltages (around 0.1 volts) and are very slow compared to their electronic counterparts: switching such a pulse on and off takes a thousandth of a second, and a typical nerve cell does not transmit much more often than about ten times per second. At first glance, the standardized pulses suggest a direct comparison with the computer - but there is one key difference: in the biological system, the possible point in time for emitting an action potential is not specified by a central clock. In other words, the neural network is asynchronous.

Gordon Moore, a co-founder of the American company Intel, gave "Moore's Law" its name. It states that the computing power, the density of transistors and many other parameters of modern information processing double within a relatively short period of about two years.

But where does the nerve cell get the order to "fire"? This command comes from the network itself! A nerve cell collects signals from up to 10,000 neurons via its widely branched dendritic tree, with the synapses acting as intelligent switching points between the nerve cells. The electrical signals then try to charge a "capacitor" - the membrane of the nerve cell. However, there is plenty of competition: some synapses contribute electrical charge (excitatory synapses), while others draw charge away (inhibitory synapses). The nerve cell's capacitor is also somewhat "leaky" and loses charge on its own (leakage currents). Despite this competition for the charge in the nerve cell, the electrical voltage of the membrane will at some point reach a magical value: the "firing threshold" for sending out an action potential. The electrical unit pulse is sent into the network via the axon, and then the process of charging the membrane begins anew.
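The charging-and-firing cycle described above is what neuroscientists call a leaky integrate-and-fire model. A minimal sketch, with illustrative values for threshold, leak and input (not the parameters of any real neuron or chip):

```python
# Minimal leaky integrate-and-fire neuron: the membrane is a leaky
# capacitor that charges from synaptic input and fires at a threshold.
def simulate(inputs, threshold=1.0, leak=0.1, dt=1.0):
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (current - leak * v)   # incoming charge minus leakage
        if v >= threshold:               # firing threshold reached
            spikes.append(t)
            v = 0.0                      # reset; charging begins anew
    return spikes

# Constant excitatory input makes the neuron fire in a regular rhythm.
print(simulate([0.3] * 20))   # → [3, 7, 15, 11, 19][::1] sorted spike times
```

With constant excitatory input, the membrane charges, leaks, crosses the threshold and resets at fixed intervals - here every four time steps.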

2,500 gigabytes per second

It is obvious that such a network of around 1,000 billion nerve cells cannot simply be described by an analytical mathematical model. The situation is even more hopeless than trying to describe the movement of all the particles in a balloon. While a balloon with its roughly 10^23 particles is in a thermal equilibrium, in which simple global variables such as temperature and pressure can be linked by simple equations, a comparable approach is hardly possible for the neural network. 1,000 billion neurons with a firing rate of one hertz generate an amount of information of 2,500 gigabytes per second, if an information requirement of only 20 bits is assumed for describing the place and time of an action potential. This information is in some way responsible for what we call the amazing power of the brain. We will hardly be able to answer the question of the brain's functional principle with simple global variables.
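The figure of 2,500 gigabytes per second follows directly from the stated assumptions; a quick check of the arithmetic:

```python
# The arithmetic behind "2,500 gigabytes per second".
neurons = 1e12          # ~1,000 billion nerve cells
rate_hz = 1.0           # one action potential per neuron per second
bits_per_spike = 20     # place and time of each action potential
bytes_per_s = neurons * rate_hz * bits_per_spike / 8   # 8 bits per byte
print(bytes_per_s / 1e9)   # → 2500.0 gigabytes per second
```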

The complexity of information processing in the brain is therefore probably based on large numbers - and not on high speed. Systems that generate complexity from an extremely large number of networked transmission elements are also referred to as massively parallel. Parallelism is a current and rapidly growing field of work in modern information technology. Linking computers into very large clusters of tens of thousands of individual machines makes it possible today to analyze the weather, combustion processes, or economic and social systems. In the competition for the most powerful cluster in the world, the "TOP 500 list" is compiled regularly; it is currently topped by an IBM BlueGene/L computer at the American Lawrence Livermore Laboratory with 131,072 processors. It makes sense to use such systems to simulate and better understand neural networks, and this is currently being done in a number of research projects. The best known is probably the BlueBrain project that the neuroscientist Henry Markram has just started at the Ecole Polytechnique Fédérale de Lausanne, where eight thousand IBM processors are available to him for simulating large networks.

Obviously, even a hundred thousand processors are still not enough compared to the 10^12 nerve cells in the cortex. With the simple equation "one neuron = one processor", it will not be possible to simulate networks of truly biological complexity anytime soon - this would require ten million times more computer nodes in the network. Of course, a modern processor can simulate many nerve cells at the same time, i.e., represent a sub-network to a certain extent. However, this necessarily slows down the simulation. Simulating networks of extremely high complexity is indeed possible - but it runs considerably more slowly than the processes of the biological model. This makes it difficult to study learning processes, because they are already slow in the biological system: learning takes hours, if not days or months.
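The gap between the largest clusters and the cortex can be made concrete with the numbers from the text:

```python
# The gap between a 100,000-processor cluster and the cortex.
neurons = 1e12                 # nerve cells in the cortex
processors = 1e5               # a top cluster of the day
neurons_per_processor = neurons / processors
print(neurons_per_processor)   # → 10000000.0: each processor would have
                               #   to stand in for ten million neurons
```

Time-multiplexing ten million neurons onto one processor is what makes such simulations run far slower than biological real time.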

The scientists at the Heidelberg Kirchhoff Institute for Physics are currently developing a neurocomputer with half a million nerve cells and a billion connections (synapses).

A capacitor as a model

Our working group at the Heidelberg Kirchhoff Institute for Physics has for several years been pursuing an alternative approach to simulating neural networks of the highest complexity. The principle: if the function of a nerve cell can be compared to an electrical capacitor, then this capacitor need not merely be simulated with the help of a computer - a real capacitor can be used right away. So it is about building electronic circuits that work like biological cells and networks. The idea behind this approach is old: it goes back to the physicist Carver Mead, a student of the American Nobel Prize winner Richard Feynman, who built such systems at the California Institute of Technology (Caltech) in the 1980s. The principles governing the transport of electrical charges are even older: they were formulated in the middle of the 19th century by the namesake of the Heidelberg institute, the physicist Gustav Kirchhoff, in the form of the well-known Kirchhoff rules.

In the simplest model, a nerve cell is an electrical capacitor with a resistor connected in parallel. The resting voltage is defined by a battery, the excitatory and inhibitory synapses are currents flowing in and out, and the firing threshold is implemented by a modern electronic circuit, a so-called comparator. These elements do not have to be soldered together from individual electronic components: modern microelectronics makes it possible to create a neural network "from one piece", in a process that is also used to produce the chips in telephones and washing machines. This technology is known as "Very Large Scale Integration"; "large scale" here refers to the number of components per chip area - the most important property for producing complex neural circuits.

So far, in designing an "equivalent circuit", we have only spoken of nerve cells. But the contact points, the synapses, are at least as important - for two reasons. First, there are many thousands of times more synapses than nerve cells, so a seemingly banal consideration like space requirements suddenly becomes the determining factor of feasibility. Second, the synapses are at least partially responsible for the fact that neural networks organize themselves independently - an ability neuroscientists call "plasticity". For the "Heidelberg neurochip", plasticity is the most important challenge. We implement both the nerve cells and the synapses as analog circuits. In a way, this is a return to the old idea of the analog computer - an appealing idea, because the biological model also works in an analog fashion.

The "Heidelberg Network Chip": 5x5 mm in size, with 384 artificial nerve cells and 100,000 synapses

However, one element is still missing: the action potential - the actual information in the network. Of course, the action potentials in the biological system are also analog voltage curves, but they are all identical to very good accuracy. Action potentials in the biological network really carry only two pieces of information: "where from?" and "when?". This where-and-when corresponds exactly to the electrical principle in our familiar computer networks: there, too, all pulses are the same - only the pattern counts. For this reason we decided to combine analog and digital electronics in the neurochip: the local information processing in the cells (synapses and neurons) is analog, while the communication via action potentials uses conventional digital electronics. The advantage of this approach is obvious: by using mature switching technology for transmitting digital data at the highest speeds, scalable networks can be built up to biologically relevant orders of magnitude.
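Because a spike carries only "where from?" and "when?", the digital communication between chips can be reduced to simple address-event pairs. A hypothetical minimal encoding - the actual packet format of the Heidelberg chip is not described in the text:

```python
# A spike reduced to its two pieces of information: source address
# ("where from?") and timestamp ("when?").
# Illustrative encoding, not the chip's real packet format.
def encode(neuron_id, timestamp):
    return (neuron_id << 32) | timestamp    # pack both into one integer

def decode(event):
    return event >> 32, event & 0xFFFFFFFF  # unpack address and time

event = encode(neuron_id=384, timestamp=1024)
print(decode(event))   # → (384, 1024)
```

Since every pulse is identical, transmitting such where-and-when pairs loses no information - only the pattern counts.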

The synthetic neural circuits described here have a number of remarkable properties. The capacitors used in the nerve cell model are extremely small - much smaller than commercially available individual components. Small capacitors charge and discharge very quickly: compared to the biological model, the Heidelberg network chips run around a hundred thousand times faster. For examining brain processes such as plasticity and learning, this property is extremely important, because the neural dynamics of a day can be compressed into a second on the chip. In this way, the function of networks with many varied parameters can be investigated systematically. Another important property is energy consumption. In contrast to solving a mathematical differential equation, charging a capacitor requires little energy. A neural network model that is largely analog is therefore just as energy-efficient as its biological model.
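The factor of one hundred thousand translates directly into compressed experiment times; a quick sanity check of the claim that a day fits into about a second:

```python
# A hundred-thousand-fold speedup compresses a day of biological
# neural dynamics into about a second of chip time.
speedup = 1e5              # chip runs ~100,000x faster than biology
day_s = 24 * 3600          # one day = 86,400 seconds of biological time
print(day_s / speedup)     # → 0.864 seconds on the chip
```

By the same scaling, months of biological learning shrink to minutes of chip time.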

Artificial nerve cells

All of this work has to be carried out in an interdisciplinary manner: physicists "inherently" know little about neurobiology, computer scientists are rather unfamiliar with analog microelectronics, and neuroscientists usually have little to do with building large electronic systems. The European Union has therefore launched the "Future Emerging Technologies" funding program, in which fundamentally new principles of information processing are to be developed in an interdisciplinary way. "Quantum information" and "biologically inspired systems" are two examples of funded areas in which Heidelberg groups are involved. In this context, the working group of the Kirchhoff Institute coordinates the so-called FACETS project, where FACETS stands for "Fast Analog Computing with Emergent Transient States". Fifteen European groups from the fields of neurobiology, computer science, physics and electrical engineering are funded in this project with around ten million euros over four years. More information can be found on the Internet.

What exactly is this project about? The first task is to build a synthetic neural network based on the electronic principle described. In the first stage of development, many individual network chips, five by five millimeters each, are combined into a "neurocomputer" which, with around one hundred thousand artificial nerve cells and 25 million synapses, corresponds to a volume of the human cortex of around one cubic millimeter. This system is built into mechanical frames whose appearance is still very reminiscent of conventional computers. The artificial nerve cells in this system, however, already closely resemble the biological model and master various mechanisms of plasticity. A prototype chip is already functional and is helping us carry out preparatory experiments: for example, plasticity and the characteristic behavior of the neuronal membrane voltage have already been demonstrated experimentally.

In the second stage of development, the common approach in computers of interconnecting individual microchips is to be abandoned in favor of a large, almost uniform silicon substrate. A twenty centimeter silicon wafer will house around half a million artificial nerve cells and a billion synapses, which will be built with the help of 180 nanometer structures (transistors). An electronic circuit is mounted above this silicon wafer, which is used to configure the network, analyze the activity of the network and establish long-range connections in the network. In principle, one hundred or more such systems can create artificial networks with ten million neurons and 100 billion synapses. This would correspond to a thousand cubic millimeters of cortex and would make up around ten percent of the region of the human cortex that is responsible for part of human visual performance. Such technical and physical development work is possible because we are in constant scientific exchange with colleagues from neurobiology, which enables us to implement the latest neurobiological results directly in electronic circuits.

As a result, we hope to gain new knowledge in two quite different areas of work. On the one hand, the functional principles of neural information processing are to be investigated experimentally and compared with biological results. New insights could arise above all from the novel possibility of tracking the dynamics of complex neural circuits over large time scales: the biological time scale from milliseconds to years can be compressed to nanoseconds to minutes by the new electronics.

This offers an approach for examining processes of self-organization and learning. The second project objective is of particular interest to the physicist: the development of new architectures for processing information in which properties such as fault tolerance and energy efficiency are built in "by design". For the future use of new components, such as molecular switches or carbon nanotubes, such concepts may become very important - and computers modeled on the human brain may one day be possible after all.

Karlheinz Meier is Professor of Experimental Physics at the Faculty of Physics and Astronomy at Heidelberg University. He is the founder of the Kirchhoff Institute for Physics and the Heidelberg ASIC Laboratory for Microelectronics. The work in the field of biologically inspired information processing resulted from the development of equipment in elementary particle physics, in which large microelectronic systems for information processing are also developed and built.








[email protected], phone: 0 62 21/54 98 31