In some senses, the fundamental question in cognition is not how neurons in the brain manipulate information, but why the ones in the rest of the body don’t, because the nerve fibres in both areas are functionally identical.
Neurons perform one physiologically simple task: they conduct electricity. They are the wires in the circuits of the body. This is frequently and disastrously misunderstood in discussion of neural coding, an error that Crick described as the myth of “the overwise neuron” (Crick, 1979).
THE PROPERTIES OF A NEURON
In a neural pathway, a neuron does not read out, receive, or manipulate information, nor make decisions about it in any way, because it does not have the equipment to do any of that in the observable world we know.
Functionally, a neuron is not a computer, a brain, a processor, or even a switch: it is a connector. In a circuit it would be a piece of wire; in a computer, a trace on a printed circuit board. There is no evidence (please, provide some, make my day) that neurons can perform any form of manipulation of information, but in cognitive science there is an evangelistic insistence that they somehow must. The reason for this is not empirical evidence, but the need for the theories that depend on this mistake to survive.
NEURONS TRANSMIT ELECTRICITY, NOT INFORMATION
When a neuron receives an electrical charge, it can pass it onwards to another neuron. This is a simple transmission of energy. The downstream neuron does not receive the information that a charge has been received any more than a lightbulb receives information that it is being switched on, or than a ball receives information that it's being kicked. It's not information that's being passed on, it's energy.
INFORMATION IS EXTRINSIC TO CODE
Electrical impulses, if they are part of a coding system, may be used to refer to information, but they do not contain it. The letters that you're looking at are a code - the information isn't contained in the words on your screen. The letters are a reference to information extrinsic to them. Consider that if you didn't speak English, no information would be conveyed - the difference in information there is you, not these letters.
To put it another way, I could arrange some stones to spell the word "HELP". The stones themselves are physically identical when they're in a pile or arranged as a word. They don't suddenly change composition when arranged. They refer to information that you already know: namely, what "HELP" means.
THE MAGIC OF THE NEURONS
In the animal body, neurons are the stones. They do not contain, transmit, process, manipulate or juggle information in any way. (This is easily falsifiable, of course.)
Despite this catastrophic lack of evidence, neural coding theories rely entirely on the idea that when in the brain, neurons suddenly become able to transmit information. Not only that, but they have byzantine abilities far beyond what their physiology allows, or indeed what is physically possible.
When I say "physically possible", I mean that there is no known process by which it could happen. Of course, there could be new processes at work that science hasn't discovered yet, but science cannot deal in non-existent phenomena. In future there could be aliens, time travel or immortality pills, but they can't be allowed into science. We must deal with evidence - evidence that exists now.
This disregard for basic reality, and the insistence that neural activity must somehow be a code, have led investigators down ever-more insane alleyways trying to make theories fit.
In some theories, not only do neurons receive information, they interpret it!
“This decoding is analogous to the task a neuron downstream might perform when “reading out” the spike trains that are its inputs.” (DeCharms & Zador, 2000).
Can you, with several billion more neurons than this lonely downstream neuron, decode a spike train? Could a fly? A fly's brain, at around 140,000 neurons, has 140,000 times the equipment of that single cell.
Other models (e.g., Georgopoulos et al., 1986) suggest that the activity of sensory neurons is "measured" (by what, and using what impossible process, is left to the imagination) and an average of the signals is taken. Not only that, but characteristics of their firing are analysed too. The average is taken by "asking" each neuron the likelihood that its firing was a response to a particular stimulus, and then combining all the answers. Each neuron's unique portion of the likelihood is the product of its firing rate and the logarithm of its tuning-curve response. The output neuron with the highest activity is then determined by a winner-takes-all rule, and compared to a known likelihood of the identity of the thing it's responding to.
In this model, then, individual neurons are able to compute, and to do what 100 billion neurons - in the form of a human brain - have never accomplished without significant equipment and extra processing capacity. A single neuron in this model is able to: detect its own rate of firing; detect its own tuning curve; calculate the logarithm (can you do that without at least a pencil?) of that tuning curve; multiply its own firing rate by that logarithm; and all with no substantially different physiology from that of a neuron in your elbow.
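To make plain just how much arithmetic this model loads onto nervous tissue, here is the decoding scheme written out as a sketch in Python. Every number here - the tuning curves, the observed firing rates, the stimulus labels - is invented purely for illustration; the point is the sheer quantity of bookkeeping being attributed to cells.

```python
import math

# Hypothetical tuning curves: each neuron's expected firing rate (spikes/s)
# for each of three candidate stimuli. All figures are invented.
tuning = {
    "neuron_A": {"stim_1": 50.0, "stim_2": 10.0, "stim_3": 2.0},
    "neuron_B": {"stim_1": 5.0,  "stim_2": 40.0, "stim_3": 8.0},
    "neuron_C": {"stim_1": 3.0,  "stim_2": 6.0,  "stim_3": 30.0},
}

# Observed firing rates on a single trial (again, invented).
observed = {"neuron_A": 45.0, "neuron_B": 12.0, "neuron_C": 4.0}

def log_likelihood(stimulus):
    # Each neuron's contribution: its firing rate multiplied by the
    # logarithm of its tuning-curve response, summed over the population.
    return sum(observed[n] * math.log(tuning[n][stimulus]) for n in tuning)

scores = {s: log_likelihood(s) for s in ["stim_1", "stim_2", "stim_3"]}
winner = max(scores, key=scores.get)  # winner-takes-all readout
print(winner)  # the stimulus the model says the population "decoded"
```

Note that even this toy version needs a stored table of tuning curves, a logarithm routine, a multiplier, an accumulator and a comparator - none of which a neuron possesses.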
As with many other cases of neural coding, computing the output figure would require multiple complex processes, and these would need a computational mechanism within the neuron: it would have to detect, and then somehow store, its own firing rate. It would also require the ability to measure time - an internal clock. It would require circuitry of some sort to carry that information from one part of the neuron to an internal "brain" of some kind. Furthermore, it would require a means of expressing the product that the neuron simply does not have. The output of any neuron is either on or off (Adrian & Zotterman, 1926) - one or zero - not a changeable quantity, nor an analogue of one.
Another theory that endows its neurons with extraordinary powers is the rate-coding hypothesis (Adrian & Zotterman, 1926; Britten et al., 1992; Rolls & Tovee, 1995). In this paradigm, stimuli are encoded as a spike train whose mean firing rate is reduced to a single value.
This would mean that a single downstream neuron would have to: have some means of recording the number of inbound action potentials; record that they were separate events and not one accumulated charge; record over what time period they occurred; and compare that number of action potentials to the timed period to get a rate. Furthermore, it would have to know over what period to record those firings, because presumably a shorter recording time would produce a less accurate mean.
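Written out as code, with invented spike times, the bookkeeping the rate-coding story demands of one cell looks like this - counting discrete events, timing a window, and dividing one by the other:

```python
# Invented spike arrival times in seconds, within a recording window.
spike_times = [0.012, 0.045, 0.078, 0.110, 0.160, 0.190]

window_start = 0.0
window_end = 0.2  # the cell must somehow "know" this window length

# Step 1: count the action potentials as separate events,
# not as one accumulated charge.
spike_count = sum(1 for t in spike_times if window_start <= t < window_end)

# Step 2: measure the elapsed time (this requires a clock).
duration = window_end - window_start

# Step 3: divide the count by the duration to get a mean rate.
mean_rate = spike_count / duration
print(mean_rate)  # spikes per second
```

Trivial on a machine built from counters, clocks and dividers; the neuron has none of these.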
In the independent coding hypothesis (Georgopoulos et al., 1986), although sensory stimuli tend to correlate with the firing of large populations of neurons, each individual neuron carries unique information independently of any other, and again "votes" for the movement direction to which it is tuned. We should remember that this calculation - as with the other theories - would tax the circuitry, equipment, and processing ability of a human being, yet it is posited to be happening in a single cell.
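For concreteness, here is a sketch (in Python, with invented preferred directions and firing rates) of the population "vote" this hypothesis describes - the kind of vector arithmetic being attributed to nervous tissue:

```python
import math

# An invented population: each neuron has a preferred movement direction
# (in radians) and a firing rate on this trial. Figures are illustrative only.
population = [
    {"preferred": 0.0,             "rate": 40.0},  # prefers 0 degrees
    {"preferred": math.pi / 2,     "rate": 25.0},  # prefers 90 degrees
    {"preferred": math.pi,         "rate": 5.0},   # prefers 180 degrees
    {"preferred": 3 * math.pi / 2, "rate": 10.0},  # prefers 270 degrees
]

# Each neuron "votes" for its preferred direction, weighted by its rate;
# the votes are summed as vectors and the resultant angle is the estimate.
x = sum(n["rate"] * math.cos(n["preferred"]) for n in population)
y = sum(n["rate"] * math.sin(n["preferred"]) for n in population)
decoded_direction = math.degrees(math.atan2(y, x)) % 360
print(round(decoded_direction, 1))  # decoded direction in degrees
```

Trigonometry, weighted sums and an arctangent: an experimenter's computer can do this with the recorded data, but nothing in a neuron's physiology can.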
Consider the model of how neurons recognise stimuli in the outside world suggested by Jazayeri & Movshon (2006). In this very typical model, neurons compute the likelihood of the identity of an object by choosing from a bunch of likely alternatives on the basis of probability.
Can you compute probability? If you hear, for example, a crying sound in the night, could you compute the probability that it's an owl rather than a baby? Could you do that without any equipment? While we're at it, could you do it with equipment?
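For the record, here is what "computing the probability" involves even in its simplest textbook form - a Bayesian update over two hypotheses. Every prior and likelihood figure below is invented for the example; a real listener in the night has none of these numbers to hand:

```python
# Invented priors and likelihoods for the owl-vs-baby example.
prior = {"owl": 0.7, "baby": 0.3}        # how common each source is assumed to be
likelihood = {"owl": 0.2, "baby": 0.9}   # P(this crying sound | source)

# Bayes' rule: posterior is proportional to prior times likelihood,
# normalised over the competing hypotheses.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}
print(posterior)
```

Even this two-line calculation presupposes stored priors, stored likelihoods, multiplication and normalisation - equipment we have no evidence any cell, or indeed any unaided person, possesses.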
NEURONAL ABILITIES ARE ASSUMED, NOT OBSERVED