Why does consciousness matter?

Daniel C. Dennett considers all of these to be necessary conditions of moral personhood. This explains why consciousness does not matter for this position: it is not plausible to claim that current robots matter morally for their own sake as long as they lack characteristics such as sentience or consciousness.

This may change in the future, however. There is already an interesting and controversial discussion going on about ascribing legal personhood to robots (Bryson et al.). For the debate on the moral and legal status of robots, but also for the broader question of how to respond to and interact with machines, a better understanding of artificial consciousness, artificial rationality, artificial sentience, and similar concepts is needed. We need to talk more about artificial consciousness and about the lack of consciousness in current AI and robots.

In this context, focusing on third-person definitions of artificial consciousness and access consciousness will prove particularly helpful.

The seat of consciousness is found in the gray matter making up the celebrated cerebral cortex, the outer surface of the brain. It is a laminated sheet of intricately interconnected nervous tissue, about the size and width of a pizza.

Two of these sheets, highly folded, along with their hundreds of millions of wires—the white matter—are crammed into the skull. All available evidence implicates neocortical tissue in generating feelings. We can narrow down the seat of consciousness even further. Take, for example, experiments in which different stimuli are presented to the right and the left eyes. Suppose a picture of Donald Trump is visible only to your left eye and one of Hillary Clinton only to your right eye.

We might imagine that you would see some weird superposition of Trump and Clinton. In reality, you will see Trump for a few seconds, after which he will disappear and Clinton will appear, after which she will go away and Trump will reappear. The two images will alternate in a never-ending dance because of what neuroscientists call binocular rivalry.

Because your brain is getting an ambiguous input, it cannot decide: Is it Trump, or is it Clinton? If, at the same time, you are lying inside a magnetic scanner that registers brain activity, experimenters will find that a broad set of cortical regions, collectively known as the posterior hot zone, is active. These are the parietal, occipital and temporal regions in the posterior part of cortex that play the most significant role in tracking what we see.

Curiously, the primary visual cortex, which receives and passes on the information streaming up from the eyes, does not signal what the subject sees. A similar division of labor appears to hold for sound and touch: the primary auditory and primary somatosensory cortices do not directly contribute to the content of auditory or somatosensory experience.

Instead it is the next stages of processing—in the posterior hot zone—that give rise to conscious perception, including the image of Trump or Clinton. More illuminating are two clinical sources of causal evidence: electrical stimulation of cortical tissue and the study of patients following the loss of specific regions caused by injury or disease.

Stimulating the posterior hot zone can trigger a diversity of distinct sensations and feelings. These could be flashes of light, geometric shapes, distortions of faces, auditory or visual hallucinations, a feeling of familiarity or unreality, the urge to move a specific limb, and so on.

Stimulating the front of the cortex is a different matter: by and large, it elicits no direct experience. A second source of insight comes from neurological patients treated in the first half of the 20th century. Surgeons sometimes had to excise a large belt of prefrontal cortex to remove tumors or to ameliorate epileptic seizures. What is remarkable is how unremarkable these patients appeared.

The loss of a portion of the frontal lobe did have certain deleterious effects: the patients developed a lack of inhibition of inappropriate emotions or actions, motor deficits, or uncontrollable repetition of specific actions or words.

Following the operation, however, their personality and IQ improved, and they went on to live for many more years, with no evidence that the drastic removal of frontal tissue significantly affected their conscious experience.

Conversely, removal of even small regions of the posterior cortex, where the hot zone resides, can lead to a loss of entire classes of conscious content: patients are unable to recognize faces or to see motion, color or space. So it appears that the sights, sounds and other sensations of life as we experience it are generated by regions within the posterior cortex. As far as we can tell, almost all conscious experiences have their origin there.

What is the crucial difference between these posterior regions and much of the prefrontal cortex, which does not directly contribute to subjective content? The truth is that we do not know. Even so—and excitingly—a recent finding indicates that neuroscientists may be getting closer. An unmet clinical need exists for a device that reliably detects the presence or absence of consciousness in impaired or incapacitated individuals. During surgery, for example, patients are anesthetized to keep them immobile and their blood pressure stable and to eliminate pain and traumatic memories.

Unfortunately, this goal is not always met: every year hundreds of patients have some awareness under anesthesia. Another category of patients, who have severe brain injury because of accidents, infections or extreme intoxication, may live for years without being able to speak or respond to verbal requests.

Establishing that they experience life is a grave challenge to the clinical arts. They are like someone marooned far from home whose damaged radio does not relay his voice, so that he appears lost to the world. This is the forlorn situation of patients whose damaged brains will not let them communicate with the world—an extreme form of solitary confinement. Giulio Tononi of the University of Wisconsin–Madison and Marcello Massimini, now at the University of Milan in Italy, pioneered a technique, called zap and zip, to probe whether someone is conscious or not.

In these experiments, a pulse of magnetic energy delivered through the skull (the zap) induced a brief wave of electrical activity in the cortex below. A network of electroencephalogram (EEG) sensors, positioned outside the skull, recorded these electrical signals. As they unfolded over time, these traces, each corresponding to a specific location in the brain below the skull, yielded a movie.

These unfolding records neither followed a stereotypical pattern nor were completely random. The zip refers to the next step: the recorded responses are compressed, much as software compresses a computer file, to estimate their complexity. Remarkably, the more predictable and compressible these waxing and waning rhythms were, the more likely the brain was unconscious. Massimini and Tononi tested this zap-and-zip measure on 48 patients who were brain-injured but responsive and awake, finding that in every case, the method confirmed the behavioral evidence for consciousness.
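
To make the zip step concrete, here is a minimal sketch in Python of the underlying idea: binarize the recorded response and ask how well it compresses, using a simple Lempel-Ziv-style phrase count as a stand-in for a real compression algorithm. The function names, the normalization by a shuffled copy, and the toy signals are illustrative choices made for this sketch; the published perturbational complexity index involves source modeling, statistical thresholding, and a more careful normalization that are not shown here.

import numpy as np

def lz_phrase_count(bits: str) -> int:
    # Count distinct phrases found while scanning left to right with a growing
    # dictionary; fewer phrases means the sequence is more compressible.
    phrases, phrase = set(), ""
    for b in bits:
        phrase += b
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    if phrase:
        phrases.add(phrase)
    return len(phrases)

def toy_complexity(response):
    # Binarize each channel of a (channels x time) array around its median,
    # flatten to one bit string, and normalize the phrase count by that of a
    # shuffled copy so the score is comparable across recording lengths.
    binary = (response > np.median(response, axis=1, keepdims=True)).astype(int)
    bits = "".join(map(str, binary.ravel()))
    shuffled = "".join(np.random.default_rng(0).permutation(list(bits)))
    return lz_phrase_count(bits) / lz_phrase_count(shuffled)

# Toy comparison: a diverse multichannel response should score higher than a
# slow, stereotyped one that is nearly identical across channels.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
diverse = np.sin(2 * np.pi * np.outer(rng.uniform(5, 40, 8), t)) + 0.5 * rng.standard_normal((8, 500))
stereotyped = np.tile(np.sin(2 * np.pi * 2 * t), (8, 1)) + 0.05 * rng.standard_normal((8, 500))
print(toy_complexity(diverse), toy_complexity(stereotyped))

In this toy version a highly regular response compresses well and scores low, while a varied, structured response scores closer to one; the shuffled copy serves only as a rough upper bound on the phrase count.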

The team then applied zap and zip to 81 patients who were minimally conscious or in a vegetative state. For the former group, which showed some signs of nonreflexive behavior, the method correctly found 36 out of 38 patients to be conscious. It misdiagnosed two patients as unconscious. Of the 43 vegetative-state patients in whom all bedside attempts to establish communication failed, 34 were labeled as unconscious, but nine were not. Their brains responded similarly to those of conscious controls—implying that they were conscious yet unable to communicate with their loved ones.

Ongoing studies seek to standardize and improve zap and zip for neurological patients and to extend it to psychiatric and pediatric patients.

Why do physical processes in the brain give rise to subjective experience at all? Following the philosopher David Chalmers, we call this the hard problem of consciousness. But perhaps consciousness is not uniquely troublesome. Going back to Gottfried Leibniz and Immanuel Kant, philosophers of science have struggled with a lesser-known, but equally hard, problem of matter.

What is physical matter in and of itself, behind the mathematical structure described by physics? On the surface, this problem and the hard problem of consciousness seem entirely separate, but a closer look reveals that they might be deeply connected.

Consciousness is a multifaceted phenomenon, but subjective experience is its most puzzling aspect. Our brains do not merely seem to gather and process information.

They do not merely undergo biochemical processes. Rather, they create a vivid series of feelings and experiences, such as seeing red, feeling hungry, or being baffled about philosophy. Our own consciousness involves a complex array of sensations, emotions, desires, and thoughts. But, in principle, conscious experiences may be very simple. An animal that feels an immediate pain or an instinctive urge or desire, even without reflecting on it, would also be conscious.

Our own consciousness is also usually consciousness of something—it involves awareness or contemplation of things in the world, abstract ideas, or the self. But someone who is dreaming an incoherent dream or hallucinating wildly would still be conscious in the sense of having some kind of subjective experience, even though they are not conscious of anything in particular.

Where does consciousness—in this most general sense—come from? Modern science has given us good reason to believe that our consciousness is rooted in the physics and chemistry of the brain, as opposed to anything immaterial or transcendental.

In order to get a conscious system, all we need is physical matter. Put it together in the right way, as in the brain, and consciousness will appear. But how and why can consciousness result merely from putting together non-conscious matter in certain complex ways? This problem is distinctively hard because its solution cannot be determined by means of experiment and observation alone. Through increasingly sophisticated experiments and advanced neuroimaging technology, neuroscience is giving us better and better maps of what kinds of conscious experiences depend on what kinds of physical brain states.

But in all such theories, the hard problem remains. How and why does a system that integrates information, broadcasts a message, or oscillates at 40 hertz feel pain or delight?

The appearance of consciousness from mere physical complexity seems equally mysterious no matter what precise form the complexity takes.

Nor would it seem to help to discover the concrete biochemical, and ultimately physical, details that underlie this complexity. No matter how precisely we could specify the mechanisms underlying, for example, the perception and recognition of tomatoes, we could still ask: Why is this process accompanied by the subjective experience of red, or any experience at all?

Compare this with other unsolved scientific problems. In principle, we can see that understanding them is fundamentally a matter of gathering more physical detail: building better telescopes and other instruments, designing better experiments, or noticing new laws and patterns in the data we already have. If we were somehow granted knowledge of every physical detail and pattern in the universe, we would not expect these problems to persist. They would dissolve in the same way the problem of heritability dissolved upon the discovery of the physical details of DNA.

But the hard problem of consciousness would seem to persist even given knowledge of every imaginable kind of physical detail.

In this way, the deep nature of consciousness appears to lie beyond scientific reach. We take it for granted, however, that physics can in principle tell us everything there is to know about the nature of physical matter.

Physics tells us that matter is made of particles and fields, which have properties such as mass, charge, and spin. Physics may not yet have discovered all the fundamental properties of matter, but it is getting closer. Yet there is reason to believe that there must be more to matter than what physics tells us. Broadly speaking, physics tells us what fundamental particles do or how they relate to other things, but nothing about how they are in themselves, independently of other things.

Charge, for example, is the property of repelling other particles with the same charge and attracting particles with the opposite charge. In other words, charge is a way of relating to other particles. Similarly, mass is the property of responding to applied forces and of gravitationally attracting other particles with mass, which might in turn be described as curving spacetime or interacting with the Higgs field. These are also things that particles do or ways of relating to other particles and to spacetime.
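
To see the relational point in symbols, consider the familiar textbook force laws in which charge and mass appear (standard Coulomb and Newtonian forms, reproduced here only as an illustration, with k_e and G the usual constants):

F_{\text{Coulomb}} = k_e \, \frac{q_1 q_2}{r^2}, \qquad F_{\text{gravity}} = G \, \frac{m_1 m_2}{r^2}

Each expression specifies only the force that one particle exerts on another, as a function of their charges or masses and the distance r between them; neither formula says what q or m is over and above how it makes particles push and pull on one another.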

In general, it seems all fundamental physical properties can be described mathematically. Galileo, the father of modern science, famously professed that the great book of nature is written in the language of mathematics.

Yet mathematics is a language with distinct limitations. It can only describe abstract structures and relations.

For example, all we know about an object such as a node in a graph is its relations to other nodes. In the same way, a purely mathematical physics can tell us only about the relations between physical entities or the rules that govern their behavior.
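
As a small illustration, and only as a sketch with arbitrary labels, a graph specified purely by which nodes connect to which says nothing about what the nodes are beyond those connections; any relabeling that preserves the edges leaves the structure indistinguishable:

# A graph given only by its adjacency relations: the nodes have no properties
# beyond which other nodes they are connected to.
graph_a = {"x": {"y", "z"}, "y": {"x"}, "z": {"x"}}
graph_b = {1: {2, 3}, 2: {1}, 3: {1}}

# Relabel graph_a's nodes; if every edge is preserved, the two graphs are
# structurally identical (isomorphic), whatever the node labels "are".
relabel = {"x": 1, "y": 2, "z": 3}
same_structure = all(
    {relabel[m] for m in neighbors} == graph_b[relabel[n]]
    for n, neighbors in graph_a.items()
)
print(same_structure)  # True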

One might wonder how physical particles are, independently of what they do or how they relate to other things. What are physical things like in themselves, or intrinsically? Some have argued that there is nothing more to particles than their relations, but intuition rebels at this claim.

For there to be a relation, there must be two things being related. Otherwise, the relation is empty—a show that goes on without performers, or a castle constructed out of thin air. In other words, physical structure must be realized or implemented by some stuff or substance that is itself not purely structural.

Otherwise, there would be no clear difference between physical and mere mathematical structure, or between the concrete universe and a mere abstraction.


