According to this article from MIT Technology Review, philosopher David Chalmers, famous for posing the "hard problem" of consciousness (explaining why physical processes give rise to subjective experience), gives artificial intelligence better than one chance in five of developing consciousness in the next 10 years.

"AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. ...

The cerebellum, a brain region at the base of the skull that resembles a fist-size tangle of angel-hair pasta, appears to play no role in conscious experience, though it is crucial for subconscious motor tasks like riding a bike; on the other hand, feedback connections—for example, connections running from the “higher,” cognitive regions of the brain to those involved in more basic sensory processing—seem essential to consciousness. (This, by the way, is one good reason to doubt the consciousness of LLMs: they lack substantial feedback connections.) ...

Every expert has a preferred theory of consciousness, but none treats it as ideology—all of them are eternally alert to the possibility that they have backed the wrong horse. In the past five years, consciousness scientists have started working together on a series of “adversarial collaborations,” in which supporters of different theories come together to design neuroscience experiments that could help test them against each other. The researchers agree ahead of time on which patterns of results will support which theory. Then they run the experiments and see what happens. ...

In effect, this strategy recognizes that the major theories of consciousness have some chance of turning out to be true—and so if more theories agree that an AI is conscious, it is more likely to actually be conscious. By the same token, a system that lacks all those markers can only be conscious if our current theories are very wrong."