Will machines ever achieve consciousness? How would we know? Does it matter? This piece from MIT Technology Review considers these questions:
"[W]hile conscious machines may still be mythical, we should prepare for the idea that we might one day create them. ... No matter how strong my conviction that other people are just like me—with conscious minds at work behind the scenes, looking out through those eyes, feeling hopeful or tired—impressions are all we have to go on. Everything else is guesswork. ... First-person, subjective experience—the feeling of being in the world—is known as 'phenomenal' consciousness. Here we can group everything from sensations like pleasure and pain to emotions like fear and anger and joy to the peculiar private experiences of hearing a dog bark or tasting a salty pretzel or seeing a blue door. ... My conception of a conscious machine was undeniably—perhaps unavoidably—human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI? ... As the philosopher Thomas Nagel noted, it must 'be like' something to be a bat, but what that is we cannot even imagine—because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that’s still not what it must be like for a bat, with its bat mind. ... If AIs ever do gain consciousness (and we take their word for it), we will have important decisions to make. ... Would it be ethical to retrain a conscious machine if it meant deleting its memories? Could we copy that AI without harming its sense of self? What if consciousness turned out to be useful during training, when subjective experience helped the AI learn, but was a hindrance when running a trained model? Would it be okay to switch consciousness on and off? This only scratches the surface of the ethical problems. Many researchers, including [philosopher Daniel] Dennett, think that we shouldn’t try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn’t the intended goal. ... It’s possible that one day there could be as many forms of consciousness as there are types of AI. But we will never know what it is like to be these machines, any more than we know what it is like to be an octopus or a bat or even another person. There may be forms of consciousness we don’t recognize for what they are because they are so radically different from what we are used to. ... And we may decide that we’re happier with [unconscious] zombies. As Dennett has argued, we want our AIs to be tools, not colleagues. 'You can turn them off, you can tear them apart, the same way you can with an automobile,' he says. 'And that’s the way we should keep it.'" www.technologyreview.com/2021/08/25/1032111/conscious-ai-can-machines-think