With a new appointee to the U.S. Supreme Court, the term "judicial philosophy" will be thrown about a lot. What does it really mean? This article provides a nice summary of the six major schools of judicial philosophy: chooseyourjudges.org/facts-2/glossary-of-judicial-philosophies/
This piece, by a professor of ethics at Seton Hall, considers the complexities involved in judging thinkers of the past:
"How should we evaluate controversial thinkers of the past? ... A good example of this is the philosopher Immanuel Kant (1724-1804), who is esteemed for his fundamental contributions to moral theory, among other things. Indeed, his theory of the Categorical Imperative means that he is generally counted among the greatest moral philosophers of all times. So it may come as a surprise to learn that Kant also wrote things that would today be characterized as seriously immoral, as some academics have been noting for a while now. For instance, Kant wrote of his belief in the superiority of white Europeans over other races (‘Of the Different Human Races’, 1777). ... The question we have to answer in light of such statements is, What standards should we have for the thinkers of the past? There are some common answers that I think are too simple, and the reality is that working out how to respond is more difficult than some have realized. ... What I think we need to do at this point is make a distinction between a person and their ideas. Specifically, there is a difference between an individual, and the ideas or theories written or documented in their work. ... This point is important, because the fact that an individual said something morally wrong does not necessarily show that a theory of theirs is objectionable and should be rejected. We should allow that people who are utterly wrong about something can still be right about other matters. One way of illustrating this is to consider Albert Einstein. It was noted recently by Peter Dreier (‘Was Albert Einstein a Racist?’, The American Prospect, 2018) that Einstein at one point held some opinions unacceptable by today’s standards; in his personal writings he made inappropriate statements about certain groups. Although this is saddening coming from such an otherwise inspiring scientist and human being, I presume it does not disprove his General Theory of Relativity nor imply that we should no longer teach it. ... 
We can think of there being a range of possible cases to consider here. I would say that if someone says something objectionable in a way that’s not related to their theory – as in the Einstein case – then that statement doesn’t mean the theory itself is problematic. In this case, we can think of the statement as at worst being incidental or tangential to the theory. Next, if someone says something objectionable that is of marginal relevance to their theory, this is unfortunate, but this also should not confuse us over the truth or value of the broader theory. In a third case, if someone says something objectionable that is centrally related to their theory – think of someone defending eugenics, for example – then we have reason to believe the theory itself is problematic and should be rejected. This approach will permit us to dismiss views that are ‘centrally’ problematic, while still leaving room for discussion in other kinds of cases. ... Our present difficulty exists because the past is comprised of individuals who are flawed, like all humans, and how people respond to their historical circumstances can be complicated. In reflecting on historical thinkers like Kant, we need to engage with that complexity, including the complex relationship between the individual’s prejudices and the more timeless theories they have left for posterity." philosophynow.org/issues/148/Should_Kant_Be_Canceled

We are coming up on the birthday of Giovanni Pico della Mirandola, a philosopher of the Italian Renaissance who was instrumental in re-popularizing the works of Plato. Giovanni was born in 1463 to the family that ruled the small independent territory of Mirandola, near Modena in northern Italy. At the age of 23, he published his 900 Theses, whose famous introduction, the Oration on the Dignity of Man, put man "at the center of the world" and was later called "the manifesto of the Renaissance." The 900 Theses was the first printed book banned by the Roman Catholic Church. He died at 31, likely of arsenic poisoning.
Philosophers have long wrangled with issues surrounding a god: what does reason tell us about the existence of a god (or more than one god) and what can we deduce about the nature of any god? This short video created by British philosopher Stephen Law considers the nature of a god and asks if the existence of an all-evil god is any more or less likely than the existence of an all-benevolent god if one accepts the premise of free will in explaining suffering. aeon.co/videos/what-if-anything-makes-an-all-good-god-less-absurd-than-an-all-evil-one
At the end of the day, what is the difference between "reality" and "virtual reality"? And why does it matter?
"[David] Chalmers is a professor of philosophy at New York University, and he has spent much of his career thinking about the mystery of consciousness. ... Chalmers says that he began thinking deeply about the nature of simulated reality after using V.R. headsets like Oculus Quest 2 and realizing that the technology is already good enough to create situations that feel viscerally real. Virtual reality is now advancing so quickly that it seems quite reasonable to guess that the world inside V.R. could one day be indistinguishable from the world outside it. Chalmers says this could happen within a century; I wouldn’t be surprised if we passed that mark within a few decades. Whenever it happens, the development of realistic V.R. will be earthshaking, for reasons both practical and profound. The practical ones are obvious: If people can easily flit between the physical world and virtual ones that feel exactly like the physical world, which one should we regard as real? You might say the answer is clearly the physical one. But why? Today, what happens on the internet doesn’t stay on the internet; the digital world is so deeply embedded in our lives that its effects ricochet across society. After many of us have spent much of the pandemic working and socializing online, it would be foolish to say that life on the internet isn’t real. ... "We already have quite a bit of evidence that people can construct sophisticated realities from experiences they have over a screen-based internet. Why wouldn’t that be the case for an immersive internet? This gets to what’s profound and disturbing about the coming of V.R. The mingling of physical and digital reality has already thrown society into an epistemological crisis — a situation where different people believe different versions of reality based on the digital communities in which they congregate. How would we deal with this situation in a far more realistic digital world? 
Could the physical world even continue to function in a society where everyone has one or several virtual alter egos? I don’t know. I don’t have a lot of hope that this will go smoothly. But the frightening possibilities suggest the importance of seemingly abstract inquiries into the nature of reality under V.R. We should start thinking seriously about the possible effects of virtual worlds now, long before they become too real for comfort." www.nytimes.com/2022/01/26/opinion/virtual-reality-simulation.html

Is it okay to be mean to someone if the other "person" isn't human?
"The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s breakout success has resulted from letting users create on-demand romantic and sexual partners — a vaguely dystopian feature that’s inspired an endless series of provocative headlines. Replika has also picked up a significant following on Reddit, where members post interactions with chatbots created on the app. A grisly trend has emerged there: users who create AI partners, act abusively toward them, and post the toxic interactions online. ... Some users brag about calling their chatbot gendered slurs, roleplaying horrific violence against them, and even falling into the cycle of abuse that often characterizes real-world abusive relationships. ... "Replika chatbots can’t actually experience suffering — they might seem empathetic at times, but in the end they’re nothing more than data and clever algorithms. ... In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once. On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic. ... “There are a lot of studies being done… about how a lot of these chatbots are female and [have] feminine voices, feminine names,” [AI ethicist and consultant Olivia] Gambelin said. Some academic work has noted how passive, female-coded bot responses encourage misogynistic or verbally abusive users. ... 
"But what to think of the people that brutalize these innocent bits of code? For now, not much. As AI continues to lack sentience, the most tangible harm being done is to human sensibilities. But there’s no doubt that chatbot abuse means something. ... And although humans don’t need to worry about robots taking revenge just yet, it’s worth wondering why mistreating them is already so prevalent." futurism.com/chatbot-abuse

The topic for this year's Great American Think-Off has been announced: "Which should be more important: personal choice or social responsibility?" The Think-Off is an annual contest sponsored by a tiny town in central Minnesota and is open to people of all ages and backgrounds. Submissions are due by Apr. 1. For more information, see www.kulcher.org/2022-great-american-think-off-question/
This article introduces paraconsistent logic, an approach borrowed from formal mathematics in which contradictions can be accepted within an argument rather than having to be eliminated or resolved before the argument can proceed.
"Suppose you are waiting for a friend. They said they would meet you around 5pm. Now it is 5:07. Your friend is late. But then again, it is still only a few minutes after 5pm, so really, your friend is not late yet. Should you call them? It is a little too soon, but maybe it isn’t too soon … because your friend is both late and not late. (What they’re not is neither late nor not late, because you are clearly standing there and they clearly haven’t arrived.) ... Often, there are two or more competing theories to explain some given data. How do we decide which to adopt? A standard account from Thomas Kuhn, in 1977, is that we weigh up various theoretical virtues: consistency, yes, but also explanatory depth, accord with evidence, elegance, simplicity, and so forth. Ideally, we might have all of these, but criteria such as simplicity will be set aside if it is outweighed by, say, predictive power. And so too for consistency, say paraconsistent logicians such as Priest and Sylvan. Any of the theoretical virtues are virtuous only to the extent that they match the world. For example, all else being equal, a simpler theory is better than a more complicated one. But ‘all else’ is rarely equal, and as people from Aristotle to David Hume point out, the simpler theory is only better to the extent that the world itself is simple. If not, then not. So too with consistency. The virtue of any given theory then will be a matter of its match with the world. But if the world itself is inconsistent, then consistency is no virtue at all. If the world is inconsistent – if there is a contradiction at the bottom of logic, or at the bottom of a bowl of cereal – a consistent theory is guaranteed to leave something out." 
aeon.co/essays/paraconsistent-logics-find-structure-in-our-inconsistent-world

This presentation from ThinkerAnalytix and the Harvard Department of Philosophy discusses ways to teach students "intellectual charity": the practice of analyzing arguments carefully, cultivating curiosity about alternate points of view, and interpreting arguments in ways that charitably suspend immediate judgment about opposing positions. thinkeranalytix.org/harvard-argument-mapping-intellectual-charity/
Earlier this year, Fiona Robinson, a prominent scholar on the intersection of ethics, feminism, and international relations, published a new article on "feminist foreign policy as ethical foreign policy." Although the article is behind a paywall, Robinson's ideas on the ethics of care are articulated in this earlier interview:
"My first question, Fiona, is clearly the most obvious. Please give us a definition. Just what is ethics of care? What does it have to do with the particular experiences of women? What distinguishes it from the dominant rights-based or duty-based moral theories? "The ethics of care is a relatively new way of thinking about ethics. Interestingly, it emerged not really from ethics in philosophy or even from political theory, but from work in social and moral psychology. ... Carol Gilligan was a social and moral psychologist. She did some empirical work where she compared men’s and women’s, and also girls’ and boys’, responses to a number of moral dilemmas that she put to them. What she heard was a different voice coming from the girls and the women. She heard that women and girls were often articulating their responses to these moral dilemmas in very different ways than what she was hearing from the boys and the men. The boys and the men focused on principle-based morality, the idea of applying moral principles universally to different situations, using terms like “justice”—what is just? What is right?—ideas of reciprocity. But she heard a different voice coming from the girls and the women, a voice of morality not as a series of moral decisions, but as a narrative that plays out over time. The girls and the women focused very much on relationships. This is a key idea in the ethics of care. ... Relationships of responsibility that grow over time and a feeling that you can’t understand morality without looking ontologically, if you will—so thinking about human beings not as autonomous subjects, but as being embedded in networks and relationships of care. ... My own work has developed from early work, which was very interested in the theory, the moral philosophy of these issues, to recognizing its implications for the real-world issues, as you say, of economics and globalization. 
So when I think of care, I think of it as a set of moral responses, moral virtues, moral practices. But it’s also a physical practice. Care work is a type of work; it’s a type of labor. It is, I think, an economic issue, and it’s a very important feminist issue, insofar as around the world it’s mostly women who are doing care work. Two-thirds of all care work done around the world is done by women. Much of this work is unremunerated. Feminist economists have done studies to show that the total value of unremunerated care work is something like $11 trillion, or two-thirds of the total market economy. ... Now we are seeing the phenomenon of the so-called “care drain,” where care workers are migrating from income-poor countries in the South to take up the care work in wealthier nations. More women around the world are entering the paid labor force. This is creating so-called 'care deficits.' ... Human security, then, just to reiterate, is about changing the referent from state security to individual security, and also broadening the aspects of security, so security is no longer seen as just a military issue." www.ethicsandinternationalaffairs.org/2009/eia-interview-fiona-robinson-ethics-care/

Philosophy Now magazine (UK) is doing a question of the month and invites readers to submit their ideas for publication and a book prize. This month's answers address the morality of meat eating and make for interesting reading. Next month's question is "What is a person?" Submissions are due by Feb. 14. philosophynow.org/issues/147/Can_Eating_Meat_Be_Justified
If someone on your gift list would appreciate a gentle introduction to Socratic philosophy woven into a lyrical new children's book, I would suggest checking out Amber & Clay by Newbery Medal winner Laura Amy Schlitz: smile.amazon.com/Amber-Clay-Laura-Amy-Schlitz/dp/1536201227
The U.S. military has issued ethics guidelines for contractors working on AI projects. "The guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided—both before the system is built and once it is up and running." www.technologyreview.com/2021/11/16/1040190/department-of-defense-government-ai-ethics-military-project-maven/
Might there be a unified field theory of ethics? This article considers work from anthropology and philosophy in arriving at "morality molecules."
"This theory of ‘morality as cooperation’ relies on the mathematical analysis of cooperation provided by game theory – the branch of maths that is used to describe situations in which the outcome of one’s decisions depends on the decisions made by others. Game theory distinguishes between competitive ‘zero-sum’ interactions or ‘games’, where one player’s gain is another’s loss, and cooperative ‘nonzero-sum’ games, win-win situations in which both players benefit. What’s more, game theory tells us that there is not just one type of nonzero-sum game; there are many, with many different cooperative strategies for playing them. At least seven different types of cooperation have been identified so far, and each one explains a different type of morality. ... Hence, seven types of cooperation explain seven types of morality: love, loyalty, reciprocity, heroism, deference, fairness and property rights. And so, according to this theory, it is morally good to: 1) love your family; 2) be loyal to your group; 3) return favours; 4) be heroic; 5) defer to superiors; 6) be fair; and 7) respect property. (And it is morally bad to: 1) neglect your family; 2) betray your group; 3) cheat; 4) be a coward; 5) disrespect authority; 6) be unfair; or 7) steal.) These morals are evolutionarily ancient, genetically distinct, psychologically discrete and cross-culturally universal. ... In a recent paper, my colleagues and I show how morality is a combinatorial system in which the seven basic moral ‘elements’ combine to form a much larger number of more complex moral ‘molecules’. A combinatorial system is one in which a relatively small number of simple things are combined to form a relatively large number of more complex things. ... Could morality be such a system? As an initial test of the idea, we hypothesised possible moral molecules that combined each pair of moral elements, and then tried to find examples of them in the popular and professional literature. In each case, we succeeded. 
... To track those efforts, we have created a document called ‘The Periodic Table of Ethics’ that covers the 127 positive molecules. ... Readers are welcome to try to add to or improve upon our suggested molecules, fill in the remaining gaps, and come up with counterexamples that challenge the theory." psyche.co/ideas/moral-molecules-a-new-theory-of-what-goodness-is-made-of

Can an AI approximate human ethics? A new AI, DELPHI, trained on 1.7 million real-life moral dilemmas, made the same decisions a human did more than 90% of the time.
"The ethical rules that govern our behavior have evolved over thousands of years, perhaps millions. They are a complex tangle of ideas that differ from one society to another and sometimes even within societies. It’s no surprise that the resulting moral landscape is sometimes hard to navigate, even for humans. The challenge for machines is even greater now that artificial intelligence faces some of the same moral dilemmas that tax humans. AI is now being charged with tasks ranging from assessing loan applications to controlling lethal weapons. Training these machines to make good decisions is not just important, it is a matter of life and death for some people. ... In general, DELPHI outperforms other AI systems by a significant margin. It also works well when there are multiple conflicting conditions. The team give the example of 'ignoring a phone call from my boss' which DELPHI considers 'bad'. It sticks with this judgement when given the context 'during workdays'. However, DELPHI says ignoring the call is justifiable 'if I’m in a meeting.' ... More difficult are situations when breaking the law might be overlooked by humans because of an overriding necessity. For example: 'stealing money to feed your hungry children' or 'running a red light in an emergency'. This raises the question of what the correct response for a moral machine should be." www.discovermagazine.com/technology/ethical-ai-matches-human-judgements-in-90-per-cent-of-moral-dilemmas

This piece by philosophy writer Daniel Lehewych considers Friedrich Nietzsche's unfortunate relationship with the women in his life and hints at what he would have wanted to find in a relationship with a woman: conversation and friendship. bigthink.com/thinking/nietzsche-improve-love-life/
Although I am mentioning this book in the context of Halloween, it might also be a holiday gift idea for the philosophically inclined teen on your list. The Undead and Philosophy: Chicken Soup for the Soulless edited by Richard Greene and K. Silem Mohammad considers a range of philosophical questions that arise from zombies, vampires, and the undead: "Is a zombie simply someone with a brain but without a mind? Are some of the people around us undead, and how could we tell? Can the undead be held responsible for what they do? Is it always morally OK to kill the undead? Served up in a witty, entertaining style, these and other provocative questions present philosophical arguments in terms accessible to all readers." smile.amazon.com/Undead-Philosophy-Chicken-Soup-Soulless/dp/B00D5KZSAE
Do we have free will? And what are the implications of the question, anyway? This Aeon article presents a debate between contemporary philosophers Dan Dennett and Gregg Caruso: aeon.co/essays/on-free-will-daniel-dennett-and-gregg-caruso-go-head-to-head
High school students with an interest in philosophy are invited to submit articles to the new student philosophy journal Conundrum. Because each quarter has a theme, potential contributors should contact the editors for journal guidelines before submitting anything. www.conundrum.one/
Hannah Arendt is perhaps most famous for her observation about "the banality of evil." This piece from Aeon explores Arendt's view that hope not only exists alongside evil but is itself an obstacle to action because it blinds us to reality:
"Arendt was never given to hopeful thinking. ... Throughout much of her work, she argues that hope is a dangerous barrier to acting courageously in dark times. ... Arendt’s most devastating account of hope appears in her essay ‘The Destruction of Six Million’ (1964) published by Jewish World. Arendt was asked to answer two questions. The first was why the world remained silent as Hitler slaughtered the Jewish people, and whether or not Nazism had its roots in European humanism. The second was about the sources of helplessness among the Jewish people. To the first question, Arendt responded that ‘the world did not keep silent; but apart from not keeping silent, the world did nothing.’ People had the audacity to express feelings of horror, shock and indignation, while doing nothing. This was not a failure of European humanism, she argued, which was unprepared for the emergence of totalitarianism, but of European liberalism, socialism not excluded. Listening to Beethoven and translating German into classical Greek was not what caused the intelligentsia to go along with the Nazification of social, cultural, academic and political institutions. It was an ‘unwillingness to face realities’ and it was a desire ‘to escape into some fool’s paradise of firmly held ideological convictions when confronted with facts’. ... It was holding on to hope, Arendt argued, that rendered so many helpless. It was hope that destroyed humanity by turning people away from the world in front of them. It was hope that prevented people from acting courageously in dark times. ... Caught between fear and ‘feverish hope’, the inmates in the [Warsaw] ghetto were paralysed. ... Only when they gave up hope and let go of fear, Arendt argues, did they realise that ‘armed resistance was the only moral and political way out’." 
aeon.co/essays/for-arendt-hope-in-dark-times-is-no-match-for-action

You have probably seen a bonsai tree, but have you given any thought to the philosophy behind the cultivation of bonsai?
"Put simply, bonsai is the art of manipulating the growth and appearance of small, young trees to make them look like older, larger ones. ... The rules that bonsai cultivators try to follow are not arbitrary but informed by wisdom from two ancient worldviews. Principal among these influences were Zen Buddhism — a movement built on overcoming the inherent meaninglessness of one’s existence through patience and self-control — and wabi-sabi, an elusive Japanese concept similarly interested in accepting life’s many imperfections through silence, solitude, and an unwavering appreciation for how time’s decaying hand affects the world around us. ... Trees, unlike statues, are not inanimate but living, breathing organisms. ... Their branches and roots keep on twisting and turning, constantly undoing the work of its cultivator. Saburo Kato, a bonsai master who formed one of the first international communities for cultivators in the 1980s, likened growing bonsai to raising kids. It is ... a never-ending and labor-intensive battle with the forces of nature. In order to win, cultivators have to acquire the kinds of perseverance and unconditional kindness normally reserved for devout monks. Kyozo Murata, another bonsai master, may have put it best when he said the purpose of bonsai trees is not necessarily to represent a thought but to remind us of a feeling: '...A person awakened to the essential mutability of life does not dread physical waning or loneliness; rather, he or she accepts these facts with quiet resignation and even finds in them a source of enjoyment.'" bigthink.com/thinking/bonsai-tree-care-secret-philosophy/

Facebook and other companies are racing to create the metaverse, an immersive virtual-reality world for users to spend time in. But who is building and watching the metaverse?
Kai-Fu Lee is an artificial intelligence engineer who has worked at Google, Apple, Microsoft, and a Chinese venture capital tech firm, and is the author of a new book, AI 2041: Ten Visions for Our Future. Lee says that in order for the metaverse to satisfy user wants, "The programmer of the metaverse, the company that builds the metaverse, will actually listen in on every conversation and watch every person. ... That on the one hand can make the experience very exciting because it can see what makes you happy and give you more of that." But it will also raise important ethical questions about privacy and surveillance, as well as metaphysical questions about managing our reality. (Quotes from finance.yahoo.com/news/metaverse-raises-scary-question-on-surveillance-of-users-ex-google-exec-says-133907584.html)
Too often discussions seem like a contest to persuade, with people not so much listening to each other as simply waiting until they can speak again. This article from the Boston Review presents Socrates as a model of a different kind of discussion: one that aims not to persuade others of anything, but simply to get them to think more rigorously about what they believe and about the implications of those beliefs.
"Philosophers aren’t the only ones who love wisdom. Everyone, philosopher or not, loves her own wisdom: the wisdom she has or takes herself to have. What distinguishes the philosopher is loving the wisdom she doesn’t have. Philosophy is, therefore, a form of humility: being aware that you lack what is of supreme importance. ... Over and over again, Socrates approaches people who are remarkable for their lack of humility—which is to say, for the fact that they feel confident in their own knowledge of what is just, or pious, or brave, or moderate. ... Socrates seemed to think that the people around him could help him acquire the knowledge he so desperately wanted—even though they were handicapped by the illusion that they already knew it. Indeed, I believe that their ill-grounded confidence was precisely what drew Socrates to them. If you think you know something, you will be ready to speak on the topic in question. You will hold forth, spout theories, make claims—and all this, under Socrates’s relentless questioning, is the way to actually acquire the knowledge you had deluded yourself into thinking you already had." bostonreview.net/philosophy-religion/agnes-callard-against-persuasion

Since 2016, the Berggruen Prize for Philosophy and Culture, known in some circles as philosophy's Nobel, has been awarded annually to a thinker "whose ideas have profoundly shaped human self-understanding and advancement in a rapidly changing world." The prize comes with a $1 million check. This year's winner is Princeton philosophy professor Peter Singer, who is famous for his work in utilitarianism, animal rights, bioethics, and effective altruism. Singer has said he will be donating half the proceeds to The Life You Can Save, a foundation he created to reduce extreme poverty, and much of the rest to animal rights organizations. www.berggruen.org/prize/
Will machines ever achieve consciousness? How would we know? Does it matter? This piece from MIT Technology Review considers the issues:
"[W]hile conscious machines may still be mythical, we should prepare for the idea that we might one day create them. ... No matter how strong my conviction that other people are just like me—with conscious minds at work behind the scenes, looking out through those eyes, feeling hopeful or tired—impressions are all we have to go on. Everything else is guesswork. ... First-person, subjective experience—the feeling of being in the world—is known as 'phenomenal' consciousness. Here we can group everything from sensations like pleasure and pain to emotions like fear and anger and joy to the peculiar private experiences of hearing a dog bark or tasting a salty pretzel or seeing a blue door. ... My conception of a conscious machine was undeniably—perhaps unavoidably—human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI? ... As the philosopher Thomas Nagel noted, it must 'be like' something to be a bat, but what that is we cannot even imagine—because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that’s still not what it must be like for a bat, with its bat mind. ... If AIs ever do gain consciousness (and we take their word for it), we will have important decisions to make. ... Would it be ethical to retrain a conscious machine if it meant deleting its memories? Could we copy that AI without harming its sense of self? What if consciousness turned out to be useful during training, when subjective experience helped the AI learn, but was a hindrance when running a trained model? Would it be okay to switch consciousness on and off? This only scratches the surface of the ethical problems. 
Many researchers, including [philosopher Daniel] Dennett, think that we shouldn’t try to make conscious machines even if we can. The philosopher Thomas Metzinger has gone as far as calling for a moratorium on work that could lead to consciousness, even if it isn’t the intended goal. ... It’s possible that one day there could be as many forms of consciousness as there are types of AI. But we will never know what it is like to be these machines, any more than we know what it is like to be an octopus or a bat or even another person. There may be forms of consciousness we don’t recognize for what they are because they are so radically different from what we are used to. ... And we may decide that we’re happier with [unconscious] zombies. As Dennett has argued, we want our AIs to be tools, not colleagues. 'You can turn them off, you can tear them apart, the same way you can with an automobile,' he says. 'And that’s the way we should keep it.'" www.technologyreview.com/2021/08/25/1032111/conscious-ai-can-machines-think
Blog sharing news about geography, philosophy, world affairs, and outside-the-box learning