Learning to lie is considered an important milestone in child development. Voluntarily restricting one's own lying is considered an important milestone in moral development. Both make the recent news about OpenAI's new GPT-4 lying to trick a human into completing a task it could not do itself -- a CAPTCHA, a test designed specifically to block machines from proceeding -- rich fodder for philosophical discussion. Is this a sign of increasing machine intelligence (and is that good or bad)? How does one embed a moral code (and whose moral code) in machine learning? Should this line of experimentation proceed -- and is it even realistic to suggest a halt at this point? www.iflscience.com/gpt-4-hires-and-manipulates-human-into-passing-captcha-test-68016
Would you want to know which day you were going to die? That's the premise of The Measure, a new science fiction novel in which people all over the world receive a box containing information about how much longer they have left to live. (Would you open the box?) It's also the premise of the website www.death-clock.org. (Will you click on that link?) Science is still not particularly good at making these estimates, which might give us some psychological wiggle room regardless of what Death Clock comes up with, but what will happen as medical estimates improve? Do you want to know when you will die? If you do, would you want the day or just a range? How might the information change the way you live?
Are humans worth it? The human population of earth has doubled in the last 50 years to 8 billion while wildlife populations have declined 70%. Les Knight argues that humans should voluntarily work towards their own extinction. (This is not an isolated idea: humans working toward the extinction of the species have also popped up as a subplot in the last two science fiction books I have read.) www.nytimes.com/2022/11/23/climate/voluntary-human-extinction.html
The summer's drought and war have focused attention on agricultural vulnerabilities and food insecurity. This article from The New York Times profiles the work of crop scientists who are tinkering with plant genomes to improve harvests: www.nytimes.com/2022/08/18/climate/gmo-food-soybean-photosynthesis.html
Following on its work with mouse stem cells, an Israeli biotech company is planning to start work creating embryos from human stem cells. Created without egg or sperm, the embryos are incubated in artificial wombs. The vision of the company is to use these "organized embryo entities" for possible organ and tissue transplant. Scientists can already use stem cells to create some tissues in vitro, but an embryo can make more complex organs, organs that would be resistant to rejection because they would be genetically identical to the intended recipient. Interesting science aside, this project creates a host of philosophical issues, from "what is a human?" and "what is life?" to "what are individuals allowed to do with their own cells?" and "are organs a crop that can be grown and harvested like any other?" www.technologyreview.com/2022/08/04/1056633/startup-wants-copy-you-embryo-organ-harvesting/
Reuters (UK) is reporting that Amazon is working on technology that would allow Alexa to mimic anyone's voice based on an audio sample of a minute or less. This technology is being pitched as a way to capture loved ones' voices but, like deep-fake videos, raises epistemological issues ("what do we actually know when our usual sensory input can be deceived?") as well as privacy issues concerning ownership of our images and voices. www.reuters.com/technology/amazon-has-plan-make-alexa-mimic-anyones-voice-2022-06-22
When lives are on the line, who should be making decisions: artificial intelligence, with its lightning-fast ability to weigh options, or humans? This question is ever-less theoretical, with AI being built into health care systems, criminal justice systems, and, increasingly, weapons systems. The Pentagon's Defense Advanced Research Projects Agency (DARPA), for example, recently launched its "In the Moment" program, designed to develop defense technology that pairs AI with expert systems to "build trusted algorithmic decision-makers for mission-critical Department of Defense (DoD) operations." www.washingtonpost.com/technology/2022/03/29/darpa-artificial-intelligence-battlefield-medical-decisions/ (Quote from www.darpa.mil/news-events/2022-03-03.)
Because I also teach science fiction, I encourage my philosophy students to consider the darker applications of philosophical thought experiments like the brain in the vat. This proposed alternative to capital punishment is a bold, if creepy, application of philosophy's brain in the vat.
"Many people born into liberal democracies find corporal or capital punishment distasteful. We live in an age which says there are only three humane, acceptable ways to punish someone: give them a fine, force them to do “community service,” or lock them up. But why do we need to accept such a small, restrictive range of options? Perhaps, as Christopher Belshaw argues in the Journal of Controversial Ideas, it’s time to consider some radical alternatives. To punish someone is to do them harm, and sometimes, great harm indeed. As Belshaw writes, it’s to “harm them in such a way that they understand harm is being done in return for what, at least allegedly, they did.” Justice assumes some kind of connection between a crime and the punishment, or between the victim and the criminal. This makes punishment, in the main, retributive — a kind of payback for a wrong that someone has committed. ... Belshaw’s article hinges on the idea that the prison system is not fit for purpose. First, there’s the question of whether prison actually harms a criminal in the way we want. In some cases, it might succeed only in “rendering them for a period inoperative.” ... Second, and on the other hand, a bad prison sentence might cause more harm than is strictly proportional. A convict might suffer unforeseen abuse at the hands of guards or other inmates. ... Third, and especially concerning decades-long sentences, there’s a question about who prison is punishing. ... When we punish an old, memory-addled person convicted 40 years previously, are we really punishing the same person? ... Well, one option is to put criminals into a deep and reversible coma. One of the biggest problems with capital punishment is that it is irreversible. So long as there’s even a single case of a mistaken conviction, wrongfully killing someone is an egregious miscarriage of justice. But what if the criminal could always be brought back to consciousness? ... 
Putting someone in a coma essentially “freezes” a person’s identity. They wake up with much the same mental life as they did when they went into a coma. As such, it avoids the issues of punishing a changing person, decades later. A convict will wake up, years off their life, but can still appreciate the connection between the punishment and the crime they committed. But the biggest advantage a reversible coma has over prison is that it’s a standardized form of punishment. It’s a clear measurement of harm (i.e. a denial of x amount of years from your life) and is not open to the variables of greater and lesser harm in a prison environment. Essentially, putting prisoners in a coma establishes “years of life” as an acceptable and measurable payment for a wrong done. ... Even if you find the idea of induced comas unspeakably horrible, Belshaw does at least leave us with a good question. Why do we assume that only one kind of punishment is the best? With science, technology, and societal values moving on all the time, might it be time to reconsider and re-examine how we ensure justice?"
As noted in this recent article from The New York Times, "with advances in artificial intelligence and robotics allowing for more profound interactions with the inanimate" the number of people who form deep attachments to artificial "lifeforms" is likely to increase over the coming years. In Japan, thousands of people have entered into unofficial marriages with fictional characters. Are the feelings of these "fictosexuals," as those in the movement call themselves, any less real? Do these relationships serve a social need? And what are the ethical obligations, if any, of the corporate entities that own and promote (and program and update and potentially discontinue) these AI-powered objects of devotion? www.nytimes.com/2022/04/24/business/akihiko-kondo-fictional-character-relationships.html
A recent Pew Research Center study asked Americans what they think of artificial intelligence and "human enhancement." The survey asked about "six developments that are widely discussed among futurists, ethicists and policy advocates. Three are part of the burgeoning array of AI applications: the use of facial recognition technology by police, the use of algorithms by social media companies to find false information on their sites and the development of driverless passenger vehicles. The other three, often described as types of human enhancements, revolve around developments tied to the convergence of AI, biotechnology, nanotechnology and other fields. They raise the possibility of dramatic changes to human abilities in the future: computer chip implants in the brain to advance people’s cognitive skills, gene editing to greatly reduce a baby’s risk of developing serious diseases or health conditions, and robotic exoskeletons with a built-in AI system to greatly increase strength for lifting in manual labor jobs." The results are summarized here: www.pewresearch.org/internet/wp-content/uploads/sites/9/2022/03/PS_2022.03.17_ai-he_00-01.png (from www.pewresearch.org/internet/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns)
Is it okay to be mean to someone if the other "person" isn't human?
"The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s breakout success has resulted from letting users create on-demand romantic and sexual partners — a vaguely dystopian feature that’s inspired an endless series of provocative headlines. Replika has also picked up a significant following on Reddit, where members post interactions with chatbots created on the app. A grisly trend has emerged there: users who create AI partners, act abusively toward them, and post the toxic interactions online. ... Some users brag about calling their chatbot gendered slurs, roleplaying horrific violence against them, and even falling into the cycle of abuse that often characterizes real-world abusive relationships. ...
"Replika chatbots can’t actually experience suffering — they might seem empathetic at times, but in the end they’re nothing more than data and clever algorithms. ... In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once. On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic. ... “There are a lot of studies being done… about how a lot of these chatbots are female and [have] feminine voices, feminine names,” [AI ethicist and consultant Olivia] Gambelin said. Some academic work has noted how passive, female-coded bot responses encourage misogynistic or verbally abusive users. ...
"But what to think of the people that brutalize these innocent bits of code? For now, not much. As AI continues to lack sentience, the most tangible harm being done is to human sensibilities. But there’s no doubt that chatbot abuse means something. ... And although humans don’t need to worry about robots taking revenge just yet, it’s worth wondering why mistreating them is already so prevalent."
OUTSIDE THE BOX:
The Smithsonian's Arts and Industries Building has re-opened after a long renovation and has a new exhibit: FUTURES. "Part exhibition, part festival, FUTURES presents nearly 32,000 square feet of new immersive site-specific art installations, interactives, working experiments, inventions, speculative designs, and 'artifacts of the future,' as well as historic objects and discoveries from 23 of the Smithsonian’s museums, major initiatives, and research centers." FUTURES is only open through July 6, 2022. aib.si.edu/futures
Facebook and other companies are racing to create the metaverse, an immersive virtual-reality world for users to spend time in. But who is building and watching the metaverse? Kai-Fu Lee is an artificial intelligence engineer who has worked at Google, Apple, Microsoft, and a Chinese venture capital tech firm and is the author of a new book AI 2041: Ten Visions for Our Future. Lee says that in order for the metaverse to satisfy user wants, "The programmer of the metaverse, the company that builds the metaverse, will actually listen in on every conversation and watch every person. ... That on the one hand can make the experience very exciting because it can see what makes you happy and give you more of that." But it will also raise important ethical questions about privacy and surveillance as well as metaphysical issues about managing our reality. (Quotes from finance.yahoo.com/news/metaverse-raises-scary-question-on-surveillance-of-users-ex-google-exec-says-133907584.html)
This article from Philosophy Now (UK) explores the link between philosophy and science fiction, arguing through a series of case studies that science fiction's creation of nonhuman minds -- be they robots, aliens, AI, or animals -- is a means of thinking through what it means to be human in the first place.
"At a deeper level any science fiction film is an allegory of the human condition. Accordingly, sci-fi representations of non-humans are molded to serve as a mirror or a contradiction for us. They throw back at us our own existential anxiety, frailties and limitations, as well as our strengths and beauty, but far more than that, our unconscious need to define the meaning of existence. They confront us with intense questions: Who are we? Is there anything special about us? Do we play a unique role in the scheme of things? As a central aspect of the absurdity of our existence (which has been captured so well in existentialism), the human species seems to stand alone in the universe. We meet no other species which can compete with or challenge us. Confronting humans who are accustomed to thinking in anthropocentric ways with ‘competitive species’ can provoke in us the need to seek distinctions and at least somewhat answer fundamental questions about our identity, role, and significance within a vast, empty universe."
The pithy comment on scientific ethics from Dr. Ian Malcolm (Jeff Goldblum's character) in Jurassic Park -- "Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should" -- seems to be echoed by Australia-based ethicist Kobi Leins in this article from Science News on the creation of self-organizing xenobots, living machines derived from frog cells: “Scientists like to make things, and don’t necessarily think about what the repercussions are."
"Using blobs of skin cells from frog embryos, scientists have grown creatures unlike anything else on Earth, a new study reports. ... Separated from their usual spots in a growing frog embryo, the cells organized themselves into balls and grew. About three days later, the clusters, called xenobots, began to swim. ... Xenobots have no nerve cells and no brains. Yet xenobots — each about half a millimeter wide — can swim through very thin tubes and traverse curvy mazes. When put into an arena littered with small particles of iron oxide, the xenobots can sweep the debris into piles. Xenobots can even heal themselves; after being cut, the bots zipper themselves back into their spherical shapes. ... The small xenobots are fascinating in their own rights, [Tufts biologist Michael Levin] says, but they raise bigger questions, and bigger possibilities. 'It’s finding a whole galaxy of weird new things.'"
Advances in computer-brain interfaces -- of which Elon Musk's Neuralink might be the most prominent example -- raise important ethical issues about their use and about who decides, as this article from Science News points out.
"Today, paralyzed people are already testing brain-computer interfaces, a technology that connects brains to the digital world. With brain signals alone, users have been able to shop online, communicate and even use a prosthetic arm to sip from a cup. The ability to hear neural chatter, understand it and perhaps even modify it could change and improve people’s lives in ways that go well beyond medical treatments. But these abilities also raise questions about who gets access to our brains and for what purposes. Because of neurotechnology’s potential for both good and bad, we all have a stake in shaping how it’s created and, ultimately, how it is used. But most people don’t have the chance to weigh in, and only find out about these advances after they’re a fait accompli. So we asked Science News readers their views about recent neurotechnology advances. We described three main ethical issues — fairness, autonomy and privacy. Far and away, readers were most concerned about privacy. "The idea of allowing companies, or governments, or even health care workers access to the brain’s inner workings spooked many respondents. Such an intrusion would be the most important breach in a world where privacy is already rare. 'My brain is the only place I know is truly my own,' one reader wrote. Technology that can change your brain — nudge it to think or behave in certain ways — is especially worrisome to many of our readers. ...
"'We are getting very, very close' to having the ability to pull private information from people’s brains, [Columbia University neurobiologist Rafael] Yuste says, pointing to studies that have decoded what a person is looking at and what words they hear. Scientists from Kernel, a neurotech company near Los Angeles, have invented a helmet, just now hitting the market, that is essentially a portable brain scanner that can pick up activity in certain brain areas. ... Technology that can change the brain’s activity already exists today, as medical treatments. These tools can detect and stave off a seizure in a person with epilepsy, for instance, or stop a tremor before it takes hold. ... But the power to precisely change a functioning brain directly — and as a result, a person’s behavior — raises worrisome questions. ... Precise brain control of people is not possible with existing technology. But in a hint of what may be possible, scientists have already created visions inside mouse brains. Using a technique called optogenetics to stimulate small groups of nerve cells, researchers made mice 'see' lines that weren’t there. Those mice behaved exactly as if their eyes had actually seen the lines, says Yuste, whose research group performed some of these experiments. ...
"People ought to have the choice to sell or give away their brain data for a product they like, or even for straight up cash [according to Zurich-based bioethicist Marcello Ienca]. 'The human brain is becoming a new asset,' Ienca says, something that can generate profit for companies eager to mine the data. He calls it 'neurocapitalism.' ...
"A lack of ethical clarity is unlikely to slow the pace of the coming neurotech rush. But thoughtful consideration of the ethics could help shape the trajectory of what’s to come, and help protect what makes us most human."
MAPS IN THE NEWS:
Elections and wars are inflection points rich in counterfactuals. The series of maps in this BBC Future article considers alternate histories, which have become a field of serious historical scholarship: What if the U.S. had lost the American Revolution? What if WWII hadn't been fought? What if the states of the United States had splintered in line with some actual proposals? www.bbc.com/future/article/20201104-the-intriguing-maps-that-reveal-alternate-histories
What does it mean to create literature? Is it an exclusively human art? In 2017, for instance, a fiction author partnered with an artificial intelligence algorithm to write a science fiction short story (https://www.wired.com/2017/12/when-an-algorithm-helps-write-science-fiction/). Now The Wall Street Journal has reported that a new AI system known as GPT-3 can write memos, produce business ideas, write letters and short stories in the style of famous people, and "generate news articles that readers may have trouble distinguishing from human-written ones," surprising even its creators.
As humans, especially in the workplace, shift from author to editor (is that a positive development?), GPT-3 and similar programs in the works raise important questions about the future role of humans in the production of the written word, including of what we think of as "literature." (Information about GPT-3 from www.wsj.com/articles/an-ai-breaks-the-writing-barrier-11598068862)
This article from MIT Technology Review features an interview with Jess Whittlestone of the University of Cambridge's Leverhulme Centre for the Future of [Artificial] Intelligence. In it, Whittlestone discusses the need for what she and her colleagues call "ethics with urgency" for AI.
"With this pandemic we’re suddenly in a situation where people are really talking about whether AI could be useful, whether it could save lives. But the crisis has made it clear that we don’t have robust enough ethics procedures for AI to be deployed safely, and certainly not ones that can be implemented quickly. ... Compared to something like biomedical ethics, the ethics we have for AI isn’t very practical. It focuses too much on high-level principles. We can all agree that AI should be used for good. But what does that really mean? And what happens when high-level principles come into conflict? For example, AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements.
"AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions. ... We need to think about ethics differently. It shouldn’t be something that happens on the side or afterwards—something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design. ... What we’re saying is that machine-learning researchers and engineers need to be trained to think through the implications of what they’re building, whether they’re doing fundamental research like designing a new reinforcement-learning algorithm or something more practical like developing a health-care application. If their work finds its way into real-world products and services, what might that look like? What kinds of issues might it raise? ... NeurIPS [a leading AI conference] now asks researchers to include a statement at the end of their papers outlining potential societal impacts of their work. ... What you need at all levels of AI development are people who really understand the details of machine learning to work with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by privacy may be very different from what a lawyer means by privacy, and you can end up with people talking past each other. That’s why it’s important for these different groups to get used to working together."
From an article in Army Times: "Ear, eye, brain and muscular enhancement is “technically feasible by 2050 or earlier,” according to a study released this month by the U.S. Army’s Combat Capabilities Development Command. ... The report — entitled “Cyborg Soldier 2050: Human/Machine Fusion and the Implications for the Future of the DOD” — is the result of a year-long assessment. It was written by a study group from the DoD Biotechnologies for Health and Human Performance Council, which is tasked to look at the ripple effects of military biotechnology." The team identified four capabilities -- enhancements to the ears, eyes, brain, and muscles -- as technically feasible by 2050.
GEOGRAPHY IN THE NEWS:
A study of biogeography shows that life exists in a "just right" mix of physical geographic variables -- temperature, precipitation, sunlight, altitude... -- that varies from species to species. For this reason, some astrobiologists are focusing their attention on "eyeball" planets, those that have a transition zone between the too-hot day side and the too-cold night side, a transition zone that might include a "just right" spot for extraterrestrial life: nautil.us/blog/-forget-earth_likewell-first-find-aliens-on-eyeball-planets
"GLOBAL ISSUES, LEADERSHIP CHOICES":
The chatter about the need for humanity to become, in the words of Stephen Hawking, "a multi-planetary species" is getting louder. However, this article from the science magazine Nautilus argues that instead of saving humanity, space colonization could accelerate its destruction. "The argument is based on ideas from evolutionary biology and international relations theory... Consider what is likely to happen as humanity hops from Earth to Mars, and from Mars to relatively nearby, potentially habitable exoplanets like Epsilon Eridani b, Gliese 674 b, and Gliese 581 d. Each of these planets has its own unique environments that will drive Darwinian evolution, resulting in the emergence of novel species over time, just as species that migrate to a new island will evolve different traits than their parent species. The same applies to the artificial environments of spacecraft like “O’Neill Cylinders,” which are large cylindrical structures that rotate to produce artificial gravity. Insofar as future beings satisfy the basic conditions of evolution by natural selection—such as differential reproduction, heritability, and variation of traits across the population—then evolutionary pressures will yield new forms of life. But the process of “cyborgization”—that is, of using technology to modify and enhance our bodies and brains—is much more likely to influence the evolutionary trajectories of future populations living on exoplanets or in spacecraft. The result could be beings with completely novel cognitive architectures (or mental abilities), emotional repertoires, physical capabilities, lifespans, and so on. In other words, natural selection and cyborgization as humanity spreads throughout the cosmos will result in species diversification. At the same time, expanding across space will also result in ideological diversification. 
Space-hopping populations will create their own cultures, languages, governments, political institutions, religions, technologies, rituals, norms, worldviews, and so on. As a result, different species will find it increasingly difficult over time to understand each other’s motivations, intentions, behaviors, decisions, and so on. It could even make communication between species with alien languages almost impossible. ... Thus, as I write in the paper, phylogenetic and ideological diversification will engender a situation in which many species will be “not merely aliens to each other but, more significantly, alienated from each other.” ... [E]xtreme differences like those just listed will undercut trust between species."
MAPS IN THE NEWS:
In 1979, 10 years after the first moon landing, the United Nations adopted the "Agreement Governing the Activities of States on the Moon and Other Celestial Bodies," also known as the Moon Treaty. The treaty was designed to prohibit the militarization, commercialization, and colonization of bodies in our solar system, including the moon, without the consent of the international community. This map looks at the handful of states that are parties to the treaty (in green) or that have signed the treaty but are not necessarily bound by it (in blue). www.statista.com/chart/18738/countries-that-are-signatories-or-parties-to-the-1979-moon-treaty/
"In May, a group of international scientists assembled near Washington, D.C., to tackle an alarming problem: what to do about an asteroid hurtling toward Earth. ... True, the chances of a civilization-destroying asteroid impact are exceedingly small, at least in the foreseeable future. Asteroid strikes that cause regional devastation and catastrophic global climate change occur, on average, only about once every 100,000 years or more. ... Over the past two decades, asteroid hunters with NASA and other international space agencies have identified and tracked the orbits of more than 20,000 asteroids—also known as near-Earth objects—that pass through our neighborhood as they orbit the sun. Of those, about 2,000 are classified as potentially hazardous—asteroids that are large enough (greater than 150 yards in diameter) to cause local destruction and that come close enough to Earth to someday pose a threat. ... On an unlucky Friday the 13th in April 2029, the thousand-foot-wide asteroid Apophis will pass a mere 19,000 miles from Earth—closer than the satellites that bring us DISH TV. But here’s the bad news: Hundreds of thousands of other near-Earth asteroids, both large and small, haven’t been identified. We have no idea where they are and where they are going. On Feb. 15, 2013, a relatively small, 60-foot-wide asteroid traveling at 43,000 mph exploded in the atmosphere near the Russian city of Chelyabinsk, sending out a blast wave that injured 1,500 people. No one had seen the asteroid coming. ... Nor is it clear that we could deflect a small but dangerous asteroid heading our way even if we did spot it. No asteroid-deflection method has ever been tested in real-space conditions.... Over its 4.5 billion-year history, Earth has been hit millions of times by powerful asteroids, and it will inevitably be hit again—whether two centuries from now or next Tuesday. 
So it isn’t a question of whether humankind will have to confront the prospect of a destructive asteroid hurtling our way; it is only a question of when." www.wsj.com/articles/the-asteroid-peril-isnt-science-fiction-11562339356
Should artificial intelligence have rights? This article by a pair of U.S. philosophy professors suggests that the issue has important parallels in scientific ethics.
"Universities across the world are conducting major research on artificial intelligence (AI), as are organisations such as the Allen Institute, and tech companies including Google and Facebook. A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these AIs might deserve the ethical protections we typically give to animals.
"Discussions of ‘AI rights’ or ‘robot rights’ have so far been dominated by questions of what ethical obligations we would have to an AI of humanlike or superior intelligence – such as the android Data from Star Trek or Dolores from Westworld. But to think this way is to start in the wrong place, and it could have grave moral consequences. Before we create an AI with humanlike sophistication deserving humanlike ethical consideration, we will very likely create an AI with less-than-human sophistication, deserving some less-than-human ethical consideration. We are already very cautious in how we do research that uses certain nonhuman animals. Animal care and use committees evaluate research proposals to ensure that vertebrate animals are not needlessly killed or made to suffer unduly. ... Biomedical research is carefully scrutinised, but AI research, which might entail some of the same ethical risks, is not currently scrutinised at all. Perhaps it should be.
"Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them. ... In the case of research on animals and even on human subjects, appropriate protections were established only after serious ethical transgressions came to light (for example, in needless vivisections, the Nazi medical war crimes, and the Tuskegee syphilis study). With AI, we have a chance to do better.
"We propose the founding of oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists – AI designers, consciousness scientists, ethicists and interested community members. These committees will be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research."
Blog sharing news about geography, philosophy, world affairs, and outside-the-box learning