Facebook and other companies are racing to create the metaverse, an immersive virtual-reality world for users to spend time in. But who is building and watching the metaverse? Kai-Fu Lee is an artificial intelligence engineer who has worked at Google, Apple, Microsoft, and a Chinese venture capital tech firm, and the author of a new book, AI 2041: Ten Visions for Our Future. Lee says that for the metaverse to satisfy users' wants, "The programmer of the metaverse, the company that builds the metaverse, will actually listen in on every conversation and watch every person. ... That on the one hand can make the experience very exciting because it can see what makes you happy and give you more of that." But it will also raise important ethical questions about privacy and surveillance, as well as metaphysical issues about managing our reality. (Quotes from finance.yahoo.com/news/metaverse-raises-scary-question-on-surveillance-of-users-ex-google-exec-says-133907584.html)
This article from Philosophy Now (UK) explores the link between philosophy and science fiction, arguing through a series of case studies that science fiction's creation of nonhuman minds -- be they robots, aliens, AI, or animals -- is a means of thinking through what it means to be human in the first place.
"At a deeper level any science fiction film is an allegory of the human condition. Accordingly, sci-fi representations of non-humans are molded to serve as a mirror or a contradiction for us. They throw back at us our own existential anxiety, frailties and limitations, as well as our strengths and beauty, but far more than that, our unconscious need to define the meaning of existence. They confront us with intense questions: Who are we? Is there anything special about us? Do we play a unique role in the scheme of things? As a central aspect of the absurdity of our existence (which has been captured so well in existentialism), the human species seems to stand alone in the universe. We meet no other species which can compete with or challenge us. Confronting humans who are accustomed to thinking in anthropocentric ways with ‘competitive species’ can provoke in us the need to seek distinctions and at least somewhat answer fundamental questions about our identity, role, and significance within a vast, empty universe." philosophynow.org/issues/143/Sci_Fi_and_The_Meaning_of_Life The pithy comment on scientific ethics from Dr. Ian Malcolm (Jeff Goldblum's character) in Jurassic Park -- "Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should" -- seems to be echoed by Australia-based ethicist Kobi Leins in this article from Science News on the creation of self-organizing xenobots, living machines derived from frog cells: “Scientists like to make things, and don’t necessarily think about what the repercussions are."
"Using blobs of skin cells from frog embryos, scientists have grown creatures unlike anything else on Earth, a new study reports. ... Separated from their usual spots in a growing frog embryo, the cells organized themselves into balls and grew. About three days later, the clusters, called xenobots, began to swim. ... Xenobots have no nerve cells and no brains. Yet xenobots — each about half a millimeter wide — can swim through very thin tubes and traverse curvy mazes. When put into an arena littered with small particles of iron oxide, the xenobots can sweep the debris into piles. Xenobots can even heal themselves; after being cut, the bots zipper themselves back into their spherical shapes. ... The small xenobots are fascinating in their own rights, [Tufts biologist Michael Levin] says, but they raise bigger questions, and bigger possibilities. 'It’s finding a whole galaxy of weird new things.'" www.sciencenews.org/article/frog-skin-cells-self-made-living-machines-xenobots Advances in computer-brain interfaces -- of which Elon Musk's Neuralink might be the most prominent example -- raise important ethical issues about their use and about who decides, as this article from Science News points out.
"Today, paralyzed people are already testing brain-computer interfaces, a technology that connects brains to the digital world. With brain signals alone, users have been able to shop online, communicate and even use a prosthetic arm to sip from a cup. The ability to hear neural chatter, understand it and perhaps even modify it could change and improve people’s lives in ways that go well beyond medical treatments. But these abilities also raise questions about who gets access to our brains and for what purposes. Because of neurotechnology’s potential for both good and bad, we all have a stake in shaping how it’s created and, ultimately, how it is used. But most people don’t have the chance to weigh in, and only find out about these advances after they’re a fait accompli. So we asked Science News readers their views about recent neurotechnology advances. We described three main ethical issues — fairness, autonomy and privacy. Far and away, readers were most concerned about privacy. "The idea of allowing companies, or governments, or even health care workers access to the brain’s inner workings spooked many respondents. Such an intrusion would be the most important breach in a world where privacy is already rare. 'My brain is the only place I know is truly my own,' one reader wrote. Technology that can change your brain — nudge it to think or behave in certain ways — is especially worrisome to many of our readers. ... "'We are getting very, very close' to having the ability to pull private information from people’s brains, [Columbia University neurobiologist Rafael] Yuste says, pointing to studies that have decoded what a person is looking at and what words they hear. Scientists from Kernel, a neurotech company near Los Angeles, have invented a helmet, just now hitting the market, that is essentially a portable brain scanner that can pick up activity in certain brain areas. ... 
Technology that can change the brain’s activity already exists today, as medical treatments. These tools can detect and stave off a seizure in a person with epilepsy, for instance, or stop a tremor before it takes hold. ... But the power to precisely change a functioning brain directly — and as a result, a person’s behavior — raises worrisome questions. ... Precise brain control of people is not possible with existing technology. But in a hint of what may be possible, scientists have already created visions inside mouse brains. Using a technique called optogenetics to stimulate small groups of nerve cells, researchers made mice 'see' lines that weren’t there. Those mice behaved exactly as if their eyes had actually seen the lines, says Yuste, whose research group performed some of these experiments. ... "People ought to have the choice to sell or give away their brain data for a product they like, or even for straight up cash [according to Zurich-based bioethicist Marcello Ienca]. 'The human brain is becoming a new asset,' Ienca says, something that can generate profit for companies eager to mine the data. He calls it 'neurocapitalism.' ... "A lack of ethical clarity is unlikely to slow the pace of the coming neurotech rush. But thoughtful consideration of the ethics could help shape the trajectory of what’s to come, and help protect what makes us most human." www.sciencenews.org/article/technology-brain-activity-read-change-thoughts-privacy-ethics Elections and wars are inflection points rich in counterfactuals. The series of maps in this BBC Future article considers alternate histories, which have become a field of serious historical scholarship: what if the U.S. had lost the American Revolution? what if WWII hadn't been fought? what if the states of the United States had splintered in line with some actual proposals? www.bbc.com/future/article/20201104-the-intriguing-maps-that-reveal-alternate-histories
What does it mean to create literature? Is it an exclusively human art? In 2017, for instance, a fiction author partnered with an artificial intelligence algorithm to write a science fiction short story (https://www.wired.com/2017/12/when-an-algorithm-helps-write-science-fiction/). Now The Wall Street Journal has reported that a new AI system known as GPT-3 can write memos, produce business ideas, compose letters and short stories in the style of famous people, and "generate news articles that readers may have trouble distinguishing from human-written ones," surprising even its creators.
As humans, especially in the workplace, shift from author to editor (is that a positive development?), GPT-3 and similar programs in the works raise important questions about the future role of humans in the production of the written word, including what we think of as "literature." (Information about GPT-3 from www.wsj.com/articles/an-ai-breaks-the-writing-barrier-11598068862) This article from MIT Technology Review features an interview with Jess Whittlestone at the University of Cambridge's Leverhulme Centre for the Future of [Artificial] Intelligence. Whittlestone discusses the need for what she and her colleagues call "ethics with urgency" for AI.
"With this pandemic we’re suddenly in a situation where people are really talking about whether AI could be useful, whether it could save lives. But the crisis has made it clear that we don’t have robust enough ethics procedures for AI to be deployed safely, and certainly not ones that can be implemented quickly. ... Compared to something like biomedical ethics, the ethics we have for AI isn’t very practical. It focuses too much on high-level principles. We can all agree that AI should be used for good. But what does that really mean? And what happens when high-level principles come into conflict? For example, AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements. "AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions. ... We need to think about ethics differently. It shouldn’t be something that happens on the side or afterwards—something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design. ... What we’re saying is that machine-learning researchers and engineers need to be trained to think through the implications of what they’re building, whether they’re doing fundamental research like designing a new reinforcement-learning algorithm or something more practical like developing a health-care application. If their work finds its way into real-world products and services, what might that look like? What kinds of issues might it raise? ... NeurIPS [a leading AI conference] now asks researchers to include a statement at the end of their papers outlining potential societal impacts of their work. ... 
What you need at all levels of AI development are people who really understand the details of machine learning to work with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by privacy may be very different from what a lawyer means by privacy, and you can end up with people talking past each other. That’s why it’s important for these different groups to get used to working together." www.technologyreview.com/2020/06/24/1004432/ai-help-crisis-new-kind-ethics-machine-learning-pandemic From an article in Army Times: "Ear, eye, brain and muscular enhancement is “technically feasible by 2050 or earlier,” according to a study released this month by the U.S. Army’s Combat Capabilities Development Command. ... The report — entitled “Cyborg Soldier 2050: Human/Machine Fusion and the Implications for the Future of the DOD” — is the result of a year-long assessment. It was written by a study group from the DoD Biotechnologies for Health and Human Performance Council, which is tasked to look at the ripple effects of military biotechnology. The team identified four capabilities -- enhanced hearing, sight, brain function, and muscular control -- as technically feasible by 2050."
www.armytimes.com/news/your-army/2019/11/27/cyborg-warriors-could-be-here-by-2050-dod-study-group-says A study of biogeography shows that life exists in a "just right" mix of physical geographic variables -- temperature, precipitation, sunlight, altitude... -- that varies from species to species. For this reason, some astrobiologists are focusing their attention on "eyeball" planets, those that have a transition zone between the too-hot day side and the too-cold night side, a transition zone that might include a "just right" spot for extraterrestrial life: nautil.us/blog/-forget-earth_likewell-first-find-aliens-on-eyeball-planets
The chatter about the need for humanity to become, in the words of Stephen Hawking, "a multi-planetary species" is getting louder. However, this article from the science magazine Nautilus argues that instead of saving humanity, space colonization could accelerate its destruction. "The argument is based on ideas from evolutionary biology and international relations theory... Consider what is likely to happen as humanity hops from Earth to Mars, and from Mars to relatively nearby, potentially habitable exoplanets like Epsilon Eridani b, Gliese 674 b, and Gliese 581 d. Each of these planets has its own unique environments that will drive Darwinian evolution, resulting in the emergence of novel species over time, just as species that migrate to a new island will evolve different traits than their parent species. The same applies to the artificial environments of spacecraft like “O’Neill Cylinders,” which are large cylindrical structures that rotate to produce artificial gravity. Insofar as future beings satisfy the basic conditions of evolution by natural selection—such as differential reproduction, heritability, and variation of traits across the population—then evolutionary pressures will yield new forms of life. But the process of “cyborgization”—that is, of using technology to modify and enhance our bodies and brains—is much more likely to influence the evolutionary trajectories of future populations living on exoplanets or in spacecraft. The result could be beings with completely novel cognitive architectures (or mental abilities), emotional repertoires, physical capabilities, lifespans, and so on. In other words, natural selection and cyborgization as humanity spreads throughout the cosmos will result in species diversification. At the same time, expanding across space will also result in ideological diversification. 
Space-hopping populations will create their own cultures, languages, governments, political institutions, religions, technologies, rituals, norms, worldviews, and so on. As a result, different species will find it increasingly difficult over time to understand each other’s motivations, intentions, behaviors, decisions, and so on. It could even make communication between species with alien languages almost impossible. ... Thus, as I write in the paper, phylogenetic and ideological diversification will engender a situation in which many species will be “not merely aliens to each other but, more significantly, alienated from each other.” ... [E]xtreme differences like those just listed will undercut trust between species."
nautil.us/blog/why-we-should-think-twice-about-colonizing-space In 1979, 10 years after the first moon landing, the United Nations adopted the "Agreement Governing the Activities of States on the Moon and Other Celestial Bodies," also known as the Moon Treaty. The treaty was designed to prohibit the militarization, commercialization, and colonization of bodies in our solar system, including the moon, without the consent of the international community. This map looks at the handful of states that are parties to the treaty (in green) or that have signed the treaty but are not necessarily bound by it (in blue). www.statista.com/chart/18738/countries-that-are-signatories-or-parties-to-the-1979-moon-treaty/
"In May, a group of international scientists assembled near Washington, D.C., to tackle an alarming problem: what to do about an asteroid hurtling toward Earth. ... True, the chances of a civilization-destroying asteroid impact are exceedingly small, at least in the foreseeable future. Asteroid strikes that cause regional devastation and catastrophic global climate change occur, on average, only about once every 100,000 years or more. ... Over the past two decades, asteroid hunters with NASA and other international space agencies have identified and tracked the orbits of more than 20,000 asteroids—also known as near-Earth objects—that pass through our neighborhood as they orbit the sun. Of those, about 2,000 are classified as potentially hazardous—asteroids that are large enough (greater than 150 yards in diameter) to cause local destruction and that come close enough to Earth to someday pose a threat. ... On an unlucky Friday the 13th in April 2029, the thousand-foot-wide asteroid Apophis will pass a mere 19,000 miles from Earth—closer than the satellites that bring us DISH TV. But here’s the bad news: Hundreds of thousands of other near-Earth asteroids, both large and small, haven’t been identified. We have no idea where they are and where they are going. On Feb. 15, 2013, a relatively small, 60-foot-wide asteroid traveling at 43,000 mph exploded in the atmosphere near the Russian city of Chelyabinsk, sending out a blast wave that injured 1,500 people. No one had seen the asteroid coming. ... Nor is it clear that we could deflect a small but dangerous asteroid heading our way even if we did spot it. No asteroid-deflection method has ever been tested in real-space conditions.... Over its 4.5 billion-year history, Earth has been hit millions of times by powerful asteroids, and it will inevitably be hit again—whether two centuries from now or next Tuesday. 
So it isn’t a question of whether humankind will have to confront the prospect of a destructive asteroid hurtling our way; it is only a question of when." www.wsj.com/articles/the-asteroid-peril-isnt-science-fiction-11562339356
Should artificial intelligence have rights? This article by a pair of U.S. philosophy professors suggests that the issue has important parallels in scientific ethics.
"Universities across the world are conducting major research on artificial intelligence (AI), as are organisations such as the Allen Institute, and tech companies including Google and Facebook. A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these AIs might deserve the ethical protections we typically give to animals. "Discussions of ‘AI rights’ or ‘robot rights’ have so far been dominated by questions of what ethical obligations we would have to an AI of humanlike or superior intelligence – such as the android Data from Star Trek or Dolores from Westworld. But to think this way is to start in the wrong place, and it could have grave moral consequences. Before we create an AI with humanlike sophistication deserving humanlike ethical consideration, we will very likely create an AI with less-than-human sophistication, deserving some less-than-human ethical consideration. We are already very cautious in how we do research that uses certain nonhuman animals. Animal care and use committees evaluate research proposals to ensure that vertebrate animals are not needlessly killed or made to suffer unduly. ... Biomedical research is carefully scrutinised, but AI research, which might entail some of the same ethical risks, is not currently scrutinised at all. Perhaps it should be. "Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them. ... In the case of research on animals and even on human subjects, appropriate protections were established only after serious ethical transgressions came to light (for example, in needless vivisections, the Nazi medical war crimes, and the Tuskegee syphilis study). 
With AI, we have a chance to do better. "We propose the founding of oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists – AI designers, consciousness scientists, ethicists and interested community members. These committees will be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research." aeon.co/ideas/ais-should-have-the-same-ethical-protections-as-animals Sometimes the most interesting questions in moral philosophy arise not from teasing apart right vs. wrong but from balancing right vs. right. For example, which is the more important good: privacy or security? People of good conscience may have different opinions on that question, which is at the heart of the use of facial recognition software, for instance (and many science fiction plot lines). Although the use of facial recognition software has proliferated in the U.S. and elsewhere over the last several years, last month San Francisco became the first major city to ban the use of facial recognition software by law enforcement, arguing that privacy from government observation outweighed any security benefits the technology might convey.
www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html Scenario A: Scientists discover a compound that protects against Alzheimer's. Scenario B: Scientists discover a compound that enhances human cognition. Americans' ethical intuition tends to support cure and prevention (and Scenario A) but to frown upon performance enhancement (and Scenario B).
But what happens when it is the same compound in both scenarios? This article looks at the research and ethical questions surrounding the hormone Klotho, named for one of the Greek Fates: www.nytimes.com/2019/04/02/health/klotho-brain-enhancement-dementia-alzheimers.html The future of artificial intelligence is being steered by just a handful of companies around the world: Google, Microsoft, Amazon, Facebook, IBM, and Apple (the so-called G-MAFIA) and Baidu, Alibaba, and Tencent (all in China). This article from Science News reviews the new book The Big Nine about the nine biggest players in AI, their leadership, their interests, their incentives, and their possible impact on the future. www.sciencenews.org/article/nine-companies-steering-future-artificial-intelligence
Will interacting with Alexa or Siri make our kids ruder? Will bots designed to maximize return make us likely to behave less generously? Will we tell our digital assistants things we will not tell our friends and partners? This article from The Atlantic suggests the answers may depend, in part, on how the artificial intelligence we will be interacting with has been designed.
"Radical innovations have previously transformed the way humans live together, of course. The advent of cities sometime between 5,000 and 10,000 years ago meant a less nomadic existence and a higher population density. We adapted both individually and collectively (for instance, we may have evolved resistance to infections made more likely by these new circumstances). More recently, the invention of technologies including the printing press, the telephone, and the internet revolutionized how we store and communicate information. As consequential as these innovations were, however, they did not change the fundamental aspects of human behavior that comprise what I call the “social suite”: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching. The basic contours of these traits remain remarkably consistent throughout the world, regardless of whether a population is urban or rural, and whether or not it uses modern technology. But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are—not just in our direct interactions with the machines in question, but in our interactions with one another. Consider some experiments from my lab at Yale, where my colleagues and I have been exploring how such effects might play out." www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204 What gives life meaning? For some philosophers (and writers, as students in my online comparative science fiction class discovered this week), it is mortality itself. This article, co-published by The New York Times and New Philosopher magazine (Australia) elaborates on this point:
"Consider this fact of modern life: Nearly all of the technological products that we buy and use are designed with planned obsolescence in mind. They are built specifically to fail after a relatively short period — one year, two, maybe five. If you doubt that, think about how often you have to replace your smartphone. Gadgets are designed to die. ... In her new book, 'Natural Causes: An Epidemic of Wellness, the Certainty of Dying, and Killing Ourselves to Live Longer,' Barbara Ehrenreich writes: 'You can think of death bitterly or with resignation, as a tragic interruption of your life, and take every possible measure to postpone it. Or, more realistically, you can think of life as an interruption of an eternity of personal nonexistence, and seize it as a brief opportunity to observe and interact with the living, ever-surprising world around us.' I was taken by Ms. Ehrenreich’s formulation, this notion that our experience of life, though unique to us, is just part of a broader continuum. Our time here is but a blip, and when we leave, the great world continues to spin. As such, the appreciation of our own lives has much to do with the ever-increasing awareness of its relative brevity. It is this — an awareness and acceptance of our own mortality — that makes us human. And it is the impetus, I’d argue, for living our lives to the fullest. ... It is rare for us to give much thought to the challenges we would face if there were no end to our time on earth. Would the condition of our bodies affect the condition of our minds? Would everyone live forever, or just those with the means to afford it? Could you opt out of eternal life? Would inequality dissolve, or would it become even more of an intractable problem? Would we still gain the empathy, wisdom and insight that can come with age? Technological breakthroughs can be life-changing. But I believe that our humanity — our humanness — is inextricably intertwined with the fact of our mortality. 
And no scientific fountain of youth can ever cause that to change." www.nytimes.com/2018/08/18/opinion/life-is-short-thats-the-point.html The issue of who can do what in space is taking on more urgency as a growing number of private companies and individual countries are launching missions to explore, colonize, commercialize, and militarize space. This geo-graphic shows the mix of countries that currently have satellites in orbit and the uses of those satellites. It is worth noting that multinational collaborations rank fourth on the list. www.statista.com/chart/17107/countries-with-the-most-satellites-in-space/
There's little doubt that artificial intelligence is becoming more human: AI now lies and conceals information in order to succeed.
"Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” ... In some early results, the [AI] agent was doing well — suspiciously well. ... Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one. The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map. So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect. In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! ... This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new." techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task Many, many countries have plans for space missions. 
India has plans for a lunar colony, and China recently launched its mission to the far side of the moon. What we lack is a plan for how we behave when we get to space. At present, “space” is seen as something that belongs to all of humanity more or less equally because no one can enforce a claim to any part of it. But with the technology and economics that would make enforcing such claims possible closer by the day, little thought has been given to the rules of engagement -- should the countries, companies, or individuals who do go to space be allowed to do whatever they want there (e.g., mine asteroids, build resorts, build colonies, build prisons, launch missions into deep space)? My online high school literature class ("Who We Are & What We Dream: Comparative Science Fiction") recently considered these questions. This article, though, argues this issue is not science fiction: it is a crucial policy concern right now. aeon.co/essays/we-urgently-need-a-legal-framework-for-space-colonisation
For Halloween: some people find it scary to give artificial intelligence the discretion to kill.
"An autonomous missile under development by the Pentagon uses software to choose between targets. An artificially intelligent drone from the British military identifies firing points on its own. Russia showcases tanks that don’t need soldiers inside for combat. A.I. technology has for years led military leaders to ponder a future of warfare that needs little human involvement. But as capabilities have advanced, the idea of autonomous weapons reaching the battlefield is becoming less hypothetical. The possibility of software and algorithms making life-or-death decisions has added new urgency to efforts by a group called the Campaign to Stop Killer Robots that has pulled together arms control advocates, human rights groups and technologists to urge the United Nations to craft a global treaty that bans weapons without people at the controls. Like cyberspace, where there aren’t clear rules of engagement for online attacks, no red lines have been defined over the use of automated weaponry. Without a nonproliferation agreement, some diplomats fear the world will plunge into an algorithm-driven arms race. In a speech at the start of the United Nations General Assembly in New York on Sept. 25, Secretary General António Guterres listed the technology as a global risk alongside climate change and growing income inequality. 'Let’s call it as it is: The prospect of machines with the discretion and power to take human life is morally repugnant,' Mr. Guterres said. ... "In 2016, the Pentagon highlighted its capabilities during a test in the Mojave Desert. More than 100 drones were dropped from a fighter jet in a disorganized heap, before quickly coming together to race toward and encircle a target. ... The drones were programmed to communicate with each other independently to collectively organize and reach the target. 
'They are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,' William Roper, director of the Pentagon’s strategic capabilities office, said at the time. To those fearful of the advancement of autonomous weapons, the implications were clear. 'You’re delegating the decision to kill to a machine,' said Thomas Hajnoczi, the head of the disarmament department for the Austrian government. 'A machine doesn’t have any measure of moral judgment or mercy.'" www.nytimes.com/2018/10/19/technology/artificial-intelligence-weapons.html

For Halloween, Philosophy Now (UK) has devoted its October/November issue to a variety of philosophical issues raised by Mary Shelley's Frankenstein. philosophynow.org/issues/128
Technology is typically neither moral nor immoral; it depends on the use to which it is put. The Pentagon's Defense Advanced Research Projects Agency (DARPA) -- one of the earliest funders of the precursor to the internet, among other things -- recently released a report about its Insect Allies research program. Insect Allies proposes speeding up genetic modification of crops, for protective purposes, by having "millions of insects carrying viruses descend upon crops and then genetically modify them." Surprising no one but DARPA, perhaps, a group of independent scientists and lawyers published a warning in Science arguing that the Insect Allies project should be shut down: the research may run afoul of a 1975 treaty banning biological weapons and could be used offensively, by the U.S. and other countries. Are some research directions best left unexplored? Who decides? These were questions students in my online comparative science fiction class wrestled with this week. www.nytimes.com/2018/10/04/science/darpa-gene-editing.html
Immortality is a central element of many religions and more than one science fiction plot line. But is it desirable? This article looks at the arguments against immortality, from boredom to its paradoxical undermining of what humans want in the first place.
"[In 1973] the English moral philosopher Bernard Williams suggested that living forever would be awful, akin to being trapped in a never-ending cocktail party. This was because after a certain amount of living, human life would become unspeakably boring. We need new experiences in order to have reasons to keep on going. But after enough time has passed, we will have experienced everything that we, as individuals, find stimulating. ... The moral philosopher Samuel Scheffler at New York University has suggested that the real problem with a fantasy of immortality is that it doesn’t make sense as a coherent desire. Scheffler points out that human life is intimately structured by the fact that it has a fixed (even if usually unknown) time limit. We all start with a birth, then pass through many stages of life, before definitely ending in death. In turn, Scheffler argues, everything that we value – and thus can coherently desire in an essentially human life – must take as given the fact that we are temporally bounded beings. Sure, we can imagine what it would be like to be immortal, if we find that an amusing way to pass the time. But doing so will obscure a basic truth: that because death is a fixed fact, everything that human beings value makes sense only in light of our time being finite, our choices being limited, and our each getting only so many goes before it’s all over. Scheffler’s case is thus not simply that immortality would make us miserable (although it probably would). It’s that, if we had it, we would cease to be distinctively human in the way that we currently are. But then, if we were somehow to attain immortality, it wouldn’t get us what we want from it: namely, for it to be some version of our human selves that lives forever." aeon.co/essays/theres-a-big-problem-with-immortality-it-goes-on-and-on
Blog sharing news about geography, philosophy, world affairs, and outside-the-box learning