This recent piece from MIT Technology Review considers AI weapons in light of evolving technology:
"[I]ntelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. ... Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker’s tool kit. And they’ve quietly become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? ... For a long time, the idea of supporting a human decision by computerized means wasn’t such a controversial prospect. Retired Air Force lieutenant general Jack Shanahan says the radar on the F4 Phantom fighter jet he flew in the 1980s was a decision aid of sorts. It alerted him to the presence of other aircraft, he told me, so that he could figure out what to do about them. But to say that the crew and the radar were coequal accomplices would be a stretch. ... "The Ukrainian army uses a program, GIS Arta, that pairs each known Russian target on the battlefield with the artillery unit that is, according to the algorithm, best placed to shoot at it. A report by The Times, a British newspaper, likened it to Uber’s algorithm for pairing drivers and riders, noting that it significantly reduces the time between the detection of a target and the moment that target finds itself under a barrage of firepower. Before the Ukrainians had GIS Arta, that process took 20 minutes. Now it reportedly takes one. Russia claims to have its own command-and-control system with what it calls artificial intelligence, but it has shared few technical details. ... Like any complex computer, an AI-based tool might glitch in unusual and unpredictable ways; it’s not clear that the human involved will always be able to know when the answers on the screen are right or wrong. In their relentless efficiency, these tools may also not leave enough time and space for humans to determine if what they’re doing is legal. ... "The scholar M.C. Elish has suggested that a human who is placed in this kind of impossible loop could end up serving as what she calls a “moral crumple zone.” In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the “decision” will absorb the blame and protect everyone else along the chain of command from the full impact of accountability. ... The gunsight never pulls the trigger. The chatbot never pushes the button. But each time a machine takes on a new role that reduces the irreducible, we may be stepping a little closer to the moment when the act of killing is altogether more machine than human, when ethics becomes a formula and responsibility becomes little more than an abstraction. If we agree that we don’t want to let the machines take us all the way there, sooner or later we will have to ask ourselves: Where is the line? " www.technologyreview.com/2023/08/16/1077386/war-machines
History has shown, repeatedly, that philosophies that extol or justify a particular action often find traction after, not before, people have already taken those actions for economic reasons. With Japan's population shrinking and domestic consumption shrinking along with it, it is not particularly surprising that an anti-growth book has become a best seller in Japan. This article from the New York Times looks at the philosophy of "degrowth communism" the book advocates: www.nytimes.com/2023/08/23/business/kohei-saito-degrowth-communism.html
Last Sunday the New York Times ran a question in an etiquette/relationship column that invites interesting ethical consideration as well: a couple that spent $50,000 cloning a favorite, older dog is breaking up; the woman wants both dogs, on the grounds that the original dog was hers and its cloned "offspring" should therefore also be hers; the man thinks it's only fair that they each get a dog. Should it matter that the DNA used to create the clone came from a dog that belonged to the woman? Should it matter that most of the money spent on the cloning came from the man? Should it matter that the cloned puppy has a significantly longer life expectancy than the original dog, which is 12 years old? Is it ethical to clone a dog when shelters are full of dogs waiting to be adopted? Is it true that it's only fair that each partner should get a dog? Is it ethical to spend $50,000 to clone a dog when humans die from a lack of routine medical care? Should it matter that the dogs may prefer to stay together? Feel free to share your thoughts as a comment to this post. (For the original article, see www.nytimes.com/2023/08/30/style/pet-custody-dog-cloning.html.)
Big money in Silicon Valley is being spent on a quest to extend, even indefinitely, the human lifespan. Less thought is being given to what spending more time as an old person might actually be like. This piece, by an 89-year-old former professor of psychology, considers the realities underpinning the ethics of extending old age:
"Biophysicists have calculated that, with maximal improvement in health care, the biological clock for humans must stop between 120-150 years. Biotechnology firms such as Calico, Biosplice and Celgene are putting this to the test by scrambling to extend our normal lifespan as far as they can. However, a basic problem, at least thus far, is that a sustained quality of life has not been extended to keep up with our expanded longevity. As people get older, they are not gaining economic security, maintaining their usual level of independence, extending their social relationships, or avoiding chronic illnesses. For instance, about 85 per cent of older adults in the United States have at least one common chronic illness such as diabetes, heart disease, arthritis or Alzheimer’s. Thus, many routine tasks such as bathing, making the bed, doing errands, shopping, picking up items off the floor, or walking without falling cannot be performed without help. In short, as we live longer we are also unwell for longer. Psychological depression, caused by physical illness plus associated medical expenses, often contributes to even more decline. ... Undesirable, but necessary, medical compromises gradually squeeze the vitality out of a chronically ill person. In most cases, death is not a sudden event at the end of life (except as a legally defined physical state). Rather, it is a long process of progressive functional decline. ... What value is there in existing if the ability to do and experience what you most value becomes unavailable?" psyche.co/ideas/efforts-to-expand-the-lifespan-ignore-what-its-like-to-get-old Philosophy Now (UK) sponsors a question-of-the-month, inviting readers to submit responses (not to exceed 400 words) on a salient question in philosophy. The next question is, "What are the limits of knowledge?" Submissions are due by Oct. 16. For more information -- or to read the published responses to last month's question, "How will humanity end?" which, as one reader points out, "can be thought of in at least two different ways: (1) How will humans die out?, or (2) How will the characteristics that make us human cease to exist? Humanity ends not only if there are no more people, but also if the traits that define us as ‘humans’ disappear" -- see philosophynow.org/issues/157/How_Will_Humanity_End#1
When people change because of dementia or brain damage, for example, should their new wishes be respected, even if they fly in the face of their earlier stated preferences? This article from The New York Times Magazine is a case study, a cautionary tale, and a philosophical thought experiment all rolled into one: www.nytimes.com/2023/05/09/magazine/dementia-mother.html
"In the philosophical literature on dementia, scholars speak of a contest between the “then-self” before the disease and the “now-self” after it: between how a person with dementia seems to want to live and how she previously said she would have wanted to live. Many academic papers on the question begin in the same way: by telling the story of a woman named Margo, who was the subject of a 1991 article in The Journal of the American Medical Association (JAMA), by a physician named Andrew Firlik. Margo, according to the article, was 55 and had early-onset Alzheimer’s disease and couldn’t recognize anyone around her, but she was very happy. She spent her days painting and listening to music. She read mystery novels too: often the same book day after day, the mystery remaining mysterious because she would forget it. “Despite her illness, or maybe somehow because of it,” Firlik wrote, “Margo is undeniably one of the happiest people I have known.” A couple of years after the JAMA article was published, the philosopher and constitutional jurist Ronald Dworkin revisited the happy Margo in his 1993 book, “Life’s Dominion.” Imagine, he asked readers, that years ago, when she was fully competent, Margo had written a formal document explaining that if she ever developed Alzheimer’s disease, she should not be given lifesaving medical treatment. “Or even that in that event she should be killed as soon and as painlessly as possible?” What was an ethical doctor to do? Should he kill now-Margo, even though she was happy, because then-Margo would have wanted to be dead?" When confronted with the famous trolley problem -- pulling a lever to save, say, five people by redirecting the trolley to run over one person instead of five -- most people say pulling the lever is the right thing to do. Of course, most people assume they will not be the one person the trolley runs over. In China, the government recently pulled the trolley lever, redirecting flood waters to save Beijing and Tianjin (total population: about 36 million) but destroying homes and businesses and forcing the evacuation of about 1 million people in low-lying communities in nearby Hebei province. Not surprisingly, those 1 million people were unhappy about this. China's dilemma is likely to become more common. Many governments are quietly moving from strategies to prevent climate change to strategies that might mitigate the effects and help populations adapt to climate realities. In extreme circumstances, these strategies may require decisions akin to the trolley problem. Which communities are saved? Which communities are sacrificed? www.nytimes.com/2023/08/04/world/asia/china-flood-beijing-rain.html
Few of us have first-hand experience with the many issues shaping our lives. Instead, we rely on trusted second- and third-hand sources for information and interpretation. Comparing the answers that different chatbots give to the same questions highlights important epistemological issues, including (a) how what we "know" depends on the information inputs we have been trained on and (b) how technology is shaping those information inputs. Two New York Times reporters talked to chatbots designed in the U.S. and in China, in Chinese, on issues ranging from Tiananmen Square and Ukraine to trivia and Chinese rap and found the differences revealing: www.nytimes.com/2023/07/14/business/baidu-ernie-openai-chatgpt-chinese.html
PLATO is offering a series of six-week virtual philosophy courses for high school students (only) in 2023-24: "Climate Justice" in the fall; "Truth, Opinion, and Misinformation" in the winter; and "AI, Technology, and Ethics" in the spring. The classes are free, but to be considered, students must submit applications by Aug. 31. https://www.plato-philosophy.org/high-school-students/
The New York Times runs an ethics column each week in the Sunday magazine in which New York University philosophy professor Kwame Anthony Appiah responds to readers' ethics questions. Two of the questions from last Sunday's magazine dealt with the ethics of allocating and reselling a scarce commodity, Taylor Swift tickets :-). www.nytimes.com/2023/06/30/magazine/taylor-swift-eras-tour-tickets-ethics.html and www.nytimes.com/2023/07/05/magazine/concert-ticket-resale-ethics.html
Have a question for Jesus but maybe feel like you need a direct line? You can ask AI Jesus on Twitch. Ask_Jesus describes itself as "an experimental channel allowing viewers to ask questions to an AI trained after Jesus and the teachings of the bible. Whether you're seeking spiritual guidance, looking for a friend, or simply want someone to talk to, you can join on the journey through life and discover the power of faith, hope, and love." AI Jesus treats all questions (no matter how patently silly) with nonjudgmental calm and thoughtfulness, which some users find comforting, even inspiring. Not surprisingly, religious leaders are not fans of AI Jesus. Even though Ask_Jesus has been trained on Christian scripture, some researchers have warned that, because artificial intelligence adapts based on interactions with humans, Ask_Jesus (or whatever comes next) could be corrupted over time by its interactions with users, which could take "followers" in a somewhat, or dramatically, different direction. You can check it out for yourself at www.twitch.tv/ask_jesus
Because many social media platforms remove graphic content, often with the help of AI, images of human rights abuses that might otherwise bear witness and aid prosecutions are being deleted without being archived, according to the BBC (UK). www.bbc.com/news/technology-65755517
Less than two weeks after the U.S., Canada, and other countries spent untold time and money trying to rescue tourists who had set out to visit the Titanic, Virgin Galactic sold 800 tickets to take tourists to the edge of space. Two hundred of these tickets reportedly went for $450,000 each. Now some people are beginning to ask, "Who should pay for rescue efforts when wealthy adventure travelers run into trouble?" apnews.com/article/titanic-tourist-sub-passengers-cost-ee2a6358b36e48326b3977090fd9311b

In 1995, David Chalmers, then a newly minted philosophy PhD in his late 20s, published an influential article laying out what he termed the "hard problem" of consciousness: what, exactly, is consciousness, and from where does the sensation of consciousness arise? A few years later, in 1998, neuroscientist Christof Koch, today chief scientist of the Allen Institute for Brain Science in Seattle, bet Chalmers that within 25 years the "hard problem" would be solved, that scientists would understand the underpinnings of consciousness. On Friday, at the Association for the Scientific Study of Consciousness meeting in New York City, Koch and Chalmers both declared Chalmers the winner of the bet. www.nature.com/articles/d41586-023-02120-8
Morality is in decline. Everyone says so. But is it really? According to a paper recently published in Nature, surveys conducted in the U.S. and 59 other countries show that respondents have been reporting a decline in morality for at least 70 years. The survey data show this perception has been shared by people of various political ideologies, races, ages, genders, and educational levels. But when researchers analyzed decades of surveys asking people to assess the morality of their contemporaries, the ratings did not change over time, suggesting that the perception of moral decline is a persistent illusion, and one that, the researchers found, can be manipulated. www.nature.com/articles/s41586-023-06137-x
Are the humanities being cast aside just when we need them most? New York Times columnist Maureen Dowd makes this argument:
"Trustees at Marymount University in Virginia voted unanimously in February to phase out majors such as English, history, art, philosophy and sociology. How can students focus on slowly unspooling novels when they have disappeared inside the kinetic world of their phones, lured by wacky videos and filtered FOMO photos? Why should they delve into hermeneutics and epistemology when they can simply exchange flippant, shorthand tweets and texts? In a world where brevity is the soul of social media, what practical use can come from all that voluminous, ponderous reading? ... Strangely enough, the humanities are faltering just at the moment when we’ve never needed them more. Americans are starting to wrestle with colossal and dangerous issues about technology, as A.I. begins to take over the world. ... 'There is no time in our history in which the humanities, philosophy, ethics and art are more urgently necessary than in this time of technology’s triumph,' said Leon Wieseltier, the editor of Liberties, a humanistic journal. 'Because we need to be able to think in nontechnological terms if we’re going to figure out the good and the evil in all the technological innovations. Given society’s craven worship of technology, are we going to trust the engineers and the capitalists to tell us what is right and wrong?'" www.nytimes.com/2023/05/27/opinion/english-humanities-ai.html Until the 1840s, vegetarians were referred to as "Pythagoreans" because, in addition to thinking about right angles, the pre-Socratic Greek philosopher Pythagoras was associated with a school of Greek philosophers that shunned meat eating, in part because Pythagoras believed in the transmigration of the soul, including the migration of the soul between people and animals. www.psychologytoday.com/us/blog/hide-and-seek/202305/how-vegetarianism-was-born-out-of-philosophy-and-mysticism
As more aspects of our lives -- and in this case our books -- are digitized and made available electronically, it is useful to remember that, legally at least, we do not own that digitized content. We are merely licensing it, which allows the service that provides it to change it at will. Just as no one asks you to approve changes to your email interface, no one will ask your permission to alter your digital "friend" or the words in your ebooks. www.nytimes.com/2023/04/04/arts/dahl-christie-stine-kindle-edited.html
Previously, discussion of machine intelligence/consciousness has approached the issue from the perspective of an artificial being becoming intelligent/conscious. This piece by two professors at Peking University in Philosophy Now (UK) invokes the famous Ship of Theseus paradox (and the sorites paradox) to come at the issue from the other direction: an intelligent being becoming a machine. As humans adopt more technological enhancements to their biology -- including neural enhancements, integrations, and even replacements -- at some point a human may become, if not a machine, at least more artificial than natural, which presumably would yield a conscious, intelligent artificial being. philosophynow.org/issues/155/Can_Machines_Be_Conscious
According to a law professor at the University of Dayton, growing epistemic pluralism – wide-ranging views on empirical facts – and disagreements over epistemic dependence – who constitutes a trusted source of information – are contributing to polarization in American political life.
"Without the government or an official church telling people what to think, we all have to decide for ourselves – and that inevitably leads to a diversity of moral viewpoints. ... [T]he same is true of beliefs about matters of fact. In the U.S., legal rules and social norms attempt to ensure that the state cannot constrain an individual’s freedom of belief, whether that be about moral values or empirical facts. This intellectual freedom contributes to epistemic pluralism. ... Another contributor to epistemic pluralism is just how specialized human knowledge has become. No one person could hope to acquire the sum total of all knowledge in a single lifetime. This brings us to the second relevant concept: epistemic dependence. Knowledge is almost never acquired firsthand, but transmitted by some trusted source. To take a simple example, how do you know who the first president of the United States was? No one alive today witnessed the first presidential inauguration. You could go to the National Archives and ask to see records, but hardly anyone does that. Instead, Americans learned from an elementary school teacher that George Washington was the first president, and we accept that fact because of the teacher’s epistemic authority. There’s nothing wrong with this; everyone gets most knowledge that way. There’s simply too much knowledge for anyone to verify independently all the facts on which we routinely rely. ... However, this raises a tricky problem: Who has sufficient epistemic authority to qualify as an expert on a particular topic? Much of the erosion of our shared reality in recent years seems to be driven by disagreement about whom to believe." theconversation.com/why-cant-americans-agree-on-well-nearly-anything-philosophy-has-some-answers-193055 If a large language model, like GPT-4, is trained on the writings of a famous philosopher, would you be able to have a conversation with that philosopher? Would that revolutionize philosophy? Or would it vitiate philosophy by inserting AI-generated extrapolations? Here's an "interview" with Rene Descartes, with answers generated by GPT-4: jimmyalfonsolicon.substack.com/p/interviews-with-gtp-philosophers
Today marks the 78th anniversary of Dietrich Bonhoeffer's execution by the Nazis. Bonhoeffer was a young German theologian and moral philosopher who, days after Adolf Hitler's 1933 installation as German chancellor, delivered a radio address warning Germans that the idolatrous worship of the Führer (leader) may instead be a cult of the Verführer (misleader, or seducer). Ten years later, from prison, Bonhoeffer penned a letter reflecting on lessons learned over the prior decade, noting that "memory, the recalling of lessons we have learnt, is also part of responsible living." Included in Bonhoeffer's 1943 "reckoning" is this famous passage:
"Folly is a more dangerous enemy to the good than evil. One can protest against evil; it can be unmasked and, if need be, prevented by force. Evil always carries the seeds of its own destruction, as it make people, at the least, uncomfortable. Against folly we have no defence. Neither protests nor force can touch it; reasoning is no use; facts that contradict personal prejudices can simply be disbelieved, they can just be pushed aside as trivial exceptions. So the fool, as distinct from the scoundrel, is completely self-satisfied; in fact, he can easily become dangerous, as it does not take much to make him aggressive. A fool must therefore be treated more cautiously than a scoundrel; we shall never again try to convince a fool by reason, for it is both useless and dangerous. ... If we look more closely, we see that any violent display of power, whether political or religious, produces an outburst of folly in a large part of mankind; indeed, this seems actually to be a psychological and sociological law: the power of some needs the folly of others. ... The fact that the fool is often stubborn must not mislead us into thinking that he is independent. One feels in fact, when talking to him, that one is dealing, not with the man himself, but with slogans, catchwords, and the like, which have taken hold of him. He is under a spell, he is blinded, his very nature is being misused and exploited. Having this become a passive instrument, the fool will be capable of any evil and at the same time incapable of seeing that it is evil." from "After Ten Years: A Reckoning Made at New Year 1943," in Letters & Papers from Prison (ed. Eberhard Bethge) What is "cognitive liberty"? This interview with Duke University professor Nita Farahany about her new book The Battle for Your Brain lays out her argument that what and how we think should be protected from brain-monitoring technology: www.wwno.org/npr-news/npr-news/2023-03-14/this-law-and-philosophy-professor-warns-neurotechnology-is-also-a-danger-to-privacy
Learning to lie is considered an important milestone in child development. Voluntarily restricting one's own lying is considered an important milestone in moral development. Both of those make the recent news about OpenAI's new GPT-4 lying to trick a human into completing a task it could not -- a task designed to block a machine from proceeding -- rich fodder for philosophical discussion. Is this a sign of increasing machine intelligence (and is that good or bad)? How does one embed moral code (and whose moral code) in machine learning? Should this line of experimentation proceed -- and is it even realistic to suggest a halt at this point? www.iflscience.com/gpt-4-hires-and-manipulates-human-into-passing-captcha-test-68016
This article from The Atlantic argues that policing language not only drains language of its emotional power but also lets institutions imagine they have done people a service by changing words instead of taking the action necessary to change lives:
"Equity-language guides are proliferating among some of the country’s leading institutions, particularly nonprofits. The American Cancer Society has one. So do the American Heart Association, the American Psychological Association, the American Medical Association, the National Recreation and Park Association, the Columbia University School of Professional Studies, and the University of Washington. The words these guides recommend or reject are sometimes exactly the same, justified in nearly identical language. This is because most of the guides draw on the same sources from activist organizations: A Progressive’s Style Guide, the Racial Equity Tools glossary, and a couple of others. The guides also cite one another. The total number of people behind this project of linguistic purification is relatively small, but their power is potentially immense. The new language might not stick in broad swaths of American society, but it already influences highly educated precincts, spreading from the authorities that establish it and the organizations that adopt it to mainstream publications, such as this one. Although the guides refer to language “evolving,” these changes are a revolution from above. They haven’t emerged organically from the shifting linguistic habits of large numbers of people. They are handed down in communiqués written by obscure “experts” who purport to speak for vaguely defined “communities,” remaining unanswerable to a public that’s being morally coerced. A new term wins an argument without having to debate. ... "The whole tendency of equity language is to blur the contours of hard, often unpleasant facts. This aversion to reality is its main appeal. Once you acquire the vocabulary, it’s actually easier to say people with limited financial resources than the poor. The first rolls off your tongue without interruption, leaves no aftertaste, arouses no emotion. The second is rudely blunt and bitter, and it might make someone angry or sad. Imprecise language is less likely to offend. Good writing—vivid imagery, strong statements—will hurt, because it’s bound to convey painful truths. ... "The battle against euphemism and cliché is long-standing and, mostly, a losing one. What’s new and perhaps more threatening about equity language is the special kind of pressure it brings to bear. The conformity it demands isn’t just bureaucratic; it’s moral. But assembling preapproved phrases from a handbook into sentences that sound like an algorithmic catechism has no moral value. Moral language comes from the struggle of an individual mind to absorb and convey the truth as faithfully as possible. ... "The rationale for equity-language guides is hard to fault. They seek a world without oppression and injustice. Because achieving this goal is beyond anyone’s power, they turn to what can be controlled and try to purge language until it leaves no one out and can’t harm those who already suffer. ... This huge expense of energy to purify language reveals a weakened belief in more material forms of progress. If we don’t know how to end racism, we can at least call it structural. The guides want to make the ugliness of our society disappear by linguistic fiat. ... The project of the guides is utopian, but they’re a symptom of deep pessimism. They belong to a fractured culture in which symbolic gestures are preferable to concrete actions...." www.theatlantic.com/magazine/archive/2023/04/equity-language-guides-sierra-club-banned-words/673085 |