UC Santa Cruz's Center for Public Philosophy and Baskin School of Engineering are teaming up to create a card game to stimulate conversation about ethics and technology. You can submit possible questions or get on the mailing list for notification when the cards are released here: techfutures52.ucsc.edu/
We know what plagiarism is, but what about the opposite: attributing work that *is* yours to someone else, real or fictional? This article from Aeon looks at Kierkegaard and other philosophers who wrote under fictional names -- to try out other ideas, to shift reader expectations, or to try on other personae -- as well as philosophers who wrote as other people: were the words Plato gave Socrates in his dialogues those of the real Socrates, or were they Plato's own? On the one hand, when an idea is interesting, does it matter who wrote it? But then the article takes up the case of a 17th-century Ethiopian autobiography/philosophical treatise now believed to be (maybe) a forgery by a non-Ethiopian writer centuries later. In this case, the who and the why seem to matter a great deal, not just because of the ideas but because of what authorship represents. aeon.co/essays/from-the-pseudo-to-the-forger-the-value-of-faked-philosophy
User data suggests we are asking ChatGPT and other generative AI bots ever more questions, related both to our work and to our personal lives. But according to the authors of a recent study published in Scientific Reports, ChatGPT not only "influence[s] users’ moral judgment," it "corrupts rather than improves its users’ moral judgment," in part by providing inconsistent moral advice. Just as many users now defer to Google Maps' judgment over their own, users of ChatGPT, according to the study, "readily follow moral advice from bots even if there are red flags warning them against it" and "underestimate how much they are influenced" by the bots. www.nature.com/articles/s41598-023-31341-0
In light of last weekend's Dutch election, in which Geert Wilders's right-wing, anti-immigrant Party for Freedom won the largest share of seats, it seemed useful to share this piece from Foreign Affairs earlier this year by Georgetown international affairs professor Charles King. King walks readers through an ascendant new political conservatism that is no longer rooted in expanding individual liberty -- reflected in Barry Goldwater's 1960 argument that "The Conservative looks upon politics as the art of achieving the maximum amount of freedom for individuals that is consistent with the maintenance of the social order" -- but is instead a weaving together "of religion, personal morality, national culture, and public policy" that considers the Enlightenment a "wrong turn" in governing principles. www.foreignaffairs.com/reviews/antiliberal-revolution
Previously in this space, I have mentioned Silicon Valley's embrace of "effective altruism" -- a particular, and somewhat extreme, variant of utilitarianism that featured prominently in the recent trial of crypto entrepreneur Sam Bankman-Fried -- but recently the buzz in Silicon Valley has been about Marc Andreessen's "techno-optimist manifesto." A co-founder of the early web browser Netscape, Andreessen has spent the last couple of decades as one of Silicon Valley's biggest venture capital players. His newest manifesto has been described as a mash-up of Nietzsche, libertarianism, and social Darwinism with a sprinkling of nuclear-powered science fiction. It advocates for turning up the accelerator on technological development -- to wit, "We believe in accelerationism – the conscious and deliberate propulsion of technological development.... We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone – we are literally making sand think. ... We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder. ... We believe the global population can quite easily expand to 50 billion people or more, and then far beyond that as we ultimately settle other planets. ... We believe in adventure. Undertaking the Hero’s Journey, rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community. ... Victim mentality is a curse in every domain of life, including in our relationship with technology – both unnecessary and self-defeating. We are not victims, we are conquerors." It concludes with a list of the enemies of techno-optimists, which specifically includes those who espouse "tech ethics." 👀 You can find the entire manifesto, and sundry critiques of it, online.
According to this article from MIT Technology Review, philosopher David Chalmers, famous for the "hard problem" of identifying the essence of consciousness, gives artificial intelligence better than one chance in five of developing consciousness in the next 10 years. "AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. ... The cerebellum, a brain region at the base of the skull that resembles a fist-size tangle of angel-hair pasta, appears to play no role in conscious experience, though it is crucial for subconscious motor tasks like riding a bike; on the other hand, feedback connections—for example, connections running from the “higher,” cognitive regions of the brain to those involved in more basic sensory processing—seem essential to consciousness. (This, by the way, is one good reason to doubt the consciousness of LLMs: they lack substantial feedback connections.) ... Every expert has a preferred theory of consciousness, but none treats it as ideology—all of them are eternally alert to the possibility that they have backed the wrong horse. In the past five years, consciousness scientists have started working together on a series of “adversarial collaborations,” in which supporters of different theories come together to design neuroscience experiments that could help test them against each other. The researchers agree ahead of time on which patterns of results will support which theory. Then they run the experiments and see what happens. ... In effect, this strategy recognizes that the major theories of consciousness have some chance of turning out to be true—and so if more theories agree that an AI is conscious, it is more likely to actually be conscious. By the same token, a system that lacks all those markers can only be conscious if our current theories are very wrong."
www.technologyreview.com/2023/10/16/1081149/ai-consciousness-conundrum
As a word, "lying" encapsulates a huge range of behaviors across a variety of mediums. This article from a professor of philosophy at Wake Forest examines factors that influence the telling of lies, with implications for monitoring our own behavior as well as for recognizing when we should be most likely to question the truthfulness of others.
"An emerging body of empirical research is trying to answer these questions, and some of the findings are surprising. They hold lessons, too - for how to think about the areas of your life where you might be more prone to tell lies, and also about where to be most cautious in trusting what others are saying. ... Out of 1,000 American participants, 59.9% claimed not to have told a single lie in the past 24 hours. Of those who admitted they did lie, most said they’d told very few lies. Participants reported 1,646 lies in total, but half of them came from just 5.3% of the participants. This general pattern in the data has been replicated several times. Lying tends to be rare, except in the case of a small group of frequent liars. ... [I]t might be surprising to find that, say, lying on video chat was more common than lying face-to-face, with lying on email being least likely. A couple of factors could be playing a role. Recordability seems to rein in the lies – perhaps knowing that the communication leaves a record raises worries about detection and makes lying less appealing. Synchronicity seems to matter too. Many lies occur in the heat of the moment, so it makes sense that when there’s a delay in communication, as with email, lying would decrease. ... When it comes to honesty, though, I find the results, in general, promising. Lying seems to happen rarely for many people, even toward strangers and even via social media and texting. Where people need to be especially discerning, though, is in identifying – and avoiding – the small number of rampant liars out there. If you’re one of them yourself, maybe you never realized that you’re actually in a small minority." theconversation.com/how-often-do-you-lie-deception-researchers-investigate-how-the-recipient-and-the-medium-affect-telling-the-truth-214815 Last summer Princeton bioethicist Peter Singer was part of a commission tasked with defining the border between life and death. The goal was revising state legislative standards dating back to 1980. Ultimately, the effort broke down. In this opinion piece, Singer gives his own answer to this issue, which hinges not on heartbeat or brain death but on the irreversible loss of consciousness: www.washingtonpost.com/opinions/2023/10/17/brain-death-transplant-heartbeat-law/
Some may enjoy this cartoon series featuring various philosophers dressed up for Halloween. www.existentialcomics.com/comic/104
In this piece, Stanford University neurobiologist and MacArthur "genius" grant recipient Robert Sapolsky argues that we are biological machines and that makes free will an illusion: www.nytimes.com/2023/10/16/science/free-will-sapolsky.html
Not for the first time, Sam Bankman-Fried's misdeeds at the failed cryptocurrency exchange FTX have cast an unflattering light on "effective altruism" and the movement's argument that acting to benefit the greatest number of people is the highest ethical calling -- an argument under which moral peccadilloes might be overlooked if they are justified by a greater good. Testifying at Bankman-Fried's trial this week, his former romantic and business partner Caroline Ellison said that "the only moral rule that mattered [to Bankman-Fried] would be maximal utility" and that "[h]e didn't think rules like 'don't lie' or 'don't steal' fit into that framework." www.bloomberg.com/news/articles/2023-10-11/ftx-bribes-dating-diary-falsified-records-ellison-testimony
When is telling the truth also lying? When it is paltering. Paltering is using carefully selected truthful statements to mislead or confuse. Although it's an old word, paltering has come back into the ethics conversation with corporate public relations campaigns that try to give the impression, for example, that a company's products are environmentally friendly when an evaluation of its actual practices does not bear this out. ("Greenwashing" is a kind of paltering.) A professor at the University of Rhode Island has looked at paltering in the fashion industry -- an industry increasingly taken to task for unsustainability, as illustrated by recent protests at NYC and Paris "fashion weeks" -- and at what happens when customers find out the truth. www.uri.edu/news/2023/07/when-telling-the-truth-isnt-the-whole-truth/
This recent piece from MIT Technology Review considers AI weapons in light of evolving technology:
"[I]ntelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. ... Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker’s tool kit. And they’ve quietly become sophisticated enough to raise novel questions—ones that are trickier to answer than the well-covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? ... For a long time, the idea of supporting a human decision by computerized means wasn’t such a controversial prospect. Retired Air Force lieutenant general Jack Shanahan says the radar on the F4 Phantom fighter jet he flew in the 1980s was a decision aid of sorts. It alerted him to the presence of other aircraft, he told me, so that he could figure out what to do about them. But to say that the crew and the radar were coequal accomplices would be a stretch. ... "The Ukrainian army uses a program, GIS Arta, that pairs each known Russian target on the battlefield with the artillery unit that is, according to the algorithm, best placed to shoot at it. A report by The Times, a British newspaper, likened it to Uber’s algorithm for pairing drivers and riders, noting that it significantly reduces the time between the detection of a target and the moment that target finds itself under a barrage of firepower. Before the Ukrainians had GIS Arta, that process took 20 minutes. Now it reportedly takes one. Russia claims to have its own command-and-control system with what it calls artificial intelligence, but it has shared few technical details. ... Like any complex computer, an AI-based tool might glitch in unusual and unpredictable ways; it’s not clear that the human involved will always be able to know when the answers on the screen are right or wrong. In their relentless efficiency, these tools may also not leave enough time and space for humans to determine if what they’re doing is legal. ... "The scholar M.C. Elish has suggested that a human who is placed in this kind of impossible loop could end up serving as what she calls a “moral crumple zone.” In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the “decision” will absorb the blame and protect everyone else along the chain of command from the full impact of accountability. ... The gunsight never pulls the trigger. The chatbot never pushes the button. But each time a machine takes on a new role that reduces the irreducible, we may be stepping a little closer to the moment when the act of killing is altogether more machine than human, when ethics becomes a formula and responsibility becomes little more than an abstraction. If we agree that we don’t want to let the machines take us all the way there, sooner or later we will have to ask ourselves: Where is the line? " www.technologyreview.com/2023/08/16/1077386/war-machines History has shown, repeatedly, that philosophies that extol or justify a particular action often find traction after, not before, people have already take those actions for economic reasons. With Japan's population shrinking and domestic consumption also shrinking, then, it is not particularly surprising that an anti-growth book has become a best seller in Japan. 
This article from the New York Times looks at the philosophy of "degrowth communism" being advocated by the book: www.nytimes.com/2023/08/23/business/kohei-saito-degrowth-communism.html
Last Sunday the New York Times ran a question in an etiquette/relationship column that invites interesting ethical consideration as well: a couple that spent $50,000 cloning a favorite, older dog is breaking up; the woman wants both of the dogs, on the grounds that the original dog was hers and its cloned "offspring" should also be hers; the man thinks it's only fair that they each get a dog. Should it matter that the DNA used for the clone came from a dog that belonged to the woman? Should it matter that most of the money spent on the cloning came from the man? Should it matter that the cloned puppy has a significantly longer life expectancy than the original dog, which is 12 years old? Is it ethical to clone a dog when the shelters are full of dogs waiting to be adopted? Is it really only fair that each partner gets a dog? Is it ethical to spend $50,000 to clone a dog when humans die from a lack of routine medical care? Should it matter that the dogs may prefer to stay together? Feel free to share your thoughts as a comment to this post. (For the original article, see www.nytimes.com/2023/08/30/style/pet-custody-dog-cloning.html.)
Big money in Silicon Valley is being spent on a quest to extend, even indefinitely, the human lifespan. Less thought is being given to what spending more time as an old person might actually be like. This piece, by an 89-year-old former professor of psychology, considers the realities underpinning the ethics of extending old age:
"Biophysicists have calculated that, with maximal improvement in health care, the biological clock for humans must stop between 120-150 years. Biotechnology firms such as Calico, Biosplice and Celgene are putting this to the test by scrambling to extend our normal lifespan as far as they can. However, a basic problem, at least thus far, is that a sustained quality of life has not been extended to keep up with our expanded longevity. As people get older, they are not gaining economic security, maintaining their usual level of independence, extending their social relationships, or avoiding chronic illnesses. For instance, about 85 per cent of older adults in the United States have at least one common chronic illness such as diabetes, heart disease, arthritis or Alzheimer’s. Thus, many routine tasks such as bathing, making the bed, doing errands, shopping, picking up items off the floor, or walking without falling cannot be performed without help. In short, as we live longer we are also unwell for longer. Psychological depression, caused by physical illness plus associated medical expenses, often contributes to even more decline. ... Undesirable, but necessary, medical compromises gradually squeeze the vitality out of a chronically ill person. In most cases, death is not a sudden event at the end of life (except as a legally defined physical state). Rather, it is a long process of progressive functional decline. ... What value is there in existing if the ability to do and experience what you most value becomes unavailable?" psyche.co/ideas/efforts-to-expand-the-lifespan-ignore-what-its-like-to-get-old Philosophy Now (UK) sponsors a question-of-the-month, inviting readers to submit responses (not to exceed 400 words) on a salient question in philosophy. The next question is, "What are the limits of knowledge?" Submissions are due by Oct. 16. For more information -- or to read the published responses to last month's question, "How will humanity end?" which, as one reader points out, "can be thought of in at least two different ways: (1) How will humans die out?, or (2) How will the characteristics that make us human cease to exist? Humanity ends not only if there are no more people, but also if the traits that define us as ‘humans’ disappear" -- see philosophynow.org/issues/157/How_Will_Humanity_End#1
When people change because of dementia or brain damage, for example, should their new wishes be respected, even if they fly in the face of their earlier stated preferences? This article from The New York Times Magazine is a case study, a cautionary tale, and a philosophical thought experiment all rolled into one: www.nytimes.com/2023/05/09/magazine/dementia-mother.html
"In the philosophical literature on dementia, scholars speak of a contest between the “then-self” before the disease and the “now-self” after it: between how a person with dementia seems to want to live and how she previously said she would have wanted to live. Many academic papers on the question begin in the same way: by telling the story of a woman named Margo, who was the subject of a 1991 article in The Journal of the American Medical Association (JAMA), by a physician named Andrew Firlik. Margo, according to the article, was 55 and had early-onset Alzheimer’s disease and couldn’t recognize anyone around her, but she was very happy. She spent her days painting and listening to music. She read mystery novels too: often the same book day after day, the mystery remaining mysterious because she would forget it. “Despite her illness, or maybe somehow because of it,” Firlik wrote, “Margo is undeniably one of the happiest people I have known.” A couple of years after the JAMA article was published, the philosopher and constitutional jurist Ronald Dworkin revisited the happy Margo in his 1993 book, “Life’s Dominion.” Imagine, he asked readers, that years ago, when she was fully competent, Margo had written a formal document explaining that if she ever developed Alzheimer’s disease, she should not be given lifesaving medical treatment. “Or even that in that event she should be killed as soon and as painlessly as possible?” What was an ethical doctor to do? Should he kill now-Margo, even though she was happy, because then-Margo would have wanted to be dead?" When confronted with the famous trolley problem -- pulling a lever to save, say, five people by redirecting the trolley to run over one person instead of five -- most people say pulling the lever is the right thing to do. Of course, most people assume they will not be the one person the trolley runs over. In China, the government recently pulled the trolley lever, redirecting flood waters to save Beijing and Tianjin (total population: about 36 million) but destroying homes and businesses and forcing the evacuation of about 1 million people in low-lying communities in nearby Hebei province. Not surprisingly, those 1 million people were unhappy about this. China's dilemma is likely to become more common. Many governments are quietly moving from strategies to prevent climate change to strategies that might mitigate the effects and help populations adapt to climate realities. In extreme circumstances, these strategies may require decisions akin to the trolley problem. Which communities are saved? Which communities are sacrificed? www.nytimes.com/2023/08/04/world/asia/china-flood-beijing-rain.html
Few of us have first-hand experience with the many issues shaping our lives. Instead, we rely on trusted second- and third-hand sources for information and interpretation. Comparing and using various chatbot functions highlights important epistemological issues, including (a) how what we "know" depends on the information inputs we have been trained on and (b) how technology is shaping our information input. Two New York Times reporters talked to chatbots designed in the U.S. and in China, in Chinese, on issues ranging from Tiananmen Square and Ukraine to trivia and Chinese rap and found the differences revealing: www.nytimes.com/2023/07/14/business/baidu-ernie-openai-chatgpt-chinese.html
PLATO is offering a series of six-week virtual philosophy courses for high school students (only) in 2023-24: "Climate Justice" in the fall; "Truth, Opinion, and Misinformation" in the winter; and "AI, Technology, and Ethics" in the spring. The classes are free, but to be considered, students must submit applications by Aug. 31. https://www.plato-philosophy.org/high-school-students/
The New York Times runs an ethics column each week in the Sunday magazine in which New York University philosophy professor Kwame Anthony Appiah responds to readers' ethics questions. Two of the questions from last Sunday's magazine dealt with the ethics of allocating and reselling a scarce commodity, Taylor Swift tickets :-). www.nytimes.com/2023/06/30/magazine/taylor-swift-eras-tour-tickets-ethics.html and www.nytimes.com/2023/07/05/magazine/concert-ticket-resale-ethics.html
Have a question for Jesus but maybe feel like you need a direct line? You can ask AI Jesus on Twitch. Ask_Jesus describes itself as "an experimental channel allowing viewers to ask questions to an AI trained after Jesus and the teachings of the bible. Whether you're seeking spiritual guidance, looking for a friend, or simply want someone to talk to, you can join on the journey through life and discover the power of faith, hope, and love." AI Jesus treats all questions (no matter how patently silly) with nonjudgmental calm and thoughtfulness, which some users find comforting, even inspiring. Not surprisingly, religious leaders are not fans of AI Jesus. Even though Ask_Jesus has been trained on Christian scripture, some researchers also warn that, because artificial intelligence adapts based on interactions with humans, Ask_Jesus (or whatever comes next) could be corrupted over time by its interactions with users, which could take "followers" in a somewhat, or dramatically, different direction. You can check it out for yourself at www.twitch.tv/ask_jesus
Because many social media platforms remove graphic content, often with the help of AI, images of human rights abuses that might otherwise bear witness and aid prosecutions are being deleted without being archived, according to the BBC (UK). www.bbc.com/news/technology-65755517
Less than two weeks after the U.S., Canada, and other countries spent untold time and money trying to rescue tourists attempting to visit the wreck of the Titanic, Virgin Galactic sold 800 tickets to take tourists to the edge of space. Two hundred of these tickets reportedly went for $450,000 each. Now some people are beginning to ask, "Who should pay for rescue efforts when wealthy adventure travelers run into trouble?"
apnews.com/article/titanic-tourist-sub-passengers-cost-ee2a6358b36e48326b3977090fd9311b