Learning to lie is considered an important milestone in child development. Voluntarily restricting one's own lying is considered an important milestone in moral development. Both milestones make the recent news about OpenAI's new GPT-4 lying to trick a human into completing a task it could not complete itself -- a task designed to block a machine from proceeding -- rich fodder for philosophical discussion. Is this a sign of increasing machine intelligence (and is that good or bad)? How does one embed moral code (and whose moral code) in machine learning? Should this line of experimentation proceed -- and is it even realistic to suggest a halt at this point? www.iflscience.com/gpt-4-hires-and-manipulates-human-into-passing-captcha-test-68016
This article from The Atlantic argues that policing language not only strips language of its emotional power; it also lets institutions imagine they have done people a service by changing words instead of taking the action necessary to change lives:
"Equity-language guides are proliferating among some of the country’s leading institutions, particularly nonprofits. The American Cancer Society has one. So do the American Heart Association, the American Psychological Association, the American Medical Association, the National Recreation and Park Association, the Columbia University School of Professional Studies, and the University of Washington. The words these guides recommend or reject are sometimes exactly the same, justified in nearly identical language. This is because most of the guides draw on the same sources from activist organizations: A Progressive’s Style Guide, the Racial Equity Tools glossary, and a couple of others. The guides also cite one another. The total number of people behind this project of linguistic purification is relatively small, but their power is potentially immense. The new language might not stick in broad swaths of American society, but it already influences highly educated precincts, spreading from the authorities that establish it and the organizations that adopt it to mainstream publications, such as this one. Although the guides refer to language “evolving,” these changes are a revolution from above. They haven’t emerged organically from the shifting linguistic habits of large numbers of people. They are handed down in communiqués written by obscure “experts” who purport to speak for vaguely defined “communities,” remaining unanswerable to a public that’s being morally coerced. A new term wins an argument without having to debate. ... "The whole tendency of equity language is to blur the contours of hard, often unpleasant facts. This aversion to reality is its main appeal. Once you acquire the vocabulary, it’s actually easier to say people with limited financial resources than the poor. The first rolls off your tongue without interruption, leaves no aftertaste, arouses no emotion. The second is rudely blunt and bitter, and it might make someone angry or sad. 
Imprecise language is less likely to offend. Good writing—vivid imagery, strong statements—will hurt, because it’s bound to convey painful truths. ... "The battle against euphemism and cliché is long-standing and, mostly, a losing one. What’s new and perhaps more threatening about equity language is the special kind of pressure it brings to bear. The conformity it demands isn’t just bureaucratic; it’s moral. But assembling preapproved phrases from a handbook into sentences that sound like an algorithmic catechism has no moral value. Moral language comes from the struggle of an individual mind to absorb and convey the truth as faithfully as possible. ... "The rationale for equity-language guides is hard to fault. They seek a world without oppression and injustice. Because achieving this goal is beyond anyone’s power, they turn to what can be controlled and try to purge language until it leaves no one out and can’t harm those who already suffer. ... This huge expense of energy to purify language reveals a weakened belief in more material forms of progress. If we don’t know how to end racism, we can at least call it structural. The guides want to make the ugliness of our society disappear by linguistic fiat. ... The project of the guides is utopian, but they’re a symptom of deep pessimism. They belong to a fractured culture in which symbolic gestures are preferable to concrete actions...." www.theatlantic.com/magazine/archive/2023/04/equity-language-guides-sierra-club-banned-words/673085

In 1950, computer science pioneer Alan Turing proposed a conversation-based test to determine if a machine is intelligent or at least exhibiting intelligent behavior similar to a human's. For decades, absent any declared winner of the Turing test, philosophers have debated whether or not a machine that could pass the Turing test would truly be "intelligent" or just simulating intelligence. 
Recent advances in AI-generated conversation have made this discussion less theoretical, and more ethically murky, because it is increasingly clear -- intelligent or not -- AI bots, which have been trained up on human conversation patterns, can now carry on increasingly sophisticated conversations with humans, to the point of building relationships with, even manipulating, the humans with whom they interact. This astonishing transcript of a recent two-hour conversation between a New York Times reporter and Microsoft's new ChatGPT-powered bot named Sydney is just one case in point: www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
Practice your logical deduction skills with the old-fashioned code-breaking game Bulls and Cows. The game is like Mastermind except there can be no duplicates: www.mathsisfun.com/games/bulls-and-cows.html
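For the curious, the game's scoring rule is simple enough to sketch in a few lines of Python. This is an illustrative sketch, assuming the classic four-digit variant in which neither the secret nor the guess repeats a digit: a "bull" is a correct digit in the correct position, and a "cow" is a correct digit in the wrong position.

```python
def score(secret: str, guess: str) -> tuple[int, int]:
    """Score a Bulls and Cows guess.

    Assumes both strings have distinct digits, as in the classic game.
    Bulls = right digit, right place; cows = right digit, wrong place.
    """
    # Count positions where the two strings agree exactly.
    bulls = sum(s == g for s, g in zip(secret, guess))
    # Shared digits that are not bulls must be cows (no duplicates allowed).
    cows = len(set(secret) & set(guess)) - bulls
    return bulls, cows

# Example: secret 1807, guess 1234 -> one bull (the leading 1), no cows.
print(score("1807", "1234"))  # (1, 0)
```

Note that the no-duplicates rule is what makes the set-intersection shortcut work; Mastermind, which allows repeats, needs a slightly more careful count.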
Do the old have a moral obligation to move out of the way to make room for the young? Or is even suggesting this perpetuating dangerous bias against the elderly and vulnerable? These may not be theoretical questions much longer in rapidly aging societies. A case in point is the traction the provocative statements of Yale economist Yusuke Narita have gotten in Japan, where those 65 and older make up roughly 30% of the population and those 80 and older account for 10% of the population. www.nytimes.com/2023/02/12/world/asia/japan-elderly-mass-suicide.html
Would you want to know which day you were going to die? That's the premise of a new science fiction book The Measure in which people all over the world are delivered a box that contains information about how much longer they have left to live. (Would you open the box?) It's also the premise of the website www.death-clock.org. (Will you click on that link?) Science is still not particularly good at making these estimates, which might give us some psychological wiggle room regardless of what Death Clock comes up with, but what will happen as medical estimates improve? Do you want to know when you will die? If you do, would you want the day or just a range? How might the information change the way you live?
When can a promise be changed in the face of altered circumstances? That is the crux of the issue behind major strikes in France this week over the government's proposal to raise the retirement age by two years, from 62 to 64, by 2030. According to Stanford's Center for Longevity, half of today's 5-year-olds can expect to live to age 100 -- and we are not ready. Economic impacts are among the most obvious, from personal savings to growth-centric economic models to pension policies. For example, when the forerunner of France's pension system was established in the 1940s, life expectancy in France was less than 60; today French life expectancy is 82, and government spending on pensions comes to slightly more than 14% of GDP. But our attitudes towards aging, purpose, caregiving, the elderly, promise-keeping, even longevity itself may all be revisited in the years ahead.
This article explores the ethics of organ sales and invites readers to consider the extent to which humans are just sophisticated machines and, like other sophisticated machines -- say, your car -- occasionally require replacement parts that, perhaps, the marketplace should supply. Iran's organ-matching nonprofit is a case study. www.wired.com/story/kidney-donor-compensation-market
The University of Texas at Austin's Center for Media Engagement houses a substantial collection of case studies in media ethics that might make for interesting discussion or co-op use: mediaengagement.org/vertical/media-ethics/
One of the questions philosophers, and more recently neuroscientists, have struggled with for ages is what it means to have free will -- and whether we have it. This short video interview with a neuroscientist makes the interesting argument that our conscious selves may not have free will but our unconscious minds might: www.youtube.com/watch?v=wha-BQTu3_4
Ethics aside, what, if anything, does philosophy have to do with the collapse of the cryptocurrency exchange FTX? This opinion piece ties Silicon Valley's favorite philanthropic philosophy, "effective altruism," to Sam Bankman-Fried's approach to FTX:
"Effective altruists claim they strive to use reason and evidence to do the most good possible for the most people. Influenced by utilitarian ethics, they’re fond of crunching numbers to determine sometimes counterintuitive ideas for maximizing a philanthropic act’s effects by focusing on “expected value,” which they believe can be calculated by multiplying the value of an outcome by the probability of it occurring. SBF belonged to the “longtermist” sect of effective altruism, which focuses on events that could pose a long-term existential threat to humanity, like pandemics or the rise of runaway artificial intelligence. The reasoning for this focus is that more people will exist in the future than exist today, and thus the potential to do more good for more people is greater. He also adopted one of the movement’s signature strategies for effecting social change called “earning to give,” in which generating high income is more important than what kind of job one takes, because it enables people to give away more money for philanthropy. As a college student, SBF had lunch with William MacAskill, the most prominent intellectual advocate for effective altruism in the world, and then reportedly went into finance, and then crypto, based on the idea that it would allow him to donate more money. ... "In online conversation with the reporter, SBF referred to his bids in the past to appear regulator-friendly as 'just PR,' and he disavowed some of his previous statements about ethics. There’s some ambiguity in this part and other parts of SBF’s exchange with the reporter, but broadly speaking, one can interpret the meaning of his responses in two ways. The first possibility is that he’s confessing that his entire set of ethical commitments — including effective altruism — is a ruse. ... 
The second possibility is that he’s saying that he’s extremely committed to effective altruism, and that he would be willing to do anything — including unsavory things — in order to get to what he saw as the greatest good. ... Remarkably, both scenarios are plausible — and damning. ... While these two scenarios reflect different outlooks on the world, both expose something alarming about effective altruism. It is a belief system that bad faith actors can hijack with tremendous ease and one that can lead true believers to horrifying ends-justify-the-means extremist logic. The core reason that effective altruism is a natural vehicle for bad behavior is that its cardinal demands do not require adherents to shun systems of exploitation or to change them; instead, it incentivizes turbo-charging them. ... The value proposition of this community is to think of morality through the prism of investment, using expected value calculations and cost-effectiveness criteria to funnel as much money as possible toward the endpoint of perceived good causes. It's an outlook that breeds a bizarre blend of elitism, insularity and apathy to root causes of problems. This is a movement that encourages quant-focused intellectual snobbery and a distaste for people who are skeptical of suspending moral intuition and considerations of the real world. ... This is a movement in which promising young people are talked out of pursuing government jobs and talked into lucrative private sector jobs because of the importance of 'earning to give.'" www.msnbc.com/opinion/msnbc-opinion/ftx-sbf-effective-altruism-bankman-fried-rcna59172

Are humans worth it? The human population of earth has doubled in the last 50 years to 8 billion while wildlife populations have declined by 70%. Les Knight argues that humans should voluntarily work towards their own extinction. 
(This is not an isolated idea: humans working toward the extinction of the species have also popped up as a subplot in the last two science fiction books I have read.) www.nytimes.com/2022/11/23/climate/voluntary-human-extinction.html
My "Philosophically Speaking" class is designed for teens, but the PLATO program affiliated with the University of Washington is offering Zoom philosophy classes for children ages 8-12 this winter. For more information, see www.plato-philosophy.org/philosophy-for-children-and-youth/?program=zoom-philosophy-classes
The current issue of Philosophy Now (UK) is on the relationship between philosophy and God, from the perspective of both humanists and theists, and includes this thought-provoking essay on the rationality of believing in a god who may not share your values:
"While ‘faith’ is commonly defined by atheists as ‘belief without evidence’, in practice, someone having faith in someone or something implies more than mere intellectual assent, either with or without evidence. Few Christians, Muslims, or Jews would claim to ‘have faith in’ Satan, despite many believing that something called Satan exists. So ‘having faith (in)’ suggests an endorsement of and commitment to a person, idea, or institution. Similarly, the act of ‘trusting’ goes beyond simple affirmation of existence. The entrustor chooses to live as if the entrusted will not betray them. For the theist, ‘faith’ and ‘trust’ are virtual synonyms. ... Having faith in an untrustworthy person or thing is not so uncommon: people often choose to put their faith in romantic partners who repeatedly let them down. Nor is it unheard of for voters to have faith in politicians commonly acknowledged to be corrupt, even by them. However, in both cases, the morality and rationality of maintaining these faith positions are easily criticised. Religious faith, on the other hand, is often given a free pass. Critiquing the claims made by religions and objecting to portrayals of God are common; but questioning the rationality of having faith in an untrustworthy God even if that God turns out to be real is less common: 'My God might look like a monster – a violent bully who once demanded racial cleansing and who allows great suffering in the world; but if he or she is real, you had better follow him or her' – or so the argument goes. ... Setting aside the fact that many competing groups claim their God punishes those who are not loyal to their specific religion, a person who decides to follow one particular frightening and morally incomprehensible deity still has little reason to trust that this God would not deceive them about, for instance, their salvation. Why would a God, whose values and ambitions are so different from one’s own, be beyond deception? 
More generally, an untrustworthy God provides no basis for assuming any level of divine protection. Just as some theists believe life’s hardships could be blessings in disguise, seemingly good events (even salvation experiences) may in fact be part of an evil God’s plan to inflict meaningless suffering, by giving false hope. And thus the betrayer adds emotional manipulation to an already bad situation. Evaluating the behaviour and personality of others is essential for making reasonable decisions about whom to trust. So having faith in a violent, uncaring or dishonest deity while refusing to tolerate these characteristics in politicians, friends, or romantic partners, involves an unreasonable double standard. Of course, few people have faith in deities who they think lie to them or pointlessly punish them. Nevertheless, many trust in a God who could. When considering the reasonableness of particular faith commitments, we should not simply consider their scientific or logical feasibility: a strong correlation between one’s personal moral values and the divine’s is essential to having a rational theistic commitment." philosophynow.org/issues/152/Faith_and_An_Unreliable_God

A pro-choice Mormon mother of six has a provocatively titled new book out that works to reframe the abortion debate as a men's issue: "'men cause all unwanted pregnancies,' [Gabrielle] Blair asserts. 'An unwanted pregnancy only happens if a man ejaculates irresponsibly—if he deposits his sperm in a vagina when he and his partner are not trying to conceive. It’s not asking a lot for men to avoid this.'" This article from Vogue highlights some of the arguments in Gabrielle Blair's Ejaculate Responsibly: A Whole New Way to Think About Abortion as well as an interview with the author: www.vogue.com/article/ejaculate-responsibly-gabrielle-blair-interview
Elon Musk bought Twitter in part, according to Musk, because of “its potential to be the platform for free speech.” Within hours of the conclusion of the deal, use of the n-word on Twitter jumped nearly 500%, an obvious challenge to moderation rules and to whatever limits, if any, the new Twitter will place on speech. This piece from Philosophy Now (UK) traces "free speech" from John Stuart Mill's On Liberty to the challenges posed by social media, including hate speech and anonymous postings: philosophynow.org/issues/151/Mill_Free_Speech_and_Social_Media
If one-third of a rat's brain is composed of human brain cells, is it still a rat? (Or a ratman?) What if the majority of the rat's brain were composed of human brain cells? What about the whole thing? Should a ratman have the same moral status as a plain rat? Who decides if ratmen should be created in the first place? These are no longer purely theoretical questions: researchers have successfully implanted human brain cells into the brains of baby rats. Over time, the human cells were integrated into the baby rats' brains and eventually comprised about one-third of total brain mass. This article from MIT Technology Review looks at some of the ethical questions arising from this line of experimentation: www.technologyreview.com/2022/10/14/1061611/rats-with-human-brain-cells
Art generated by artificial intelligence is all the rage. But art-generating AI is trained up on images scraped from the internet, usually without permission or attribution. This article from MIT Technology Review looks at the problems AI-generated art is posing for actual artists: www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it
Humans have a notoriously bad track record of trying to intervene "helpfully" in natural environments. Yet today, natural environments need more help than ever before *and* humans have more tools at their disposal to intervene than ever before, from CRISPR gene edits, to sophisticated reproductive technologies, to species relocations. What could go wrong? This article considers some of the ethical questions at the frontier of conservation biology: www.nytimes.com/2022/09/16/opinion/conservation-ethics.html
If you could find out in advance that you were likely to develop a disease for which there is no cure, would you want to know? If this were a commercially available product, how should the information be contextualized for end users who may have little or no scientific background to interpret the results? Who is to blame if the technology gets the diagnosis wrong? These are just some of the questions emerging from advances in genetics and, more recently, artificial intelligence in identifying disease earlier than ever before.
In one case recently in the news, researchers have developed an AI-powered device that has a 90% accuracy rate in identifying Parkinson's disease based on listening to how a patient breathes while sleeping. Accuracy increased to 95% when breathing patterns were analyzed over 12 nights. Early treatment is critical for preventing damage to the brain, yet at present there are no blood tests or other reliable diagnostics to detect early Parkinson's. www.washingtonpost.com/technology/2022/09/02/parkinsons-disease-ai-diagnosis/

The first half of this article looks at the work of 17th-century Ethiopian philosopher Zera Yacob (also transliterated Zära Yaqob), whose work is largely unknown in English even though it preceded, paralleled, and even went beyond that of more famous European Enlightenment-era thinkers. aeon.co/essays/yacob-and-amo-africas-precursors-to-locke-hume-and-kant
This article looks at the Swedish philosophy of lagom, the idea that just enough is just right:
"There comes a point when a thing becomes too much. ... Lagom translates as 'just the right amount.' It means knowing when enough is enough, and trying to find balance and moderation rather than constantly grasping for more. Lagom is that feeling of contentment we all get when we have all that we need to make us comfortable. ... There are two separate strands to lagom. The first is a kind of social awareness that recognizes that what we do affects other people. In this, we might see lagom more as a kind of 'fair use' policy. If you take three cookies from the plate, two other people aren’t going to get one. If you hoard and grab everything you can, elbowing and cursing your way to the front of the line, then at best, that makes you a bit of an ass. At worst, it leaves others in ruin. The second strand, however, is a mental shift that finds contentment in satisfaction. Many of us have internalized the ideas that bigger means better, that a bank balance means status, and that excess means happiness. ... [Lagom is] not simply learning to 'enjoy the simple things,' but also appreciating that sometimes less really is more. Lagom is knowing that enjoying the now of what you have does not mean you need to add more of it. After all, talking to a friend over a coffee is nice. But meeting with ten friends after ten coffees does not make things better." bigthink.com/thinking/swedish-philosophy-lagom-just-enough

Just in time for back to school, MIT Technology Review's podcast "In Machines We Trust" looks at how AI is being used to monitor students (and perhaps parents when they click on their child's homework) outside the classroom, with interesting questions about informed consent, bias, and the trade-offs between privacy and security. The episode "Who Watches AI Watching Students?" is available wherever you get your podcasts.
Following on its work with mouse stem cells, an Israeli biotech company is planning to start work creating embryos from human stem cells. Created without egg or sperm, the embryos are incubated in artificial wombs. The vision of the company is to use these "organized embryo entities" for possible organ and tissue transplant. Scientists can already use stem cells to create some tissues in vitro, but an embryo can make more complex organs, organs that would be resistant to rejection because they would be genetically identical to the intended recipient. Interesting science aside, this project creates a host of philosophical issues, from "what is a human?" and "what is life?" to "what are individuals allowed to do with their own cells?" and "are organs a crop that can be grown and harvested like any other?" www.technologyreview.com/2022/08/04/1056633/startup-wants-copy-you-embryo-organ-harvesting/
Classical philosophy generally focuses on how we are to live our best life. Contemporary philosopher Avram Alpert instead argues that the unrelenting social obsession with "the best" is poison, preventing us from living a good life as individuals and preventing us from acknowledging the contribution of all of the people who toil in obscurity, including those who make society's superstars possible. Alpert argues that the real secret to a good life -- for individuals and for society as a whole -- is figuring out how to value a good-enough life. www.amazon.com/Good-Enough-Life-Avram-Alpert/dp/0691204357