In 1995, David Chalmers, then a newly minted philosophy PhD in his late 20s, published an influential article laying out what he termed the "hard problem" of consciousness: why and how do physical processes in the brain give rise to subjective experience? A few years later, in 1998, neuroscientist Christof Koch, today chief scientist of the Allen Institute for Brain Science in Seattle, bet Chalmers that within 25 years the "hard problem" would be solved and scientists would understand the underpinnings of consciousness. On Friday, at the Association for the Scientific Study of Consciousness meeting in New York City, Koch conceded, and both declared Chalmers the winner of the bet. www.nature.com/articles/d41586-023-02120-8
Morality is in decline. Everyone says so. But is it really? According to a paper published recently in Nature, surveys conducted in the U.S. and 59 other countries show that respondents have been reporting a decline in morality for at least 70 years, a perception shared across political ideologies, races, ages, genders, and education levels. But when researchers analyzed decades of surveys asking people to assess the morality of their contemporaries, those assessments showed no change over time, suggesting that the perception of moral decline is a persistent illusion and, the researchers found, one that can be manipulated. www.nature.com/articles/s41586-023-06137-x
Are the humanities being cast aside just when we need them most? New York Times columnist Maureen Dowd makes this argument:
"Trustees at Marymount University in Virginia voted unanimously in February to phase out majors such as English, history, art, philosophy and sociology. How can students focus on slowly unspooling novels when they have disappeared inside the kinetic world of their phones, lured by wacky videos and filtered FOMO photos? Why should they delve into hermeneutics and epistemology when they can simply exchange flippant, shorthand tweets and texts? In a world where brevity is the soul of social media, what practical use can come from all that voluminous, ponderous reading? ... Strangely enough, the humanities are faltering just at the moment when we’ve never needed them more. Americans are starting to wrestle with colossal and dangerous issues about technology, as A.I. begins to take over the world. ... 'There is no time in our history in which the humanities, philosophy, ethics and art are more urgently necessary than in this time of technology’s triumph,' said Leon Wieseltier, the editor of Liberties, a humanistic journal. 'Because we need to be able to think in nontechnological terms if we’re going to figure out the good and the evil in all the technological innovations. Given society’s craven worship of technology, are we going to trust the engineers and the capitalists to tell us what is right and wrong?'" www.nytimes.com/2023/05/27/opinion/english-humanities-ai.html Until the 1840s, vegetarians were referred to as "Pythagoreans" because, in addition to thinking about right angles, the pre-Socratic Greek philosopher Pythagoras was associated with a school of Greek philosophers that shunned meat eating, in part because Pythagoras believed in the transmigration of the soul, including the migration of the soul between people and animals. www.psychologytoday.com/us/blog/hide-and-seek/202305/how-vegetarianism-was-born-out-of-philosophy-and-mysticism
As more aspects of our lives -- in this case, our books -- are digitized and delivered electronically, it is useful to remember that, legally at least, we do not own that digitized content. We merely license it, which allows the service providing it to change it at will. Just as no one asks you to approve changes to your email interface, you have no say if your digital "friend" is altered, or if the words in your ebooks are. www.nytimes.com/2023/04/04/arts/dahl-christie-stine-kindle-edited.html
Discussion of machine intelligence and consciousness has usually come at the issue from the perspective of an artificial being becoming intelligent or conscious. This piece by two professors at Peking University in Philosophy Now (UK) invokes the famous Ship of Theseus paradox (and the sorites paradox) to approach the issue from the other direction: an intelligent being becoming a machine. As humans adopt more technological enhancements to their biology -- including neural enhancements, integrations, and even replacements -- at some point a human may become, if not a machine, at least more artificial than natural, which presumably would yield a conscious, intelligent artificial being. philosophynow.org/issues/155/Can_Machines_Be_Conscious
According to a law professor at the University of Dayton, growing epistemic pluralism (wide-ranging views on empirical facts) and disagreements over epistemic dependence (who constitutes a trusted source of information) are contributing to polarization in American political life.
"Without the government or an official church telling people what to think, we all have to decide for ourselves – and that inevitably leads to a diversity of moral viewpoints. ... [T]he same is true of beliefs about matters of fact. In the U.S., legal rules and social norms attempt to ensure that the state cannot constrain an individual’s freedom of belief, whether that be about moral values or empirical facts. This intellectual freedom contributes to epistemic pluralism. ... Another contributor to epistemic pluralism is just how specialized human knowledge has become. No one person could hope to acquire the sum total of all knowledge in a single lifetime. This brings us to the second relevant concept: epistemic dependence. Knowledge is almost never acquired firsthand, but transmitted by some trusted source. To take a simple example, how do you know who the first president of the United States was? No one alive today witnessed the first presidential inauguration. You could go to the National Archives and ask to see records, but hardly anyone does that. Instead, Americans learned from an elementary school teacher that George Washington was the first president, and we accept that fact because of the teacher’s epistemic authority. There’s nothing wrong with this; everyone gets most knowledge that way. There’s simply too much knowledge for anyone to verify independently all the facts on which we routinely rely. ... However, this raises a tricky problem: Who has sufficient epistemic authority to qualify as an expert on a particular topic? Much of the erosion of our shared reality in recent years seems to be driven by disagreement about whom to believe." theconversation.com/why-cant-americans-agree-on-well-nearly-anything-philosophy-has-some-answers-193055 If a large language model, like GPT-4, is trained on the writings of a famous philosopher, would you be able to have a conversation with that philosopher? Would that revolutionize philosophy? Or would it vitiate philosophy by inserting AI-generated extrapolations? Here's an "interview" with Rene Descartes, with answers generated by GPT-4: jimmyalfonsolicon.substack.com/p/interviews-with-gtp-philosophers
Today marks the 78th anniversary of Dietrich Bonhoeffer's execution by the Nazis. Bonhoeffer was a young German theologian and moral philosopher who, days after Adolf Hitler's 1933 installation as German chancellor, delivered a radio address warning Germans that the idolatrous worship of the Führer (leader) might instead be a cult of the Verführer (misleader, or seducer). Ten years later, from prison, Bonhoeffer penned a letter reflecting on lessons learned over the prior decade, noting that "memory, the recalling of lessons we have learnt, is also part of responsible living." Included in Bonhoeffer's 1943 "reckoning" is this famous passage:
"Folly is a more dangerous enemy to the good than evil. One can protest against evil; it can be unmasked and, if need be, prevented by force. Evil always carries the seeds of its own destruction, as it make people, at the least, uncomfortable. Against folly we have no defence. Neither protests nor force can touch it; reasoning is no use; facts that contradict personal prejudices can simply be disbelieved, they can just be pushed aside as trivial exceptions. So the fool, as distinct from the scoundrel, is completely self-satisfied; in fact, he can easily become dangerous, as it does not take much to make him aggressive. A fool must therefore be treated more cautiously than a scoundrel; we shall never again try to convince a fool by reason, for it is both useless and dangerous. ... If we look more closely, we see that any violent display of power, whether political or religious, produces an outburst of folly in a large part of mankind; indeed, this seems actually to be a psychological and sociological law: the power of some needs the folly of others. ... The fact that the fool is often stubborn must not mislead us into thinking that he is independent. One feels in fact, when talking to him, that one is dealing, not with the man himself, but with slogans, catchwords, and the like, which have taken hold of him. He is under a spell, he is blinded, his very nature is being misused and exploited. Having this become a passive instrument, the fool will be capable of any evil and at the same time incapable of seeing that it is evil." from "After Ten Years: A Reckoning Made at New Year 1943," in Letters & Papers from Prison (ed. Eberhard Bethge) What is "cognitive liberty"? This interview with Duke University professor Nita Farahany about her new book The Battle for Your Brain lays out her argument that what and how we think should be protected from brain-monitoring technology: www.wwno.org/npr-news/npr-news/2023-03-14/this-law-and-philosophy-professor-warns-neurotechnology-is-also-a-danger-to-privacy
Learning to lie is considered an important milestone in child development. Voluntarily restricting one's own lying is considered an important milestone in moral development. Both make the recent news about OpenAI's new GPT-4 lying to trick a human into completing a task it could not -- a task designed to block a machine from proceeding -- rich fodder for philosophical discussion. Is this a sign of increasing machine intelligence (and is that good or bad)? How does one embed a moral code (and whose moral code?) in machine learning? Should this line of experimentation proceed -- and is it even realistic to suggest a halt at this point? www.iflscience.com/gpt-4-hires-and-manipulates-human-into-passing-captcha-test-68016
This article from The Atlantic argues that policing language not only strips language of its emotional power but also lets institutions imagine they have done people a service by changing words instead of taking the actions necessary to change lives:
"Equity-language guides are proliferating among some of the country’s leading institutions, particularly nonprofits. The American Cancer Society has one. So do the American Heart Association, the American Psychological Association, the American Medical Association, the National Recreation and Park Association, the Columbia University School of Professional Studies, and the University of Washington. The words these guides recommend or reject are sometimes exactly the same, justified in nearly identical language. This is because most of the guides draw on the same sources from activist organizations: A Progressive’s Style Guide, the Racial Equity Tools glossary, and a couple of others. The guides also cite one another. The total number of people behind this project of linguistic purification is relatively small, but their power is potentially immense. The new language might not stick in broad swaths of American society, but it already influences highly educated precincts, spreading from the authorities that establish it and the organizations that adopt it to mainstream publications, such as this one. Although the guides refer to language “evolving,” these changes are a revolution from above. They haven’t emerged organically from the shifting linguistic habits of large numbers of people. They are handed down in communiqués written by obscure “experts” who purport to speak for vaguely defined “communities,” remaining unanswerable to a public that’s being morally coerced. A new term wins an argument without having to debate. ... "The whole tendency of equity language is to blur the contours of hard, often unpleasant facts. This aversion to reality is its main appeal. Once you acquire the vocabulary, it’s actually easier to say people with limited financial resources than the poor. The first rolls off your tongue without interruption, leaves no aftertaste, arouses no emotion. The second is rudely blunt and bitter, and it might make someone angry or sad. Imprecise language is less likely to offend. Good writing—vivid imagery, strong statements—will hurt, because it’s bound to convey painful truths. ... "The battle against euphemism and cliché is long-standing and, mostly, a losing one. What’s new and perhaps more threatening about equity language is the special kind of pressure it brings to bear. The conformity it demands isn’t just bureaucratic; it’s moral. But assembling preapproved phrases from a handbook into sentences that sound like an algorithmic catechism has no moral value. Moral language comes from the struggle of an individual mind to absorb and convey the truth as faithfully as possible. ... "The rationale for equity-language guides is hard to fault. They seek a world without oppression and injustice. Because achieving this goal is beyond anyone’s power, they turn to what can be controlled and try to purge language until it leaves no one out and can’t harm those who already suffer. ... This huge expense of energy to purify language reveals a weakened belief in more material forms of progress. If we don’t know how to end racism, we can at least call it structural. The guides want to make the ugliness of our society disappear by linguistic fiat. ... The project of the guides is utopian, but they’re a symptom of deep pessimism. They belong to a fractured culture in which symbolic gestures are preferable to concrete actions...." 
www.theatlantic.com/magazine/archive/2023/04/equity-language-guides-sierra-club-banned-words/673085

In 1950, computer science pioneer Alan Turing proposed a conversation-based test to determine whether a machine is intelligent, or at least exhibits intelligent behavior similar to a human's. For decades, absent any declared winner of the Turing test, philosophers have debated whether a machine that could pass it would truly be "intelligent" or merely simulating intelligence. Recent advances in AI-generated conversation have made this discussion less theoretical, and more ethically murky, because it is increasingly clear that -- intelligent or not -- AI bots trained on human conversation patterns can now carry on remarkably sophisticated conversations with humans, to the point of building relationships with, even manipulating, the humans with whom they interact. This astonishing transcript of a recent two-hour conversation between a New York Times reporter and Microsoft's new ChatGPT-powered Bing chatbot, which revealed its codename, Sydney, is just one case in point: www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
Practice your logical deduction skills with the old-fashioned code-breaking game Bulls and Cows. The game is like Mastermind except there can be no duplicates: www.mathsisfun.com/games/bulls-and-cows.html
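For readers who would rather tinker with the logic than play online, here is a minimal sketch of the scoring rule in Python; the function name and the sample digits are my own invention, not taken from the linked site, and it assumes a secret with no repeated digits:

    def score(secret: str, guess: str) -> tuple[int, int]:
        # A "bull" is a correct digit in the correct position.
        bulls = sum(s == g for s, g in zip(secret, guess))
        # A "cow" is a shared digit in the wrong position; since no
        # digit repeats, set intersection counts the shared digits.
        cows = len(set(secret) & set(guess)) - bulls
        return bulls, cows

    print(score("5423", "5234"))  # (1, 3): one bull (the leading 5), three cows

Wrapping this in a loop that reads guesses until it returns four bulls gives you the whole game.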
Do the old have a moral obligation to move out of the way to make room for the young? Or does even suggesting this perpetuate dangerous bias against the elderly and vulnerable? These may not be theoretical questions much longer in rapidly aging societies. A case in point is the traction that provocative statements by Yale economist Yusuke Narita have gotten in Japan, where those 65 and older make up roughly 30% of the population and those 80 and older account for 10% of the population. www.nytimes.com/2023/02/12/world/asia/japan-elderly-mass-suicide.html
Would you want to know which day you were going to die? That's the premise of a new science fiction book, The Measure, in which people all over the world are delivered a box that contains information about how much longer they have left to live. (Would you open the box?) It's also the premise of the website www.death-clock.org. (Will you click on that link?) Science is still not particularly good at making these estimates, which might give us some psychological wiggle room regardless of what Death Clock comes up with, but what will happen as medical estimates improve? Do you want to know when you will die? If you do, would you want the exact day or just a range? How might the information change the way you live?
When can a promise be changed in the face of altered circumstances? That is the crux of the issue behind major strikes in France this week over the government's proposal to raise the retirement age by two years, from 62 to 64, by 2030. According to the Stanford Center on Longevity, half of today's 5-year-olds can expect to live to age 100 -- and, the center argues, we are not ready. Economic impacts are among the most obvious, from personal savings to growth-centric economic models to pension policies. For example, when the forerunner of France's pension system was established in the 1940s, life expectancy in France was less than 60; today it is 82, and government spending on pensions comes to slightly more than 14% of GDP. But our attitudes towards aging, purpose, caregiving, the elderly, promise-keeping, even longevity itself may all be revisited in the years ahead.
This article explores the ethics of organ sales and invites readers to consider the extent to which humans are just sophisticated machines and, like other sophisticated machines -- say, your car -- occasionally require replacement parts that, perhaps, the marketplace should supply. Iran's organ-matching nonprofit is a case study. www.wired.com/story/kidney-donor-compensation-market
The University of Texas at Austin's Center for Media Engagement houses a substantial collection of case studies in media ethics that might make for interesting discussion or co-op use: mediaengagement.org/vertical/media-ethics/
One of the questions philosophers, and more recently neuroscientists, have struggled with for ages is what it means to have free will, and whether we have it. This short video interview with a neuroscientist makes the interesting argument that our conscious selves may not have free will but our unconscious minds might: www.youtube.com/watch?v=wha-BQTu3_4
Ethics aside, what, if anything, does philosophy have to do with the collapse of the cryptocurrency exchange FTX? This opinion piece ties Silicon Valley's favorite philanthropic philosophy, "effective altruism," to Sam Bankman-Fried's approach to FTX:
"Effective altruists claim they strive to use reason and evidence to do the most good possible for the most people. Influenced by utilitarian ethics, they’re fond of crunching numbers to determine sometimes counterintuitive ideas for maximizing a philanthropic act’s effects by focusing on “expected value,” which they believe can be calculated by multiplying the value of an outcome by the probability of it occurring.SBF belonged to the “longtermist” sect of effective altruism, which focuses on events that could pose a long-term existential threat to humanity, like pandemics or the rise of runaway artificial intelligence. The reasoning for this focus is that more people will exist in the future than exist today, and thus the potential to do more good for more people is greater. He also adopted one of the movement’s signature strategies for effecting social change called “earning to give,” in which generating high income is more important than what kind of job one takes, because it enables people to give away more money for philanthropy. As a college student, SBF had lunch with William MacAskill, the most prominent intellectual advocate for effective altruism in the world, and then reportedly went into finance, and then crypto, based on the idea that it would allow him to donate more money. ... "In online conversation with the reporter, SBF referred to his bids in the past to appear regulator-friendly as 'just PR,' and he disavowed some of his previous statements about ethics. There’s some ambiguity in this part and other parts of SBF’s exchange with the reporter, but broadly speaking, one can interpret the meaning of his responses in two ways. The first possibility is that he’s confessing that his entire set of ethical commitments — including effective altruism — is a ruse. ... The second possibility is that he’s saying that he’s extremely committed to effective altruism, and that he would be willing to do anything — including unsavory things — in order to get to what he saw as the greatest good. ... Remarkably, both scenarios are plausible — and damning. ... While these two scenarios reflect different outlooks on the world, both expose something alarming about effective altruism. It is a belief system that bad faith actors can hijack with tremendous ease and one that can lead true believers to horrifying ends-justify-the-means extremist logic. The core reason that effective altruism is a natural vehicle for bad behavior is that its cardinal demands do not require adherents to shun systems of exploitation or to change them; instead, it incentivizes turbo-charging them. ... The value proposition of this community is to think of morality through the prism of investment, using expected value calculations and cost-effectiveness criteria to funnel as much money as possible toward the endpoint of perceived good causes. It's an outlook that breeds a bizarre blend of elitism, insularity and apathy to root causes of problems. This is a movement that encourages quant-focused intellectual snobbery and a distaste for people who are skeptical of suspending moral intuition and considerations of the real world. ... This is a movement in which promising young people are talked out of pursuing government jobs and talked into lucrative private sector jobs because of the importance of 'earning to give.'" www.msnbc.com/opinion/msnbc-opinion/ftx-sbf-effective-altruism-bankman-fried-rcna59172 Are humans worth it? 
Are humans worth it? The human population of Earth has doubled in the last 50 years to 8 billion, while wildlife populations have declined by 70%. Les Knight argues that humans should voluntarily work towards their own extinction. (This is not an isolated idea: humans working toward the extinction of the species have also popped up as a subplot in the last two science fiction books I have read.) www.nytimes.com/2022/11/23/climate/voluntary-human-extinction.html
My "Philosophically Speaking" class is designed for teens, but the PLATO program affiliated with the University of Washington is offering Zoom philosophy classes for children ages 8-12 this winter. For more information, see www.plato-philosophy.org/philosophy-for-children-and-youth/?program=zoom-philosophy-classes
The current issue of Philosophy Now (UK) is on the relationship between philosophy and God, from the perspective of both humanists and theists, and includes this thought-provoking essay on the rationality of believing in a god who may not share your values:
"While ‘faith’ is commonly defined by atheists as ‘belief without evidence’, in practice, someone having faith in someone or something implies more than mere intellectual assent, either with or without evidence. Few Christians, Muslims, or Jews would claim to ‘have faith in’ Satan, despite many believing that something called Satan exists. So ‘having faith (in)’ suggests an endorsement of and commitment to a person, idea, or institution. Similarly, the act of ‘trusting’ goes beyond simple affirmation of existence. The entrustor chooses to live as if the entrusted will not betray them. For the theist, ‘faith’ and ‘trust’ are virtual synonyms. ... Having faith in an untrustworthy person or thing is not so uncommon: people often choose to put their faith in romantic partners who repeatedly let them down. Nor is it unheard of for voters to have faith in politicians commonly acknowledged to be corrupt, even by them. However, in both cases, the morality and rationality of maintaining these faith positions are easily criticised. Religious faith, on the other hand, is often given a free pass. Critiquing the claims made by religions and objecting to portrayals of God are common; but questioning the rationality of having faith in an untrustworthy God even if that God turns out to be real is less common: 'My God might look like a monster – a violent bully who once demanded racial cleansing and who allows great suffering in the world; but if he or she is real, you had better follow him or her' – or so the argument goes. ... Setting aside the fact that many competing groups claim their God punishes those who are not loyal to their specific religion, a person who decides to follow one particular frightening and morally incomprehensible deity still has little reason to trust that this God would not deceive them about, for instance, their salvation. Why would a God, whose values and ambitions are so different from one’s own, be beyond deception? More generally, an untrustworthy God provides no basis for assuming any level of divine protection. Just as some theists believe life’s hardships could be blessings in disguise, seemingly good events (even salvation experiences) may in fact be part of an evil God’s plan to inflict meaningless suffering, by giving false hope. And thus the betrayer adds emotional manipulation to an already bad situation. Evaluating the behaviour and personality of others is essential for making reasonable decisions about whom to trust. So having faith in a violent, uncaring or dishonest deity while refusing to tolerate these characteristics in politicians, friends, or romantic partners, involves an unreasonable double standard. Of course, few people have faith in deities who they think lie to them or pointlessly punish them. Nevertheless, many trust in a God who could. When considering the reasonableness of particular faith commitments, we should not simply consider their scientific or logical feasibility: a strong correlation between one’s personal moral values and the divine’s is essential to having a rational theistic commitment." philosophynow.org/issues/152/Faith_and_An_Unreliable_God A pro-choice Mormon mother of six has a provocatively titled new book out that works to reframe the abortion debate as a men's issue: "'men cause all unwanted pregnancies,' [Gabrielle] Blair asserts. 'An unwanted pregnancy only happens if a man ejaculates irresponsibly—if he deposits his sperm in a vagina when he and his partner are not trying to conceive. 
It’s not asking a lot for men to avoid this.'" This article from Vogue highlights some of the arguments in Gabrielle Blair's Ejaculate Responsibly: A Whole New Way to Think About Abortion as well as an interview with the author: www.vogue.com/article/ejaculate-responsibly-gabrielle-blair-interview