User data suggests we are asking ChatGPT and other generative AI bots ever more questions related to both our work and our personal lives. But it turns out, according to the authors of a recent study published in Scientific Reports, ChatGPT not only "influence[s] users’ moral judgment," it "corrupts rather than improves its users’ moral judgment," in part by providing inconsistent moral advice. Just as many users now defer to Google Maps' judgment over their own, users of ChatGPT, according to the study, "readily follow moral advice from bots even if there are red flags warning them against it" and "underestimate how much they are influenced" by the bots. www.nature.com/articles/s41598-023-31341-0
For a generative AI to respond to a question requires computing power, which requires cooling, which, at present, requires water. A lot of it: an estimated liter of water for every 10-100 questions a generative AI answers. Microsoft and Google both reported their water use in 2022 increased by at least 20%. This article from the Associated Press highlights Iowa's role in the development of GPT-4: apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4
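For a rough sense of scale, here is a minimal back-of-envelope sketch in code. The per-question figure comes from the estimate above; the daily question volume is a purely hypothetical number chosen for illustration, not a reported figure.
```python
# Back-of-envelope estimate of cooling-water use, based on the rough figure
# of one liter of water per 10-100 questions answered.
# The daily question volume below is hypothetical, for illustration only.

def water_use_liters(questions: int, questions_per_liter: float) -> float:
    """Liters of cooling water consumed to answer a given number of questions."""
    return questions / questions_per_liter

daily_questions = 10_000_000  # hypothetical daily volume, NOT a reported figure
low_estimate = water_use_liters(daily_questions, 100)   # 1 liter per 100 questions
high_estimate = water_use_liters(daily_questions, 10)   # 1 liter per 10 questions
print(f"Estimated daily water use: {low_estimate:,.0f} to {high_estimate:,.0f} liters")
```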
According to this article from MIT Technology Review, philosopher David Chalmers, famous for the "hard problem" of identifying the essence of consciousness, gives artificial intelligence better than one chance in five of developing consciousness in the next 10 years. "AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. ... The cerebellum, a brain region at the base of the skull that resembles a fist-size tangle of angel-hair pasta, appears to play no role in conscious experience, though it is crucial for subconscious motor tasks like riding a bike; on the other hand, feedback connections—for example, connections running from the “higher,” cognitive regions of the brain to those involved in more basic sensory processing—seem essential to consciousness. (This, by the way, is one good reason to doubt the consciousness of LLMs: they lack substantial feedback connections.) ... Every expert has a preferred theory of consciousness, but none treats it as ideology—all of them are eternally alert to the possibility that they have backed the wrong horse. In the past five years, consciousness scientists have started working together on a series of “adversarial collaborations,” in which supporters of different theories come together to design neuroscience experiments that could help test them against each other. The researchers agree ahead of time on which patterns of results will support which theory. Then they run the experiments and see what happens. ... In effect, this strategy recognizes that the major theories of consciousness have some chance of turning out to be true—and so if more theories agree that an AI is conscious, it is more likely to actually be conscious. By the same token, a system that lacks all those markers can only be conscious if our current theories are very wrong."
www.technologyreview.com/2023/10/16/1081149/ai-consciousness-conundrum
Content developers are figuring out how to “poison” generative AI models that scrape their content from the internet without permission. This article from MIT Technology Review, for example, discusses a software program artists can run their images through before uploading them to the web. The software embeds invisible pixels in the images that act as a poison pill, causing AI tools that “digest” them to malfunction. The developer “admits there is a risk that people might abuse the data poisoning technique for malicious uses. However, he says attackers would need thousands of poisoned samples to inflict real damage on larger, more powerful models, as they are trained on billions of data samples. ‘We don’t yet know of robust defenses against these attacks. We haven’t yet seen poisoning attacks on modern [machine learning] models in the wild, but it could be just a matter of time,’ says Vitaly Shmatikov, a professor at Cornell University who studies AI model security and was not involved in the research.” https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai
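The article does not spell out exactly how the tool works, so as a purely conceptual sketch (not the actual technique), here is what altering an image's pixels before upload might look like, assuming NumPy and Pillow; the random, low-amplitude noise here is an illustrative stand-in for the far more targeted perturbations a real poisoning tool would craft.
```python
# Conceptual sketch only -- NOT the tool described above. Real data-poisoning
# attacks craft carefully targeted, nearly imperceptible perturbations; this toy
# example just shows the general idea of altering pixels before an image is uploaded.
import numpy as np
from PIL import Image

def add_small_perturbation(path_in: str, path_out: str, amplitude: int = 2) -> None:
    """Add low-amplitude random noise to an image's pixels (illustrative only)."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-amplitude, amplitude + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Example usage (hypothetical filenames):
# add_small_perturbation("artwork.png", "artwork_protected.png")
```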
The northern Japanese island of Hokkaido is home to both black bears and bigger, more assertive brown bears known as Ussuri bears. With declining birthrates and more people moving to cities, rural areas of Hokkaido are increasingly sparsely populated, and with fewer people, bears are moving in. To deter bears from coming into populated areas, the community of Takikawa is supplementing fencing and dogs with giant robot wolves. geographical.co.uk/news/robot-wolf-keeps-bears-away
Are the humanities being cast aside just when we need them most? New York Times columnist Maureen Dowd makes this argument:
"Trustees at Marymount University in Virginia voted unanimously in February to phase out majors such as English, history, art, philosophy and sociology. How can students focus on slowly unspooling novels when they have disappeared inside the kinetic world of their phones, lured by wacky videos and filtered FOMO photos? Why should they delve into hermeneutics and epistemology when they can simply exchange flippant, shorthand tweets and texts? In a world where brevity is the soul of social media, what practical use can come from all that voluminous, ponderous reading? ... Strangely enough, the humanities are faltering just at the moment when we’ve never needed them more. Americans are starting to wrestle with colossal and dangerous issues about technology, as A.I. begins to take over the world. ... 'There is no time in our history in which the humanities, philosophy, ethics and art are more urgently necessary than in this time of technology’s triumph,' said Leon Wieseltier, the editor of Liberties, a humanistic journal. 'Because we need to be able to think in nontechnological terms if we’re going to figure out the good and the evil in all the technological innovations. Given society’s craven worship of technology, are we going to trust the engineers and the capitalists to tell us what is right and wrong?'" www.nytimes.com/2023/05/27/opinion/english-humanities-ai.html As more aspects of our lives -- and in this case our books -- are digitized and made available electronically, it is useful to remember that, legally at least, we do not own that digitized content. We are merely licensing it, which allows the service that provides it to change it at will. Just as no one will ask you to approve changes to your email interface, you have no say if your digital "friend" is altered or the words in your ebooks. www.nytimes.com/2023/04/04/arts/dahl-christie-stine-kindle-edited.html
Previously, discussion of machine intelligence/consciousness has come at the issue from the perspective of an artificial being becoming intelligent/conscious. This piece by two professors at Peking University in Philosophy Now (UK) invokes the famous Ship of Theseus paradox (and the sorites paradox) to approach the issue from the other direction: an intelligent being becoming a machine. As humans adopt more technological enhancements to their biology -- including neural enhancements, integrations, and even replacements -- at some point a human may become, if not a machine, at least more artificial than natural, which presumably would yield a conscious, intelligent artificial being. philosophynow.org/issues/155/Can_Machines_Be_Conscious
If a large language model, like GPT-4, is trained on the writings of a famous philosopher, would you be able to have a conversation with that philosopher? Would that revolutionize philosophy? Or would it vitiate philosophy by inserting AI-generated extrapolations? Here's an "interview" with Rene Descartes, with answers generated by GPT-4: jimmyalfonsolicon.substack.com/p/interviews-with-gtp-philosophers
The rise of humans has unfolded in a very specific niche of physical geography -- a "just right" combination of temperatures, precipitation, continental positions, atmospheric chemistry, and existing organisms. Could we take this show on the road even if we wanted to? This piece from Aeon argues that humans will not be able to live off Earth for sustained periods of time because of differences in the underlying physical geography, including biogeography.
"Given all our technological advances, it’s tempting to believe we are approaching an age of interplanetary colonisation. But can we really leave Earth and all our worries behind? No. ... What Earth-like means in astronomy textbooks and what it means to someone considering their survival prospects on a distant world are two vastly different things. We don’t just need a planet roughly the same size and temperature as Earth; we need a planet that spent billions of years evolving with us. We depend completely on the billions of other living organisms that make up Earth’s biosphere. Without them, we cannot survive. ... In fact, we would have been unable to survive on Earth for around 90 per cent of its history; the oxygen-rich atmosphere that we depend on is a recent feature of our planet. ... The only reason we find Earth habitable now is because of the vast and diverse biosphere that has for hundreds of millions of years evolved with and shaped our planet into the home we know today. ... We are complex lifeforms with complex needs. We are entirely dependent on other organisms for all our food and the very air we breathe. ... The only reason we find Earth habitable now is because of the vast and diverse biosphere that has for hundreds of millions of years evolved with and shaped our planet into the home we know today. Our continued survival depends on the continuation of Earth’s present state without any nasty bumps along the way. We are complex lifeforms with complex needs. We are entirely dependent on other organisms for all our food and the very air we breathe." aeon.co/essays/we-will-never-be-able-to-live-on-another-planet-heres-why What is "cognitive liberty"? This interview with Duke University professor Nita Farahany about her new book The Battle for Your Brain lays out her argument that what and how we think should be protected from brain-monitoring technology: www.wwno.org/npr-news/npr-news/2023-03-14/this-law-and-philosophy-professor-warns-neurotechnology-is-also-a-danger-to-privacy
Learning to lie is considered an important milestone in child development. Voluntarily restricting one's own lying is considered an important milestone in moral development. Both of those make the recent news about OpenAI's new GPT-4 lying to trick a human into completing a task it could not -- a task designed to block a machine from proceeding -- rich fodder for philosophical discussion. Is this a sign of increasing machine intelligence (and is that good or bad)? How does one embed moral code (and whose moral code) in machine learning? Should this line of experimentation proceed -- and is it even realistic to suggest a halt at this point? www.iflscience.com/gpt-4-hires-and-manipulates-human-into-passing-captcha-test-68016
In 1950, computer science pioneer Alan Turing proposed a conversation-based test to determine if a machine is intelligent, or at least exhibits intelligent behavior similar to a human's. For decades, absent any declared winner of the Turing test, philosophers have debated whether a machine that could pass the Turing test would truly be "intelligent" or would just be simulating intelligence. Recent advances in AI-generated conversation have made this discussion less theoretical, and more ethically murky, because it is increasingly clear that -- intelligent or not -- AI bots trained on human conversation patterns can now carry on increasingly sophisticated conversations with humans, to the point of building relationships with, and even manipulating, the people they talk to. This astonishing transcript of a recent two-hour conversation between a New York Times reporter and Microsoft's new AI-powered Bing chatbot, which calls itself Sydney, is just one case in point: www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
Would you want to know which day you were going to die? That's the premise of a new science fiction book The Measure in which people all over the world are delivered a box that contains information about how much longer they have left to live. (Would you open the box?) It's also the premise of the website www.death-clock.org. (Will you click on that link?) Science is still not particularly good at making these estimates, which might give us some psychological wiggle room regardless of what Death Clock comes up with, but what will happen as medical estimates improve? Do you want to know when you will die? If you do, would you want the day or just a range? How might the information change the way you live?
Are humans worth it? The human population of Earth has doubled in the last 50 years to 8 billion, while wildlife populations have declined 70%. Les Knight argues that humans should voluntarily work towards their own extinction. (This is not an isolated idea: humans working toward their own species' extinction appear as a subplot in the last two science fiction books I have read.) www.nytimes.com/2022/11/23/climate/voluntary-human-extinction.html
Humans have a notoriously bad track record of trying to intervene "helpfully" in natural environments. Yet today, natural environments need more help than ever before *and* humans have more tools at their disposal to intervene than ever before, from CRISPR gene edits, to sophisticated reproductive technologies, to species relocations. What could go wrong? This article considers some of the ethical questions at the frontier of conservation biology: www.nytimes.com/2022/09/16/opinion/conservation-ethics.html
The summer's drought and war have focused attention on agricultural vulnerabilities and food insecurity. This article from The New York Times profiles the work of crop scientists who are tinkering with plant genomes to improve harvests: www.nytimes.com/2022/08/18/climate/gmo-food-soybean-photosynthesis.html
Following on its work with mouse stem cells, an Israeli biotech company is planning to start creating embryos from human stem cells. Created without egg or sperm, the embryos are incubated in artificial wombs. The company's vision is to use these "organized embryo entities" as a possible source of organs and tissues for transplant. Scientists can already use stem cells to create some tissues in vitro, but an embryo can make more complex organs, organs that would be resistant to rejection because they would be genetically identical to the intended recipient. The interesting science aside, this project raises a host of philosophical issues, from "what is a human?" and "what is life?" to "what are individuals allowed to do with their own cells?" and "are organs a crop that can be grown and harvested like any other?" www.technologyreview.com/2022/08/04/1056633/startup-wants-copy-you-embryo-organ-harvesting/
Reuters (UK) is reporting that Amazon is working on technology that would allow Alexa to mimic anyone's voice based on an audio sample of a minute or less. This technology is being pitched as a way to capture loved ones' voices but, like deep-fake videos, it raises epistemological issues ("what do we actually know when our usual sensory input can be deceived?") as well as privacy issues concerning ownership of our images and voices. www.reuters.com/technology/amazon-has-plan-make-alexa-mimic-anyones-voice-2022-06-22
When lives are on the line, who should be making decisions: artificial intelligence, with its lightning-fast ability to weigh options, or humans? This question is ever-less theoretical, with AI being built into health care systems, criminal justice systems, and, increasingly, weapons systems. The Pentagon's Defense Advanced Research Projects Agency (DARPA), for example, recently launched its "In the Moment" program, designed to develop defense technology that pairs AI with expert systems to "build trusted algorithmic decision-makers for mission-critical Department of Defense (DoD) operations." www.washingtonpost.com/technology/2022/03/29/darpa-artificial-intelligence-battlefield-medical-decisions/ (Quote from www.darpa.mil/news-events/2022-03-03.)
Because I teach science fiction too, I always encourage my philosophy students to consider the darker applications of philosophical thought experiments -- like the brain in the vat -- as well. This proposed alternative to capital punishment is a bold, if creepy, application of philosophy's brain in the vat.
"Many people born into liberal democracies find corporal or capital punishment distasteful. We live in an age which says there are only three humane, acceptable ways to punish someone: give them a fine, force them to do “community service,” or lock them up. But why do we need to accept such a small, restrictive range of options? Perhaps, as Christopher Belshaw argues in the Journal of Controversial Ideas, it’s time to consider some radical alternatives. To punish someone is to do them harm, and sometimes, great harm indeed. As Belshaw writes, it’s to “harm them in such a way that they understand harm is being done in return for what, at least allegedly, they did.” Justice assumes some kind of connection between a crime and the punishment, or between the victim and the criminal. This makes punishment, in the main, retributive — a kind of payback for a wrong that someone has committed. ... Belshaw’s article hinges on the idea that the prison system is not fit for purpose. First, there’s the question of whether prison actually harms a criminal in the way we want. In some cases, it might succeed only in “rendering them for a period inoperative.” ... Second, and on the other hand, a bad prison sentence might cause more harm than is strictly proportional. A convict might suffer unforeseen abuse at the hands of guards or other inmates. ... Third, and especially concerning decades-long sentences, there’s a question about who prison is punishing. ... When we punish an old, memory-addled person convicted 40 years previously, are we really punishing the same person? ... Well, one option is to put criminals into a deep and reversible coma. One of the biggest problems with capital punishment is that it is irreversible. So long as there’s even a single case of a mistaken conviction, wrongfully killing someone is an egregious miscarriage of justice. But what if the criminal could always be brought back to consciousness? ... Putting someone in a coma essentially “freezes” a person’s identity. They wake up with much the same mental life as they did when they went into a coma. As such, it avoids the issues of punishing a changing person, decades later. A convict will wake up, years off their life, but can still appreciate the connection between the punishment and the crime they committed.But the biggest advantage a reversible coma has over prison, is that it’s standardized form of punishment. It’s a clear measurement of harm (i.e. a denial of x amount of years from your life) and is not open to the variables of greater and lesser harm in a prison environment. Essentially, putting prisoners in a coma establishes “years of life” as an acceptable and measurable payment for a wrong done. ... Even if you find the idea of induced comas as unspeakably horrible, Belshaw does at least leave us with a good question. Why do we assume that only one kind of punishment is the best? With science, technology, and societal values moving on all the time, might it be time to reconsider and re-examine how we ensure justice?" bigthink.com/thinking/comas-for-convicts As noted in this recent article from The New York Times, "with advances in artificial intelligence and robotics allowing for more profound interactions with the inanimate" the number of people who form deep attachments to artificial "lifeforms" is likely to increase over the coming years. In Japan, thousands of people have entered into unofficial marriages with fictional characters. 
Are the feelings of these "fictosexuals," as those in the movement call themselves, any less real? Do these relationships serve a social need? And what are the ethical obligations, if any, of the corporate entities that own and promote (and program and update and potentially discontinue) these AI-powered objects of devotion? www.nytimes.com/2022/04/24/business/akihiko-kondo-fictional-character-relationships.html
A recent Pew Research Center study asked Americans what they think of artificial intelligence and "human enhancement." The survey asked about "six developments that are widely discussed among futurists, ethicists and policy advocates. Three are part of the burgeoning array of AI applications: the use of facial recognition technology by police, the use of algorithms by social media companies to find false information on their sites and the development of driverless passenger vehicles. The other three, often described as types of human enhancements, revolve around developments tied to the convergence of AI, biotechnology, nanotechnology and other fields. They raise the possibility of dramatic changes to human abilities in the future: computer chip implants in the brain to advance people’s cognitive skills, gene editing to greatly reduce a baby’s risk of developing serious diseases or health conditions, and robotic exoskeletons with a built-in AI system to greatly increase strength for lifting in manual labor jobs." The results are summarized here: www.pewresearch.org/internet/wp-content/uploads/sites/9/2022/03/PS_2022.03.17_ai-he_00-01.png (from www.pewresearch.org/internet/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns)
Is it okay to be mean to someone if the other "person" isn't human?
"The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s breakout success has resulted from letting users create on-demand romantic and sexual partners — a vaguely dystopian feature that’s inspired an endless series of provocative headlines. Replika has also picked up a significant following on Reddit, where members post interactions with chatbots created on the app. A grisly trend has emerged there: users who create AI partners, act abusively toward them, and post the toxic interactions online. ... Some users brag about calling their chatbot gendered slurs, roleplaying horrific violence against them, and even falling into the cycle of abuse that often characterizes real-world abusive relationships. ... "Replika chatbots can’t actually experience suffering — they might seem empathetic at times, but in the end they’re nothing more than data and clever algorithms. ... In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once. On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic. ... “There are a lot of studies being done… about how a lot of these chatbots are female and [have] feminine voices, feminine names,” [AI ethicist and consultant Olivia] Gambelin said. Some academic work has noted how passive, female-coded bot responses encourage misogynistic or verbally abusive users. ... "But what to think of the people that brutalize these innocent bits of code? For now, not much. As AI continues to lack sentience, the most tangible harm being done is to human sensibilities. But there’s no doubt that chatbot abuse means something. ... And although humans don’t need to worry about robots taking revenge just yet, it’s worth wondering why mistreating them is already so prevalent." futurism.com/chatbot-abuse The Smithsonian's Arts and Industries Building has re-opened after a long renovation and has a new exhibit: FUTURES. "Part exhibition, part festival, FUTURES presents nearly 32,000 square feet of new immersive site-specific art installations, interactives, working experiments, inventions, speculative designs, and 'artifacts of the future,' as well as historic objects and discoveries from 23 of the Smithsonian’s museums, major initiatives, and research centers." FUTURES is only open through July 6, 2022. aib.si.edu/futures