John Searle's famous "Chinese room" thought experiment posits that machines can never truly think -- they merely execute commands without understanding. An experiment published earlier this summer seems to nibble away at those boundaries.

Researchers at Facebook's Artificial Intelligence Research Lab were trying to teach bots to negotiate. They supplied the bots with 5,808 examples of human negotiations to learn from. In subsequent trials, the bots not only turned out to be very successful negotiators but also exhibited two remarkable behaviors: (1) they unexpectedly developed their own non-human language to speed up their negotiations, and (2) they developed human-like negotiation tactics, including deceit. As the researchers put it:

"[W]e find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it. Deceit is a complex skill that requires hypothesising the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals."

Will this experiment mark the beginning of the end of Searle's "Chinese room" objections? The full paper is at arxiv.org/pdf/1706.05125.pdf