Is Google’s conversational artificial intelligence hiding a sensitive soul?
Whimsical and unsubstantiated

In the British newspaper The Guardian, Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, is of the opinion that “Lemoine’s claims about LaMDA’s sentience are entirely fanciful.” “We may not be able to rule out that a sufficiently powerful computer will become sentient in the distant future. But it’s not something that the most serious artificial intelligence researchers or neurobiologists would consider today,” he continues.

Asked by the New Scientist, Adrian Hilton, director of the Institute for People-Centred Artificial Intelligence at the University of Surrey (UK), also believes that AI sentience is a bold claim that is not supported by the facts. “I don’t believe at the moment that we really understand the mechanisms behind what makes something sentient and intelligent,” Hilton tells the science journal. “There’s a lot of hype around AI, but I’m not convinced that what we’re doing with machine learning, at the moment, is really intelligence in that sense.”

Other immediate challenges

For the time being, AI faces more immediate questions than whether it will one day overtake us all: the ethical issues and the safety of the models. Benoît Frénay, professor at the Faculty of Computer Science at UNamur, asks today what we can do “so that it is safe and ethical to use.” At a meeting for a large project on “trusted” AI, he reports, the discussions concern, for example, the use of AI in human resources: “when we are going to choose, for example, whether or not to hire a person, there are real ethical questions to be asked, questions of fairness: are we going to employ someone or not because of their gender, their age, their ethnic origin, and so on? These are obviously things that are unacceptable and that we, AI specialists, must make sure to prevent. If you’re in a labor market that’s racist, sexist, or whatever, an AI that’s trained on it unawares will reproduce those biases.”

Possible or not, in a more or less distant future

On whether AI could ever gain consciousness, experts are divided. Some believe it is simply impossible, because the architecture of the algorithms does not lend itself to it. Others see no reason why progress in machine learning and robotics should not one day allow consciousness to emerge in algorithms. Without knowing the future, we do know that the present is already full of ethical issues. In November 2021, the French National Consultative Ethics Committee for the Life Sciences published recommendations on the ethical issues of conversational agents. We will leave it the last word, by way of conclusion: “When conversational agents use human language, the anthropomorphization of chatbots by their users is a natural tendency: the sophistication of these technologies tends to blur the perceived boundary between machines and human beings.”
