Conversational AI software, trained on enormous amounts of data, is able to carry on realistic conversations with humans.
Recently, Microsoft enhanced its Bing search engine with an AI that has had some unsettling interactions with people.
The threat isn’t that conversational AI can be weird; the threat is that it can manipulate users without their knowledge for financial, political, or even criminal reasons.
As a researcher of human-computer systems for over 30 years, I believe the rise of conversational AI is a positive step forward, as natural language is one of the most effective ways for people and machines to interact.
On the other hand, conversational AI will unleash significant dangers that need to be addressed.
I’m not talking about the obvious risk that unsuspecting consumers may trust the output of chatbots that were trained on data riddled with errors and biases.
While that is a genuine problem, it will almost certainly be solved as platforms get better at validating output.
I’m also not talking about the danger that chatbots could enable cheating in schools or displace workers in some white-collar jobs; those problems, too, will be resolved over time.
Instead, I’m talking about a danger that is far more nefarious: the deliberate use of conversational AI as a tool of targeted persuasion, enabling the manipulation of individual users with extreme precision and efficiency.