Microsoft's nascent Bing chatbot turning testy or even threatening is likely because it essentially mimics what it learned from online conversations, analysts and academics said on Friday.

Tales of disturbing exchanges with the artificial intelligence (AI) chatbot have gone viral this week, including it issuing threats and speaking of desires to steal nuclear codes, create a deadly virus, or to be alive.

"I think this is basically mimicking conversations that it's seen online," said Graham Neubig, an associate professor at Carnegie Mellon University's language technologies institute.

"So once the conversation takes a turn, it's probably going to stick in that kind of angry state, or say 'I love you' and other things like this, because all of this is stuff that's been online before."

A chatbot, by design, serves up words it predicts are the most likely responses, without understanding meaning or context.
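
That design can be illustrated with a toy sketch: count which words tend to follow which in a body of training text, then always emit the statistically likeliest continuation. The example below is a hypothetical bigram model in Python, far cruder than the large language model behind Bing, but it shows how a system can produce fluent-looking replies by mimicry alone.

```python
from collections import Counter, defaultdict

# Toy "training data": the sort of online chatter a model learns to imitate.
corpus = "i love you . i love you . i am angry . you are wrong .".split()

# Count which word follows which (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict(word: str) -> str:
    """Return the statistically likeliest next word, with no grasp of meaning."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "."

# Generate a reply by repeatedly emitting the likeliest continuation.
word, reply = "i", ["i"]
for _ in range(3):
    word = predict(word)
    reply.append(word)

print(" ".join(reply))  # prints: i love you .
```

The chatbot's actual model predicts over patterns learned from vastly more text, but the principle the researchers describe is the same: once a conversation steers toward anger or affection, the likeliest continuations it has seen online pull it further in that direction.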

However, humans taking part in banter with programs naturally tend to read emotion and intent into what a chatbot says.
