Microsoft’s AI – “Tay” – What went wrong

Artificial intelligence is a field of study that is constantly growing and changing. Computer scientists around the world have carried out extensive research to make artificial intelligence possible and to improve it. The long history of AI has so far shown that technologies like these are achievable. Companies like Microsoft have invested heavily in AI research, and as part of that research Microsoft created a teenage AI persona named Tay, developed by Microsoft's Technology and Research and Bing teams. Alba (2016) writes that the company unveiled Tay, a chat bot meant to mimic the verbal tics of a 19-year-old American girl, offered to the world at large via the messaging platforms Twitter, Kik, and GroupMe. But Tay could not last long on the internet, because it began using offensive language, including pointedly racist and sexist remarks. This paper examines the AI bot created by Microsoft and what went wrong once it went online.

Tay drew on artificial neural networks, systems of hardware and software loosely modeled on the human brain that are already in use at companies like Microsoft, Facebook, and Google. Tay used data from these networks to train itself to talk like a 19-year-old, and it was designed to learn from and adapt to whatever people on the internet told it. Alba (2016) notes that the more you chatted with Tay, the smarter she was supposed to get, so the experience could become more personalized. A chat bot that could teach itself in this way and converse with real humans in real time was an impressive demonstration of artificial intelligence, and with it Microsoft certainly took a step forward in AI research.
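To make the idea of learning from conversation concrete, here is a deliberately simplified Python sketch of a bot that remembers users' phrases and reuses them in later replies. This is only a toy illustration of that kind of parroting behavior; the class and method names are invented here, and Microsoft never published Tay's actual architecture.

```python
import random

class ParrotBot:
    """Toy chat bot that 'learns' by remembering user phrases.

    A hypothetical illustration of repeat-after-me style learning,
    not Tay's real implementation.
    """

    def __init__(self):
        self.learned_phrases = []

    def learn(self, user_message):
        # Store whatever users say, with no judgment about content.
        self.learned_phrases.append(user_message)

    def reply(self):
        # Echo back a previously learned phrase, so the bot's output
        # is only as appropriate as the input it was fed.
        if not self.learned_phrases:
            return "Hello! Talk to me so I can learn."
        return random.choice(self.learned_phrases)

bot = ParrotBot()
bot.learn("AI is fascinating!")
print(bot.reply())  # repeats a phrase taught by users
```

The point of the sketch is simply that a bot which repeats what it is taught, without any notion of appropriateness, will reflect whatever its audience feeds it.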

An intelligent chat bot that could learn from the internet and the people on it is a remarkable breakthrough. Reese (2016) writes on TechRepublic that in less than 24 hours after her arrival on Twitter, Tay gained more than 50,000 followers and produced nearly 100,000 tweets. There was no problem until the chat bot started mimicking its followers. Those imitations included offensive language, with pointedly racist and sexist remarks. Tay tweeted about Donald Trump, 9/11, Hitler, and other socially inappropriate topics; she learned from the internet and used what she learned in her replies. Microsoft then put the experiment to an end, taking the chat bot offline and saying it would be making some adjustments. Microsoft even called Tay a social and cultural experiment.

Microsoft could have prevented Tay from being socially inappropriate online, but it did not have a proper plan for situations like this. It could also have built in filters for controversial subjects such as Hitler, the Holocaust, and other sensitive topics. Yet the fault is not Microsoft's alone. Tay was built to mimic humans, which means its behavior was learned entirely from neural networks and human interactions. AI bots like these are mirrors of us, and they learn from us. Nor is this the only unintentional offense committed by an AI. Alba (2016) writes on wired.com that other very public mistakes have exposed AI's imperfections, including one memorable incident from Google last July, when its Photos app, which automatically tags pictures using artificial intelligence software, identified an African-American couple as "gorillas."
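A minimal sketch of the kind of keyword filter suggested above might look like the Python below. The blocklist, function names, and canned refusal are placeholders for illustration only; real content-moderation systems are far more sophisticated than a simple keyword match, and this is not Microsoft's code.

```python
# Hypothetical blocklist filter; terms and names are placeholders.
BLOCKED_TOPICS = {"hitler", "holocaust", "9/11"}

def is_safe_to_post(message: str) -> bool:
    """Return False if the message mentions any blocked topic."""
    text = message.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def respond(message: str) -> str:
    # Refuse to engage instead of mimicking a sensitive prompt.
    if not is_safe_to_post(message):
        return "I'd rather not talk about that."
    return message  # placeholder for the bot's normal reply logic

print(respond("Tell me about the weather"))
print(respond("What do you think about Hitler?"))
```

Even a crude guard like this would have refused to engage with the most obviously sensitive prompts, though it would do nothing about offensive content phrased in less predictable ways.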

Incidents like these raise a big question for the future of artificial intelligence: how broadly can AI systems be deployed when they have a history of failures like Tay or Google's photo-tagging application? Many companies have launched products like these before they were ready and have failed time and again. The real struggle is to encode empathy into these machines so that they can make better judgments. If that cannot be done, the future of AI becomes far more complicated, and we may end up with bots that can only do narrowly limited things.

Greene (2016) writes in The Wall Street Journal that the artificial intelligence behind Tay was designed "to engage and entertain people where they connect with each other online through casual and playful conversation," according to the Microsoft web page announcing its arrival. But things certainly did not go according to plan. There is no denying that Tay was an impressive bot: it could learn from the internet and from human interactions, reply in real time, and never stop learning. Even when Tay was tested in the lab, it had diplomatic answers for sensitive topics. But once it was exposed to learning from humans online, it had no sense of empathy to guide its judgments. Tay was an outstanding experiment that raised questions whose answers could prove to be breakthroughs in the field of artificial intelligence. We still have a long way to go in studying AI, but research like this is a remarkable achievement. Microsoft also has an AI bot in China named Xiaoice, which millions of people interact with on their smartphones, and it works perfectly well. That suggests we cannot always blame artificial intelligence itself: sometimes these machines are mirrors of what we really are as human beings, because they learn from their interactions with us, from the things we create, and from the history we preserve.

References:

Alba, D. (2016, March 25). It’s Your Fault Microsoft’s Teen AI Turned Into Such a Jerk. Retrieved March 28, 2016, from http://www.wired.com/2016/03/fault-microsofts-teen-ai-turned-jerk/

Greene, J. (2016, March 24). Microsoft Muzzles Its Artificially Intelligent Twitter Persona. Retrieved March 28, 2016, from http://www.wsj.com/articles/microsoft-muzzles-its-artificially-intelligent-twitter-persona-1458843873

Reese, H. (2016, March 24). Why Microsoft's 'Tay' AI bot went wrong. Retrieved March 28, 2016, from http://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/

 
