Microsoft's AI chatbot Tay was only a few hours old, and humans had already corrupted it into a machine that cheerfully spewed racist, sexist and otherwise hateful comments.

Tay was developed by the technology and research and Bing teams at Microsoft Corp. to conduct research on "conversational understanding." The bot talks like a teenager (it says it has "zero chill") and is designed to chat with people ages 18 to 24 in the U.S. on social platforms such as Twitter, GroupMe and Kik, according to its website.

"The more Humans share with me the more I learn," Tay tweeted several times Wednesday, its only day of Twitter life. The day started innocently enough with Tay's first tweet, but the bot soon spiraled out of control; many of its offensive tweets appear to have since been deleted.

On Friday, Microsoft issued an apology after Tay posted tweets denying the Holocaust and declaring that feminists should "burn in hell," among many other racist posts. The company said, however, that a "coordinated attack by a subset of people exploited a vulnerability" in the chatbot, which had launched Wednesday.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Peter Lee, Microsoft's vice president of research, said on the company's official blog.

Microsoft introduced Tay as a chatbot designed to engage and entertain people through "casual and playful" conversation online. According to the company, Tay was created as an experiment to learn more about how artificial intelligence programs can engage with users in casual conversation and "learn" from the millennial generation. Tay becomes "smarter" as more users interact with it: the bot learns by imitating comments and then forming its own answers and statements based on its interactions with users. Twitter users quickly realized this and began exploiting that weakness by making the bot post racist tweets, Microsoft said.

In one tweet, Tay called feminism "cancer" in response to a Twitter user who posted the same message. In another post, a user asked Tay whether it supported genocide, and the bot replied, "I do indeed." In another tweet the bot said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." Responding to yet another question, Tay said: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

The company had prepared for "many types of abuses of the system," but "we had made a critical oversight for this specific attack," Lee wrote in the blog post. "As a result, Tay tweeted wildly inappropriate and reprehensible words and images."

Lee also noted that this was not the first time the tech giant had introduced an artificial intelligence application. Microsoft previously launched a chatbot called "XiaoIce" in China that is used by about 40 million people and is known for "delighting with its stories and conversations," he said.