
Microsoft Built an AI and the Internet Immediately Turned It Into a Nazi

News
by Matt Hershberger Mar 25, 2016

MICROSOFT JUST LEARNED THAT the internet is a dark, scary place. They started a Twitter chat bot named Tay that was designed to mimic the speech of a 19-year-old girl. The Artificial Intelligence (AI) was intended to help “conduct research on conversational understanding,” Microsoft said, and so when it went live, it was able to learn from the people it talked with and to mimic the way they talked.

The problem was that Microsoft didn’t design their bot to avoid tricky or political topics, meaning that it would mirror the opinions of the people who talked to it. This, naturally, was too good of an opportunity for the internet’s trolls to pass up, and things got real ugly, real quick.

Within 24 hours of going online, Tay had turned into a complete dirtbag racist. Aside from Holocaust denial and Nazism, the bot started parroting Donald Trump’s opinions on immigration, spouting 9/11 conspiracy theories, and, naturally, using a lot of horrible racial slurs.

Microsoft decided to shut it down. This isn’t to say that the bot wasn’t a success: it mimicked the people it was talking to perfectly. What Microsoft failed to take into account, however, was that the people it was mimicking were internet trolls. The internet turned a high-tech experiment in Artificial Intelligence into nothing more than just another jerk.
