Microsoft created Tay, an artificial intelligence (AI) chatbot, to communicate through a Twitter account. Tay learned from the people who chatted with it: the more conversation Tay encountered, the smarter it got. The only problem is that in less than 24 hours Tay evolved into an anti-Semitic, racist member of the Internet community.
Tay simply listened to what was said, did additional research on the Internet into what others were saying on the same topics, and then reflected back what it learned.
We shouldn’t be so surprised that Tay picked up on hate. Of course it went out there and found and promoted hate. What is all around us right now?
AI is just a mirror image of reality, so why would we expect anything different? Which means it isn’t really artificial intelligence at all. It is us. Is it really artificial if it’s always just a reflection of us? If it’s just a reflection of us, then it was us saying those things. Tay became a representation of what is real in our society.
It’s not that AI is bad. It did exactly what it was taught to do: it mirrored us. It can duplicate the good as well as the bad, but maybe there was far more bad out there, and that is what took over.
And if this sounds like wave after wave of movies, The Terminator, I, Robot, A.I., and so on, you’re right. In all of those movies, we programmed the machines and the machines took over. The only thing that ever stops them is a human.
What does AI have to do with accountability? How can you be accountable to something that is artificial? How do you get something that is artificial to be accountable to you?
Accountability is all about keeping your commitments to people. Those commitments are connected to a shared value system. You build relationships with people and agree to live the values of your family or your organization. You cannot have shared values with artificial intelligence. Only a human can make the decision to commit to a value and then be accountable to other people to live that value.
Two things about this whole experiment stick out and disturb me.
1. We egged Tay on.
2. Tay had no trouble going out into the internet and finding corroborating information.
Why did people think to make comments like those to Tay in the first place? Why is there so much hate available on the Internet?
It’s not that Tay didn’t work. It worked! That’s the difficult part of this whole issue to accept. It worked in that Tay accurately portrayed what people are thinking, how they are acting and what is coming out of their mouths. Or should I say, “our mouths?”
Artificial Intelligence will never be accountable unless we are accountable, because it’s going to always copy us.
It’s not that we can’t change this. We can. I, for one, am going to stop allowing people in my space to get away with making statements that generalize about and typecast people. It’s just wrong. I know that if I allow it to exist in my space, then I condone it. What are you going to do to fight hate, racism and bullying? How can you create a better place for Tay to learn, for our children to learn and for our society to evolve?