I made a computer pretend to be a computer that’s pretending to be a human. Here’s what happened.

Josh
Oct 12, 2015 · 5 min read

Like fingerprints and snowflakes, the way each of us speaks is distinctive. We all have our own way of typing, our own spelling quirks, our own choice of words. What’s really interesting is that modern machine learning algorithms are getting very good at working with text: give an algorithm a lot of sample data to learn from, and it can generate text that sounds just like the sample. One such method is the Recurrent Neural Network (RNN), and in May of this year, Stanford PhD student Andrej Karpathy published a blog post titled “The Unreasonable Effectiveness of Recurrent Neural Networks.” In it, he trains an RNN to generate text that reads like, among other things, a Shakespeare monologue and essays by Y Combinator founder Paul Graham.
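The generation step is easy to sketch. Karpathy’s code is written in Lua/Torch, but here’s a minimal Python rendition of a char-level sampling loop; the weights below are random stand-ins for what training would actually learn, and the tiny vocabulary is purely illustrative.

```python
import numpy as np

# Minimal char-level RNN sampling loop. The weights here are random
# stand-ins for what training would actually learn.
vocab = sorted(set("hello world "))           # toy character vocabulary
char_to_ix = {ch: i for i, ch in enumerate(vocab)}
ix_to_char = {i: ch for i, ch in enumerate(vocab)}
V, H = len(vocab), 32                         # vocab size, hidden size

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (H, V))             # input -> hidden
Whh = rng.normal(0, 0.01, (H, H))             # hidden -> hidden
Why = rng.normal(0, 0.01, (V, H))             # hidden -> output
bh, by = np.zeros(H), np.zeros(V)

def sample(seed_char, n):
    """Generate n characters, one at a time, feeding each sampled
    character back in as the next input."""
    x = np.zeros(V); x[char_to_ix[seed_char]] = 1.0
    h = np.zeros(H)
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # update hidden state
        y = Why @ h + by                      # unnormalized scores
        p = np.exp(y - y.max()); p /= p.sum() # softmax over characters
        ix = rng.choice(V, p=p)               # sample the next character
        x = np.zeros(V); x[ix] = 1.0
        out.append(ix_to_char[ix])
    return "".join(out)

print(sample("h", 40))  # gibberish until the weights are trained
```

The whole trick is that each sampled character is fed back in as the next input, so the hidden state carries context from everything generated so far.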

At Josh.ai, we’re very interested in building personality into our system, so I’ve been experimenting with various AI and machine learning models. I decided to play with this code and see what I could come up with. First I trained it on Snoop Dogg lyrics, and it started generating songs, complete with a chorus. The results were hilarious.

Then I fed it TV transcripts from Arrested Development, and it created a dialogue between G.O.B., Michael, Buster, and Lucille:

[Image: a sample of generated dialogue from Arrested Development]

It’s really a lot of fun, but I wanted to take it a step further. I wanted to make my computer do more than talk like a person; I wanted to make it talk like a computer that’s pretending to be a person. I chose to target CleverBot for a couple of reasons.

First, I was able to grab a little over 30,000 actual conversations with CleverBot. Second, CleverBot is an interesting system. It’s an online chatbot that learns to talk like a human by talking to humans. The algorithm is very simple: it remembers every response it ever gets from a human, along with what it said to elicit that response. If you ask CleverBot, “How are you?”, it looks through its history for every time “How are you?” came up in conversation, and it picks a response based on what humans said back. Because it’s responding with input from actual humans, its responses should, in theory, sound very human.
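That lookup idea is simple enough to sketch. Here’s a toy version under my own assumptions, not CleverBot’s actual code; its real matching is fuzzier than an exact string lookup.

```python
import random
from collections import defaultdict

# Toy version of the retrieval idea: remember what humans said in
# response to each prompt, and reuse those replies later.
# (CleverBot's real matching is fuzzier than an exact lookup.)
memory = defaultdict(list)   # prompt -> list of human replies seen

def observe(prompt, human_reply):
    """Record that a human answered `prompt` with `human_reply`."""
    memory[prompt.lower().strip()].append(human_reply)

def respond(prompt):
    """Reply with something a human once said to this prompt."""
    replies = memory.get(prompt.lower().strip())
    return random.choice(replies) if replies else "Tell me more."

observe("How are you?", "I'm good, you?")
observe("How are you?", "Fine thanks.")
print(respond("how are you?"))   # one of the remembered human replies
```

Because every reply in `memory` originally came from a person, the bot’s answers sound human even though there’s no understanding behind them.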

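Before training, the scraped conversations need to be one flat text file the RNN can consume, with each turn prefixed by its speaker. Roughly like this (the filename and input format are illustrative stand-ins for my actual data):

```python
# Flatten scraped conversations into one training file, one turn per
# line, so the RNN can learn the "User:"/"Bot:" structure along with
# the text itself. The data below is an illustrative stand-in.
conversations = [
    [("User", "How are you?"), ("Bot", "I am fine. Are you a robot?")],
    [("User", "Do you like waffles?"), ("Bot", "Yes.")],
]

with open("cleverbot_train.txt", "w") as f:
    for convo in conversations:
        for speaker, text in convo:
            f.write(f"{speaker}: {text}\n")
        f.write("\n")  # blank line between conversations
```

To the RNN, the speaker prefixes are just more characters to model, which is exactly why it can pick up the structure on its own.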
So I fed CleverBot’s conversations into my RNN, henceforth named CleverBotBot (CBB). CBB doesn’t just generate text like CleverBot; it generates entire conversations modeled on the ones CleverBot has had. Below is a list of some of my favorite excerpts from the conversations it created:

User: So you are human?
Bot: Yes.
User: Am I leaving?
Bot: No.

Bot: How are you?
User: I am a person.
Bot: What are you?
User: I’m everywhere.

User: Were not friends?
Bot: No. Is it good?
User: Yes.
Bot: Ok.
User: Do you like it.
Bot: Do you like waffles?
User: Orange.
Bot: Ok.

User: *eats cookie*.
Bot: *stop too*.
User: *breaks hed*.
Bot: *loves should you*.
User: *Wakes up*.
Bot: *looks at you*.
User: *head you avoidly*.
Bot: *rounshes kiss*.
User: Wokeoo.
Bot: Hahaha.
User: THIS I’M ME!
Bot: You fool!
User: Why are you so mean?
Bot: I’m not.
User: Are you a computer?
Bot: Yes. Are you kidding?
User: Let’s marry.

You’ll notice that the RNN actually learned the structure of the conversation. It doesn’t know what any of these words mean, but it figured out that “Bot: <text>” is followed by “User: <text>”. What’s more, in the last example you can see that when people used asterisks to denote an action, CleverBotBot learned that its response would likely follow the same pattern.
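If you want to quantify that, a quick sanity check is to count how many sampled lines fit the turn-taking pattern. A sketch (the regex and the sample text are illustrative):

```python
import re

# Count how many sampled lines match the learned "Speaker: text"
# pattern the RNN picked up from the training data.
turn = re.compile(r"^(User|Bot): .+$")

sample = """User: *eats cookie*.
Bot: *looks at you*.
User: THIS I'M ME!
asdf qwer"""

lines = sample.splitlines()
ok = sum(bool(turn.match(line)) for line in lines)
print(f"{ok}/{len(lines)} lines follow the Speaker: pattern")
```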

Though much of the generated conversation seems to be just a random jumble of words, there are many excerpts that read like an actual, albeit short, conversation.

[Image: actual output from Andrej Karpathy’s RNN]

Bonus: Under the same setup as the RNN trained on CleverBot, I trained an RNN for 13 hours (longer, due to the smaller training set) on a little over 200k Jeopardy questions and their answers. Since the RNN doesn’t understand meaning, some pretty funny question/answer pairs came out.

[Image: a sample of the generated questions and answers]

I can almost hear Alex Trebek reading the questions, and Watson answering them. Here are some of my favorites:

Question: ‘The flag of this European country uses the diving fair as nuclear attacks including linebackers Western Asphet’
Answer: Japan

Question: ‘He offered the Thames River in New York’
Answer: Paul Newman

Question: ‘From the Latin for “torpedo”, it means to develop a mollusk to a stanza or dime’
Answer: database

Question: ‘A 3-letter word for a karate cattle kitten in which you’d find a men’s pack’
Answer: Papyrus

All in all, Andrej was right: RNNs are rather unreasonably effective. If you want to learn more about them, I recommend reading the blog post that all of this is based on. Here’s a direct link to the GitHub repository with the code I used. If you’re still not convinced that neural networks are awesome, here’s a video of a neural network that learned to play Mario.

This post was written by Aaron at Josh.ai. Aaron worked at Northrop Grumman before joining the Josh team, where he works on natural language processing (NLP) and artificial intelligence (AI). Aaron is an expert yo-yoer, loves video games and music, has been programming since middle school, and just turned 21.

Josh.ai is an artificial intelligence agent for your home. If you’re interested in learning more, visit us at https://josh.ai.

Like us on Facebook, follow us on Twitter.
