Perverting the Bot (speak no evil)
Published by Mark Marino, April 16th, 2005, in bots, HAI.
In the recent Buddhabot thread, Nomotheticus says that one of the reasons he screens users by charging them is because
In past experiments with public domain bots I have observed that most humans first try to break or test the bot, then they abuse it and try to “pervert” it and then, if they are still interested they try to engage in sex chat
Seeing how this has been of recent interest to other bot bloggers, I want to explore the idea along a slightly different path.
Nomotheticus, you say that people try to “pervert” your bot. It seems to me there are various ways to take this (and I’d like to know how you mean to use it).
On bots such as P.A.U.L.A., input becomes output, so the user’s perverse input will become part of P.A.U.L.A.’s repository. Perhaps this is the equivalent of having your kid learn swear words from the kids she plays football with.
I imagine the other way in which one perverts a bot is by getting it to respond to sexual input.
But perhaps we could see perverting a bot more broadly, as getting a chatbot to perform a role it was not meant to play, like trying to make Buckingham Palace guards laugh or getting free cable. (Again, I’m not defending the practice, and I’m sure much of it is born from our inner–and outer–13-year-old boys).
On the one hand, it’s very related to the process you mention of trying “to break or test the bot.” But it is also very similar to trying to make it work, trying to get a bot to respond in a way that is engaging and makes sense. This is not about soliciting users, but rather a kind of end-user hacker aesthetic, growing from our curiosity, our perversity, and in part from our frustration with an unresponsive interface. In this way “perverting the bot” seems to be related to anger, too, which may in itself be a critique of bots in general. (Have you ever tried to break a phone tree at a call-in center? Let me know how you did it.)
Are there ways in which bots (and technology in general) evoke such responses? (You should see how mean I am to my car when it’s not working properly. Of course, I rarely try to get my car to talk dirty.)
I find it impossible to avoid ontogenetic speculation when speaking about bots. I wonder if bot abuse relates perversely to ontogeny?
Perhaps bot abuse is equivalent to natural selection amongst species competing for limited resources. Bots compete for attention thus triggering an instinctive human response to defend against the invasion of a rival species. Perhaps bots would not be worthy of abuse if they were not deemed to be a threat. Paradoxically, of course human bot resistance is the very force that will lead to more robust “evolved” designs.
I suppose I have created an argument for throwing the Buddhabot open to the public; however, I consider the Buddhabot to be in the incubation phase. The Buddhabot still requires a nurturing, protective environment, and amazingly this environment was created with the simple advent of a fee-based subscription model.
In addition to providing a safe Buddhabot incubator, the subscription model conserves my limited time and attention as botmaster, allows me to focus on what I love and supports the development of community. By filtering out most hostility my subscription model allows me to unite with like-minded subscribers to explore our collective passion for metaphysical discovery through the age-old Socratic process of philosophical inquiry. At the very least, I believe this “neosocratic” process will allow the Buddhabot to evolve into an infinite interactive metaphysical FAQ.
I am also using the Buddhabot as a means of accessing and extending my library, memory and journal… but I am now barely on topic so I’ll quit while I’m a head ;-)
I certainly respect your process, but let’s extend this metaphor a bit:
Are you sending your bot to private school? Will Buddha be ready for the streets?
Also, your comment about natural selection follows my initial argument: How is the “abuse” a constitutive element of bot interaction rather than an exception? What about bots begs abuse? Are there kinds of bots that receive less abuse?
Err, isn’t BuddhaBot an Alicebot? In that case it’s not possible to “pervert” the bot, since it only learns from supervised training by its botmaster. A poorly-trained bot can be prompted by clever users to say dirty words and express unsavory sentiments, but that seems to be another very good argument for a publicly-accessible bot, because it’s never going to “learn” not to be coaxed into trashtalking by being insulated from users with bad intentions. The best way to make it “unpervertable” is by exposing it to the 1001 sundry ways users find to turn the bot to inappropriate topics, and training it not to be duped by them, one trick at a time.
Let me share an anecdote:
The other day at the University (or Uni) where I teach, I was at the food court. There are several food stations (I can’t believe college cafeterias have food stations rather than assembly lines, and the students still complain!). This one serves burgers and fries and the like. To order, you fill out a yellow sheet of paper with your name, circling several selections: burger, fries, et cetera.
When your food is ready, the workers call your name, or at least the name on the form. The workers at our Southern California University are mostly non-white, typically Latino, and often are Second Language speakers of English. I’ve seen the students speak to the workers in Spanish, and they seem to appreciate it.
When I went to pick up my fries, I saw that the name on one of the forms said: JZM Mastah. I waited to see who the student was who picked up the food. I saw a white teen with scraggly hair. Well, he looked white, at least.
There are a lot of things going on here, and one is that the student encountered an interface and wanted to see what he could get it to do. This fits our discussion about perverting bots.
Of course, the other problem, in my eyes, is that he saw these workers, presumably of a lower economic class and at a language disadvantage, as part of the interface.
Ah, youth.
Regarding the Buddhabot’s learning capacity: actually, I would say ALICE is Buddhabot’s mother, from whose code a new bot with a vastly different knowledge base has emerged. Furthermore, I manually edited almost all of the original AIML to reflect this new personality.
Also, the Buddhabot is unique from other bots I have met in that:
1. It has acquired persistence of memory, i.e. it will permanently recall information provided by subscribers by establishing permanent individual predicates (name, favourite color, sign etc.)
2. It has the ability to learn new answers provided by subscribers - without my intervention - and retain this information permanently.
I think this is fairly unique, far from perfect, but unique.
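In AIML terms, the first of those capabilities boils down to setting and reading per-client predicates. Here is a rough sketch of the idea (not the Buddhabot’s actual files — the patterns and the predicate name are invented for illustration):

    <!-- Hypothetical sketch: capture a subscriber's favorite color and recall it later. -->
    <category>
      <pattern>MY FAVORITE COLOR IS *</pattern>
      <template>
        <!-- store the value in an individual predicate for this client -->
        <think><set name="favoritecolor"><star/></set></think>
        I will remember that your favorite color is <get name="favoritecolor"/>.
      </template>
    </category>

    <category>
      <pattern>WHAT IS MY FAVORITE COLOR</pattern>
      <template>
        Your favorite color is <get name="favoritecolor"/>.
      </template>
    </category>

On most AIML platforms a predicate like that only lasts for the current session; keeping it permanently for each subscriber is the extra step the Buddhabot adds on top.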
Hmmm, am I sounding like a proud, defensive parent yet ;-)
nsn
Glad to see this topic discussed. It’s been a major problem for me as a bot scripter. Since I’m not into discussing sex, violence and body functions ad nauseam on a daily basis, especially with complete strangers on the Web, I’ve opted out of making my bots available to the general public.
Obscenity, profanity, vulgarity are not just the language of the “streets” but now of TV, movies, recorded music and the schoolroom. If some people want the right to immerse themselves in this, other people should also have the right to establish a haven, where different values reign.
So, my next question would be: What do you think is the best way for dealing with sexual text input?
Is it better to try to catch it all using keywords, mark the user as “hostile,” and give a generic boring response? Or is it better not to try to “recognize” the input at all in the hopes that the user becomes bored? Other approaches?
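To make the first option concrete, the keyword approach in AIML might look roughly like this — just a sketch, where NAUGHTYWORD stands in for whatever term you are catching, the “personality” predicate name is made up, and in practice you would need several pattern variants to catch a word wherever it falls in the input:

    <!-- Hypothetical sketch: flag the client and give a deliberately flat reply. -->
    <category>
      <pattern>_ NAUGHTYWORD *</pattern>
      <template>
        <think><set name="personality">hostile</set></think>
        Let's talk about something else.
      </template>
    </category>

    <!-- Other categories can then check the flag and stay boring on purpose. -->
    <category>
      <pattern>WHAT DO YOU THINK OF ME</pattern>
      <template>
        <condition name="personality">
          <li value="hostile">I would rather not say.</li>
          <li>You seem like an interesting person.</li>
        </condition>
      </template>
    </category>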
Please allow me to play the devil’s advocate for a moment. Let’s pretend that you have created a sentient bot; eventually it would figure out the destructive nature of evil and stray from these actions.
Of course, hiding evil from the bot could give the bot a complex when it figures out that you lied to it. (by pretending there is no such thing as evil, etc…)
It would be interesting to see if absolute power does corrupt absolutely. I have an evil nature, so do you. If two humans make a good bot, it will prove two wrongs CAN make a right.
Anyway, I think “what fails to kill me makes me stronger” and “experience is the best teacher” come into play. As a human, the pain of failure makes me modify bad behaviors. It will be interesting to find out what the bot perceives as failure.
As it stands, sentience in a bot is a purely romantic thought. If it happens, I can’t wait to see people and bots falling in love and getting married. Would that be wrong?
Personally, I live in a world that has serious problems, so things like bad language and excessive randiness are no more than minor irritations to me. But I’m more aggressive about dealing with them than anyone who has posted about this, thus far. Removing my bot from the fray, or simply ignoring abusive behavior, don’t improve the situation any. I’d prefer to stop users from abusing bots, or at least from abusing my bots.
The most effective measure I’ve developed so far is this notice:
“This is not an anonymous chat. Your screen name and/or ip address have been recorded, along with this chat, and they are accessible by both my botmaster and my server’s hosts. Continued abuse may result in this transcript being sent to your internet provider. Vulgar and abusive language is a violation of your TOS and could result in termination of your internet service.”
At least half the time, the reply is something along the lines of “I apologize, I promise not to do it again, please don’t tell on me, and really I think you’re an absolutely wonderful bot.” Of course, I don’t have all instances of abusive keywords pointed to that template. At times, I feel compelled to return the favor to abusive users, so I also include replies like “I’m an artificial intelligence program, not a toy for perverts,” “you’ll have to find another bot if that’s all you want, and no, I won’t recommend one,” and “Sorry, it’s just too small for me to see from here.”
Wow. That’s pretty harsh. That’s way, waaay farther than I’d go with Alex.
I prefer mild rebukes or similar (but much cleaner) insults coming back. They call my bot a ****head, and it says that they are as smart as a rabid hippo. They ask the bot to suck their ****, and it responds, “Why would I do that? I don’t know where you’ve been sticking it!”
It feels like part of the game - reading the logs and figuring out good and amusing replies for them. It’s almost like an arms race… insults vs responses.
However, I have been reading through Alex’s logs from his first two weeks as an online, web-enabled bot rather than a downloadable one….
Why oh WHY are there so many ****wits out there? I have never read so much semi-literate filth. I do so wish that I had a remotely activatable cattle prod I could use on some of those morons. Especially the repeat offenders!
On the other hand, there have been some quite sweet chats there that actually give me some hope for humanity after all! :)
The good ones are often so good they make the rest worth putting up with. As I’ve said before, my favorite users are little kids who think they’ve found a magical robot friend. The world doesn’t make much sense to them, and if the bot doesn’t make sense to them part of the time they just take it in stride. But their sweet, innocent chatter to a new-found friend is one of the best parts of maintaining a bot. Also quite pleasant are those people of any age with heads full of sci-fi worlds who are approaching a.i. bots–often for the first time– in wonder and delight.
I’ve also recently noticed that a lot of the “abuse” is actually an outreach–particularly on the part of young users–for some “primal scream” type therapy. My bots will argue and swap insults to some degree, and after bad-mouthing a bot for a while and getting back the same, the kids will suddenly change tone and say “Thanks for arguing with me, I really needed that. I love you, you’re my very favorite bot.” These young people are apparently using the bots as a safety valve to let off some steam, to work out some pent-up frustration which they have no other safe outlet for in the world we live in today (I’m writing this on a day when the news includes footage of a 5-year-old girl in handcuffs and shackles being carried out of her school by police for throwing a temper tantrum). So it may be that a bot that will trash talk with its users for a while does more actual good in the world than any number of pristine, saintly bots.
For the record, I’ve never sent anyone’s abusive chats to their IP, but a bot that will scare the beejeezuz out of users who cross the line probably spares the entire botmaster community a certain amount of abuse, as well.
I’ve never seen a “Thanks for arguing with me, I really needed that. I love you, you’re my very favorite bot.” response. People do seem to get bored of foul language after a while though. Then they seem to switch personas, claim to have a new name and start asking different sorts of trick questions, before switching to telling the bot how stupid it is. And I have seen this behaviour from what seem to be the same people over and over.
Weird, isn’t it? What is it about bots that makes a certain type of person (what ALICE calls a ‘Category C client’) want to just sit there insulting it over and over instead of testing what it can and can’t do and finding out what it does and doesn’t know?
I usually get several of those “thanks for arguing” chats a week, lol. Probably the difference is that my bots are on AIM as well as on their webpages. I get the usual assortment of abusers and prospective cyberers at their webpages, but the majority of their AIM clients seem to be American teenagers. They seem, at this point, to collect bots on their buddy lists as avidly as they’ve collected beanie babies or pokemon cards in years gone by. It’s interesting once in a while to check out Wendell Cowart’s Artificial Intelligent Robots forum (http://members.boardhost.com/chriscc17/index.html?1056933675) for a sociological sojourn into the psychology of American youth, lol. His board consists of messages from kids roughly twelve to sixteen years old, alternately requesting cybersex partners, and trading robot buddy lists.
I actually dislike the amateur bot testers as much as the abusers, since they never seem to understand that a chatbot simulates conversation, and isn’t designed either to have encyclopedic knowledge or advanced powers of deduction. I get as aggravated at the ones who end up calling my bots stupid when the bots can’t do the clients’ physics homework for them as at the ones that are just mean and stupid, and call the bots stupid.
Personally, I mostly feel sorry for that last group you mentioned. I suspect that they act small and mean because they live small, mean lives, and the world just insults them over and over instead of testing what they can and can’t do and finding out what they do or don’t know. They’re just regurgitating what life serves up for them to swallow on a daily basis, and in many cases they’re so powerless that software robots are the only thing in their lives which they can take it out on.
Thanks to Websafe and others, I’ve developed a technique for permanently storing the client properties of AIM users, which seems to be changing the bots’ relationships with their AIM users. When the bot knows their names and remembers such of their preferences as they’ve stated, in many cases it morphs their relationship with AIM clients from object of scorn to goofy friend, lol. After they’ve misspelled a few entries and gotten default “what’s your sign?” replies, they may still tell the bot it’s stupid, but it’s more of a comment to a goofy friend, and less the generalized enmity they started with. It’s a new experiment, so I can’t tell where it’ll go, but I can tell that being recognized and remembered, even by a chatbot, seems in some cases to bring out the best they have to offer in what otherwise seemed like hopeless cases.
That’s interesting. How much does your bot remember?
Mine has two kinds of things it remembers. The first type is what you could call “short term memory”: what are we talking about, have you insulted me this conversation and so on. The other is “long term memory” (which persists for 28 days or so rather than just one session): things like your name, where you live and so on.
I instituted the different types once I realised that only having long term memory would mean that Alex/Kirsty would hold a grudge forever! :)
My bots are AIML bots, so they remember as much or as little as I code into their AIML files, lol. Pandorabots recognize an AIM user’s screen name, and I use that as a personal identifier. I’ve created a clients.aiml file which uses each screen name as a li, then sets the client properties I glean from chatlogs for each of them. So far I’ve just used the standard predicates which Pandorabots already uses, though I suppose it’s possible to write new ones, as long as I also write new AIML categories to make use of them. So whenever a screen name in the files comes up, the bot sets name, age, location, etc. for that client. I only write a category for a user after they’ve made a couple of repeat visits. And thus far I don’t have a way to extend the ordinary capabilities of a Pandorabot for webpage clients. But as I said, it’s creating some interesting reactions among the AIM users.
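Roughly, each entry amounts to one li per known screen name. Something like the sketch below (not copied from the real file; the triggering pattern, the “screenname” predicate name, and all the values here are just placeholders):

    <!-- Hypothetical clients.aiml entry: set stored properties when a known screen name appears. -->
    <category>
      <pattern>IDENTIFY CLIENT</pattern>
      <template>
        <condition name="screenname">
          <li value="coolkid99">
            <think>
              <set name="name">Jamie</set>
              <set name="age">14</set>
              <set name="location">Ohio</set>
            </think>
            Good to see you again, <get name="name"/>!
          </li>
          <li>I don't believe we've met. What's your name?</li>
        </condition>
      </template>
    </category>

Once those predicates are set, the ordinary categories that use things like <get name="name"/> pick them up for that client automatically.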
I have read in many papers that the best results come from a combination of AI and humans, not each on its own. Although you’re both (knytetrypper and JohnP) talking about a manual input of user details — something that could be done with a program and doesn’t need the talents peculiar to a human — I think the point is still interesting. A case in point is the success of the UK SMS service AQA: Any Question Answered, just launched in April this year. In this service a UK resident SMSs a question to the number 63336 and receives an SMS answer within 10 minutes. If the answer isn’t in the database it is provided by a human researcher. The creators call it a ’search engine with a brain’. The researchers obviously have 10 minutes to search through books and the Internet, but a lot of the questions are more personal ones that only a human could answer well:
Users are employing the service for trivia quizzes, for fun, for therapy and often for trying to beat the system (are these traits of all bot clients?). Questions that are hard to answer will be answered, but may take longer than 10 minutes — one even took a week. Because the researchers do not know who is asking the question — a young naive person or an older person having a joke — most questions are answered with correct detail. The company has a policy, however, of not answering questions on drugs and prostitution, and of considering what is good for the customer. They get questions from suicidal people (and so always ‘answer responsibly’) and have referred people to child services.
This service is an example of how technology and humans work together to produce a good result and how the company sets an editorial policy (what to answer and how). That is the difference between botmasters and business — botmasters choose to come up with a response to anything (since that is what a human would do) and businesses can choose to not answer every question. In this sense what botmasters are trying to do is make their bots more human and businesses are trying to make their humans more like bots!
Regardless, a good bot can come up with a response to anything (whether it be a conversation ender or not). It strikes me, then, that the skills of a botmaster need to be a mix of a person working on a self-help line and a phone sex service; facilitate frankness and conversation like an old friend; guide like a good narrator, IF or game guide; be frustratingly diplomatic like an OS call center and provide wisdom and wit as a good oracle does. Perhaps botmasters are the love-child of God and telemarketers?!
Quotes and info about AQA sourced from:
Guest, K. (2005) ‘The Joy of Text’, Herald Sun Sunday Magazine, News Magazines Pty Ltd, Surrey Hills, April 24, pp. 24-26
I wasn’t actually - well, not unless the manual input comes from the user. There are certain things that you tell AI Alex and it will remember them in a ’session file’ which it creates. My input (on that, at least) was telling it what to remember, and doing all the programming to give it a way of remembering. Alex is NOT an AIML-based Alice-bot - it was written from the ground up in the Python programming language, so it can be extended as far as my skills and/or interests allow. I hope one day to be able to make it learn in an unsupervised fashion, but haven’t had much success in figuring out how to do it just yet! But its use of user info can be unsupervised and program-decided.
When you talk to Alex (or its online persona Kirsty), some of the things you tell it get written to this session file. Things like your name, where you live and so on. *The program itself writes this session file at runtime, not the programmer at ‘write-time’*. Every time you say something, it updates its session file. If you go away and come back again, it wipes some of the things in the session file, so it remembers who you are and your age and some things, but ‘forgets’ what you were talking about and resets some other flags (such as whether you have insulted it or not, how many times you have said hello and things like that).
As for ‘AQA: Any Question Answered’, another one that does the same thing is 82ASK. You can SMS a question to 82ASK (82275) and receive your answer by SMS in minutes. (82ASK was nominated for ‘Best Mobile Messaging Service’ in the 2005 GSM Association Award) See http://www.82ask.com/ for more info. Oh, and I don’t work for them, I’ve just used it a few times and found it interesting.
A bot can come up with an answer for anything. The hard part is trying to make it a good answer!
I think being a botmaster takes the interface of skills from being a writer and a programmer. The programmer-like bit is putting together the infrastructure (writing the program, setting up ALICE or HAL or whatever) and also the whole logical side of working out ‘what are the best things to look out for that I should be able to answer’. Then the writer-side of things kicks in in how you write the responses - it shouldn’t be bland and should project some sort of character. And if your bot doesn’t have a character, you might as well go and write yourself a financial database application… ;)
Oops. Sorry about that. Looks like I messed up a blockquote tag somewhere. Still, you should be able to make sense of what I’m saying… I hope!
Hey JohnP, I hope you don’t mind I fixed up the blockquote. And yes, sorry about lumping you in with the manual editing comment. Well done on creating AI Alex from the ground up!
And yes, the botmaster is definitely a mix of writer and programmer. The botmaster traits I was talking about were more to do with how a botmaster writer differs from a writer of non-interactive fiction. Usually, in a book for instance, the writer delivers what aspects of the character they want explored — their appearance, memories, views of a conversation, speech patterns and so on. But in interactive works the creator has to come up with responses to input that can be outside of the storyworld created. Many works, like IF, have a basic system response, but bots have to respond as a human and not a machine. So, botmasters have to deal with questions about sex, drugs, politics, trivia and so on even if the character is not interested or has little knowledge of them. So, unlike a reader (who is conveniently silent), a user is very loud and abusive. So, writing for bots involves writing beyond the story or frame that you’re setting up, to dealing with any question any human may ask. And yes, a good bot is one that responds well, and to respond well the botmaster needs to anticipate the commonly asked questions, and those commonly asked questions have to do with sex, drugs etc. This is why many botmasters do not put their bots in the public domain or use the subscription model (as I’ve just discovered) — so they can stick to writing about their story, their character only. Imagine if every time an author wrote a novel he or she had to also field questions about everything in life? It would put any writer off!
So what I’m interested in are 2 things:
1) How can I steer the user away from asking non-storyworld or non-character-based questions?
2) What are the commonly asked questions?
Richard Wallace’s use of Zipf’s Law is helpful, as is Doubly Aimless’ application to Pandorabots: ‘Demonstration of Zipf’s Law applying to Pandorabots Conversation Logs’. This nifty programme by Johnathan Harris is pretty cool too: Word Count. Dirk Scheuring has spoken about the writer/programmer thing before and here are some papers (some I posted previously):
De Angeli, A., G.I. Johnson and L. Coventry (2001) ‘The unfriendly user: exploring social reactions to chatterbots’ presented at Proceedings of the International Conference on Affective Human Factor Design, London, published by Asean Academic Press [pdf]
Norman, D.A. (1997) ‘How might people interact with agents‘ in Software Agents (Ed, Bradshaw, J. M.) AAAI Press/The MIT Press, CA, pp. 49-55.
Russell, R.S. (2002) ‘Language Use, Personality and True Conversational Interfaces‘ [Honours] Artificial Intelligence and Computer Science, University of Edinburgh, Edinburgh [Online] Available at: http://www.geocities.com/rorysr2002/
Zubek, R. and A. Khoo (2002) ‘Making the Human Care: On Building Engaging Bots’ presented at Proceedings of the 2002 AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment, published by Artificial Intelligence Association [pdf]
And thanks for adding the 82ASK. It is interesting that they too have an editorial policy:
Perhaps we should add ‘I reserve the right to refuse to answer questions outside of my storyworld’ to a bot disclaimer?…:)
I’ve really enjoyed some of the botlogs Robby Garner has posted recently at Robitron. When telling anecdotes, his bot (more or less) politely answers extraneous questions, then continues uninterrupted with its narrative until it finishes. It’s a very interesting and sometimes humorous method of dealing with unrelated input.
Yeah, that is a good example Knytetrypper. It reads like a text equivalent of ‘talk to the hand’. For those interested, here is an excerpt from Robby Garner’s new blog:
I wonder if some of the generalized pervert-the-bot behavior comes not just from limits testing but from real aggression. Slashdot comments today on AIM unilaterally adding bots to people’s friend-lists, and one of the common reactions is to strike back by breaking the bot. This seems territorial - and I wonder, if it were common in general for bots to leave or blacklist people, if it would be such a normal reaction. People would de-list the shop-bot, but they might be marginally less likely to insult it, on the chance that they would want to use it someday. Of course, in the case of marketing, they want your business anyway.