I wonder if some of the generalized pervert-the-bot behavior comes not just from limits-testing but from real aggression. Slashdot comments today discuss AIM unilaterally adding bots to people's friend-lists, and one of the common reactions is to strike back by breaking the bot. This seems territorial - and I wonder, if it were common in general for bots to leave or blacklist people, whether it would be such a normal reaction. People would de-list the shop-bot, but they might be marginally less likely to insult it, on the chance that they would want to use it someday. Of course, in the case of marketing, they want your business anyway.
Yeah, that is a good example, Knytetrypper. It reads like a text equivalent of 'talk to the hand'. For those interested, here is an excerpt from Robby Garner's new blog:
0 : I know what it is
>: In a moment, everything slowed down, and he pulled the hair back
from my company ID shot 7 months ago.
0 : you’re silly
>: To me, men and women were always in a living room doors onto the
sidewalk carried him past fountains and trees were waving about
him wildly like they were a triangle again.
0 : what day is this
>: Today is Monday Mar 28, 2005 12:15:47 PM
0 : thanks
>: (twenty lot)
0 : what does that mean?
>: I pressed play and could hear gunshots outside and I leapt out
the exit door and out into the office that day with new enthusiasm.
0 : goodbye
>: ta ta
I’ve really enjoyed some of the botlogs Robby Garner has posted recently at Robitron. When telling anecdotes, his bot (more or less) politely answers extraneous questions, then continues uninterrupted with its narrative until it finishes. It’s a very interesting and sometimes humorous method of dealing with unrelated input.
Hey JohnP, I hope you don’t mind I fixed up the blockquote. And yes, sorry about lumping you in with the manual editing comment. Well done on creating AI Alex from the ground up!
And yes, the botmaster is definitely a mix of writer and programmer. The botmaster traits I was talking about were more to do with how a botmaster differs from a writer of non-interactive fiction. In a book, for instance, the writer delivers only the aspects of the character they want explored: their appearance, memories, views of a conversation, speech patterns and so on. But in interactive works the creator has to come up with responses to input that can fall outside the storyworld they have created. Many works, like IF, have a basic system response, but bots have to respond as a human, not a machine. So botmasters have to deal with questions about sex, drugs, politics, trivia and so on, even if the character is not interested in them or knows little about them. Unlike a reader (who is conveniently silent), a user is very loud and often abusive. Writing for bots means writing beyond the story or frame you're setting up, to deal with any question any human may ask. And yes, a good bot is one that responds well, and to respond well the botmaster needs to anticipate the commonly asked questions, and those commonly asked questions have to do with sex, drugs and the like. This is why many botmasters do not put their bots in the public domain, or use the subscription model (as I've just discovered): so they can stick to writing about their story and their character only. Imagine if every time an author wrote a novel he or she also had to field questions about everything in life. It would put any writer off!
So what I’m interested in are 2 things:
1) How can I steer the user away from asking non-storyworld or non-character-based questions?
2) What are the commonly asked questions?
Richard Wallace's use of Zipf's Law is helpful, as is Doubly Aimless' application to Pandorabots: 'Demonstration of Zipf's Law applying to Pandorabots Conversation Logs'. This nifty programme by Johnathan Harris is pretty cool too: Word Count. Dirk Scheuring has spoken about the writer/programmer thing before, and here are some papers (some I posted previously):
De Angeli, A., G.I. Johnson and L. Coventry (2001) 'The unfriendly user: exploring social reactions to chatterbots' presented at Proceedings of the International Conference on Affective Human Factor Design, London, published by Asean Academic Press [pdf]
Norman, D.A. (1997) ‘How might people interact with agents‘ in Software Agents (Ed, Bradshaw, J. M.) AAAI Press/The MIT Press, CA, pp. 49-55.
Russell, R.S. (2002) ‘Language Use, Personality and True Conversational Interfaces‘ [Honours] Artificial Intelligence and Computer Science, University of Edinburgh, Edinburgh [Online] Available at: http://www.geocities.com/rorysr2002/
Zubek, R. and A. Khoo (2002) ‘Making the Human Care: On Building Engaging Bots’ presented at Proceedings of the 2002 AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment, published by Artificial Intelligence Association [pdf]
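For anyone curious what the Zipf's Law analysis above looks like in practice, here is a minimal sketch of checking it against a chat log: count word frequencies, rank them, and see whether rank times frequency stays roughly constant, as Zipf's Law predicts. The log text here is a made-up stand-in, not a real Pandorabots transcript.

```python
from collections import Counter

def zipf_table(text, top=5):
    """Rank words by frequency and report rank * frequency,
    which Zipf's Law predicts is roughly constant."""
    counts = Counter(text.lower().split())
    return [(rank, word, freq, rank * freq)
            for rank, (word, freq) in enumerate(counts.most_common(top), start=1)]

# Hypothetical log snippet; a real check would use full conversation logs.
log = "you are a bot you are silly you said a thing a bot said"
for rank, word, freq, product in zipf_table(log):
    print(rank, word, freq, product)
```

On a tiny sample like this the rank-frequency products are noisy; the Zipfian pattern only shows up clearly over thousands of lines of logs, which is exactly why the commonly asked questions dominate.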
And thanks for adding the 82ASK. It is interesting that they too have an editorial policy:
We do however reserve the right to refuse to answer questions if we believe it to be inappropriate, outside the scope of our services, defamatory, illegal, offensive or in any way anti-social.
Perhaps we should add 'I reserve the right to refuse to answer questions outside of my storyworld' to a bot disclaimer?…:)
Oops. Sorry about that. Looks like I messed up a blockquote tag somewhere. Still, you should be able to make sense of what I’m saying… I hope!