Bot Abuse, Interactive Misuse
Published by Jeremy Douglass May 19th, 2006 in Uncategorized, Poetics, bots, Features, Text Art, HAI, Fictionality.

This April 22, CHI 2006 hosted the second annual “Abuse” workshop on interaction gone bad. Last year, the theme was “The Darker Side of Human-Computer Interaction.” This year’s theme: “Misuse and Abuse of Interactive Technologies.” The first workshop was chaired by Antonella de Angeli, Sheryl Brahnam, and Peter Wallis, who were joined in organizing the second by Alan Dix. Although proceedings were not recorded, PDFs of the initial workshop papers are available from both years.
“Abuse” generates considerable interest here at WRT, where we have often been drawn to the dark side of the text, including Mark Marino’s thoughts a year ago on Perverting the Bot, my series on Frustration in Interactive Media [1,2,3,4,5], and Christy Dena on when ARGs go bad, Caught in the Headlights. This winter at DAC 2005’s HCI workshop “The Theory and Practice of Experience Design,” I also delivered a presentation on “The Experience of Frustration” that, if not focused on bad behavior, certainly maps out some of its preconditions, in particular the failure of expectations.
Bot authorship is in many ways an art of expectation management - on the one hand, an art of expecting and anticipating user behavior, and, on the other hand, an art of crafting the user’s own expectations through framing the bot persona and conversational scene.
How to anticipate user behavior?
Dirk Scheuring recently posted an interesting series on bot pattern matching [1,2,3], beginning with the Task List of Juergen Pirner, author of the chatbot Jabberwock. Pirner’s list of user behaviors is intended to test (and often, break) a bot interface, and parts of the list seem reflective of real practical experience with the generally antagonistic and often abusive flavor of Turing Test commonly conducted by casual internet surfers. A few examples:
- impolite: calling the candidate names (“do you understand me, dimwit”), calling the candidate a machine (“a machine like you”, “what’s up, robot?”)
- annoying: expanded text (“c a n y o u r e a d t h i s”), repeating words (“kill kill dog dog dog dog”), typing nonsense (“dsfdh jkjjh”)
- copy-cat / parrot / echo / mocking
- ignorance: asking counter questions instead of giving answers (“Did you?”, “Can you?”, “Such as?”)
Scheuring goes on to discuss how Zipf’s law and statistical data both indicate that most bot pattern matches fail or default, and what this suggests for strategies of pattern matching design. In the face of expected failure and expected user abuse, many designs come down to anticipation, through either statistical optimization or the massive aggregation of patterns - for example, in the commercial “Superbot” data available from A.L.I.C.E. (“Top10k Inputs… Top10k AIML Patterns”), or in Catty, the “exercise in artificial futility” that runs pattern matches against internet search engine results.
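To make that strategy concrete, here is a minimal sketch in Python - not Pirner’s, Scheuring’s, or A.L.I.C.E.’s actual code - of a matcher built on the assumption that most inputs will match nothing: a short list of patterns ordered by how often real users are expected to hit them, and a rotating default for everything else. The patterns and replies are invented for illustration.

```python
# Minimal sketch of frequency-ordered pattern matching with a default fallback.
# All patterns and canned replies below are illustrative, not from any real bot.
import re

# Hypothetical pattern list, ordered roughly by expected input frequency
# (Zipf-style): a handful of common patterns do most of the work.
PATTERNS = [
    (re.compile(r"\b(are you|a) (a )?(machine|robot|bot)\b", re.I),
     "Would it matter to you if I were?"),
    (re.compile(r"\b(hello|hi|hey)\b", re.I),
     "Hello. What would you like to talk about?"),
    (re.compile(r"\b(dimwit|stupid|idiot)\b", re.I),
     "Name-calling won't get us very far."),
]

# The expected case is no match at all, so the defaults carry the conversation.
DEFAULTS = [
    "Tell me more.",
    "Go on.",
    "I'm not sure I follow. Could you put that another way?",
]

def respond(user_input: str, turn: int) -> str:
    """Return the first matching canned reply, else a rotating default."""
    for pattern, reply in PATTERNS:
        if pattern.search(user_input):
            return reply
    return DEFAULTS[turn % len(DEFAULTS)]
```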
Andrew Stern of GTxA has discussed both the emergence of trying-to-break-it behavior towards bots and, more generally, the (im)moral treatment of virtual characters, a broader idea of acting-out in virtual spaces that reminds me of the pop phenomenon of Sims-torture two years ago, what Boing Boing termed “Pulling the wings off your Sims.” Stern’s idea is that the basic unreality of virtual interlocutors may trump whatever behaviors they do or don’t exhibit, and this disturbingly implies that problems of abuse and misuse may lie beyond the ability of HCI design to remedy. Even if we were to encounter a brilliant, eloquent, and despondent Hamlet on some holodeck, might we be inclined to abuse him merely because of his virtual status?
Perhaps Hamlet is not the best virtual example, as the urge to abuse witty sulkers is common enough in normal society. Indeed, the social science and psychology juries may still be out on the human variants of bot abuse, including message board trolling and MMOG griefing. Can we disentangle the two kinds of abuse, one kind arising when the interface renders others inhuman, the other kind only targeted at human (or sufficiently human-like) subjects? Is antagonism towards bots a sign of their nigh-inevitable failure to pass as human, or their disturbingly easy success at being human enough?
The categorical abuse of virtual characters is not the whole picture. It is possible to manage user expectation. This framing is what I find most interesting, because aesthetics enter strongly into it. Who is the bot, and what is the conversational situation, such that certain kinds of conversational failure aren’t failures, and certain kinds of abuse isn’t abuse? How are persona and scene an act of framing that can change expectations?
How to manage user expectation?
The classic bot act of framing is of course ELIZA’s framing as a Rogerian psychotherapist, with numerous generic responses implying not so much incomprehension as a layer of clinical detachment appropriate to the Rogerian language game. The analysand types “SAY FUCK” and her therapist jots something down on a clipboard before asking “WHY DO YOU WANT ME TO SAY FUCK?” The implicit topic of an ELIZA session is not just what-the-user-meant, but whatever-the-user-said - the perfect topic for a pattern matching program. Previous discussions on WRT of chatbot framing through persona have included Christy Dena on Ron Ingram’s Buddhabot, and myself on The Puppetpresident bot.
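A toy reflection rule - far simpler than Weizenbaum’s actual script, and offered only as a sketch of the technique - shows how whatever-the-user-said becomes the topic:

```python
# Toy ELIZA-style reflection: the bot's "topic" is whatever the user just
# typed, recycled inside a clinically detached question. Illustrative only.
import re

RULES = [
    (re.compile(r"^say (.+)$", re.I), "WHY DO YOU WANT ME TO SAY {0}?"),
    (re.compile(r"^i am (.+)$", re.I), "HOW LONG HAVE YOU BEEN {0}?"),
    (re.compile(r"^i feel (.+)$", re.I), "WHY DO YOU FEEL {0}?"),
]

def reflect(user_input: str) -> str:
    """Turn the user's own words back into a question, or prompt for more."""
    for pattern, template in RULES:
        m = pattern.match(user_input.strip())
        if m:
            return template.format(m.group(1).upper().rstrip(".!?"))
    return "PLEASE GO ON."

# reflect("SAY FUCK") -> "WHY DO YOU WANT ME TO SAY FUCK?"
```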
Interestingly, the therapist, the deity, and the president are all figures that presume an unequal power relationship with the user. Their authority grants them both a degree of detachment and the radical ability (which they may or may not use) to dictate interpretation and declare the topic of conversation. This strategy, coherence-by-fiat, is reminiscent of the seance techniques critically debunked by illusionist and escapologist Harry Houdini. When a difficult question arrives, who can compel the dead to stay “on topic”?
Like authoritative chatbots that speak from above the user, or the ‘crazy’ or ‘wacky’ chatbots that presumably speak from below, seance-bots (are there any?) would speak from beyond - all of them speaking from anywhere but here, the peer position. As with the ghost of Maud Shade communing with Hazel in the barn of Vladimir Nabokov’s novel Pale Fire (“pada ata lane pad not ogo old wart alan ther tale feur far rant lant tal told”), messages could be passed to a seance-bot, but garbling and speaking at cross purposes would be an expected part of the interaction. For bot authors of commercial expert systems, expected difficulty may hold scant interest. But for literary and entertainment bot authors (IF, ARGs and the like) this strategy looks like the future.
How does writing bots of unequal power relate back to dealing with abuse? In general, I think the ELIZA tradition of responding-to-everything is a mistake. Bot abuse should not be consequence-free, as the total absence of a social contract is an unacceptable deviation from realistic dialog. Abuse (not Pirner’s whole list, but the more narrow category of socially unacceptable behavior) is such a huge category of interaction that bots need a realistic way of responding - at a minimum the options of being silent or “ignoring” the behavior, of confirmation (“Are you [TYPE OF ABUSE]?”), and of blocking bad users.
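As a sketch of what that minimum might look like - the tiers and the strike thresholds are my own illustration, not any existing bot’s policy:

```python
# Sketch of a tiered response policy: ignore, then confirm, then block.
# How abuse is detected, and what blocking does, are left abstract; the
# point is that abuse has graduated consequences rather than none.
from enum import Enum

class Response(Enum):
    IGNORE = "ignore"    # stay silent, as a person might
    CONFIRM = "confirm"  # "Are you swearing at me?"
    BLOCK = "block"      # stop talking to this user entirely

def abuse_policy(strikes: int) -> Response:
    """Map a user's running count of abusive inputs to a response tier."""
    if strikes <= 1:
        return Response.IGNORE
    if strikes == 2:
        return Response.CONFIRM
    return Response.BLOCK
```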
Including an automatic blocking feature in the base technology layer of most web based and chat based bots would be a huge coup for changing our online culture of bot abuse. Imagine if, by convention, most online chatbots were expected to quickly block or ignore the IPs or chat names of abusers who either repeatedly matched common abuse patterns or admitted to abuse (“Are you swearing at me?” “Yes”). Suppose we routinely and by default put some teeth into a bot’s requests for civil dialog - say an automatic, unmoderated 1-hour IP block. Conventional wisdom might change, from “bots are funny to mock” to “bots are funny… but don’t mock them, or they block you.” That might be all it took to change the bot world.
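A minimal sketch of that convention, assuming the bot framework can see the user’s IP or chat name; the one-hour duration and the admission trigger come from the paragraph above, and everything else is invented for illustration:

```python
# Sketch of an automatic, unmoderated, time-limited block keyed on IP or
# chat name. Durations, replies, and function names are illustrative.
import time

BLOCK_SECONDS = 60 * 60  # the proposed 1-hour block
_blocklist = {}          # user id (IP or chat name) -> block expiry (epoch seconds)

def block(user_id):
    """Start (or restart) a one-hour block for this user."""
    _blocklist[user_id] = time.time() + BLOCK_SECONDS

def is_blocked(user_id):
    """True while the block is in force; expired blocks are forgotten."""
    expiry = _blocklist.get(user_id)
    if expiry is None:
        return False
    if time.time() >= expiry:
        del _blocklist[user_id]  # block has lapsed; forgive and forget
        return False
    return True

def handle_confirmation(user_id, answer):
    """After asking 'Are you swearing at me?', block on an admission."""
    if answer.strip().lower() in ("yes", "yeah", "yep"):
        block(user_id)
        return "Then we're done talking for a while."
    return "All right. Let's keep it civil."
```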