Mistaken for Machine: Generated vs. Authored Text
Published by Jeremy Douglass February 17th, 2005 in Uncategorized, generators, Poetics, HAI. To take a turn for the Turing - when we become convinced that human writings were machine-generated, what does that mean for us socially or artistically? Consider this post:
“Bondage-chICK” is a young, disturbed girl (she’s a “cutter” who cuts herself to feel better), who stumbled across a free developers’ journal service offered by perl.org and started using it for her personal diary. No problem, except that the developers who stumbled across her journal assumed that she was a perl bot…
- “Disturbed diarist using perl blog site mistaken for bot.” (Boing Boing, July 30, 2004)
The commentator mistook (or pretended to mistake) bondage-chICK’s blog for the output of a program, responding that the writing didn’t look convincingly human enough.
If this is autogenerated, you need to tweak it so that the doubled+transposed letter mistake doesn’t happen so often, and introduce some more naturalistic errors. If it’s not autogenerated, you’re really sad and about 18 months behind the curve.
The mistake may have been partly due to the social context, as perl.org is a website dedicated to programming in a powerful scripting language. It may also have been due to style, particularly to the similarity of the prose to the output of popular parodies such as Brunching Shuttlecock’s Apathetic Journal Entry Generator, which had appeared about 18 months before - around April 2003.
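The commenter’s gripe about the “doubled+transposed letter mistake” gestures at a real design problem for anyone building such a generator: mechanical error models read as mechanical. A minimal sketch of the two error models in Python - the adjacency map and the word choices here are invented for illustration, not taken from any real generator:

```python
import random

def transpose_typo(word, rng):
    """The 'doubled+transposed' style of error the commenter mocks:
    swap two adjacent letters."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

# Keyboard-adjacency map (a tiny excerpt; a real one covers the layout).
ADJACENT = {"a": "qs", "e": "wr", "o": "ip", "t": "ry", "n": "bm"}

def naturalistic_typo(word, rng):
    """A more human-looking error: hit a neighbouring key instead."""
    candidates = [i for i, c in enumerate(word) if c in ADJACENT]
    if not candidates:
        return word
    i = rng.choice(candidates)
    return word[:i] + rng.choice(ADJACENT[word[i]]) + word[i + 1:]

rng = random.Random(0)
print(transpose_typo("generated", rng))
print(naturalistic_typo("generated", rng))
```

The transposition model only ever shuffles letters the writer already typed; the adjacency model produces the kind of slip a hand on a keyboard actually makes - which is presumably what the commenter meant by “more naturalistic errors.”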
The author later moved, setting up a livejournal account as afi_atreyufan and then branching out more recently to a myspace account as andrea. It seems clear that this person is the same blogger, and that she is neither a perl bot nor the author of a perl bot. What interests me is not just the existential angst that Turing Test misfires engender, but the practical lessons. How does one write like a bot - how does one avoid it - and what does the potential for confusion do to us when we are writers in a social space such as a weblog or IM? These spaces are also Turing Test-friendly environments.
It seems to me that there might be an emerging cycle and even synergy between chatbots and blog/IM style. Bots excel when asked to imitate short, chunk-style, heavily stylized text, which is where most IM conversations and many blogs tend. At the same time, the presence of scripting systems already integrated into many webservers and chat clients means that more scripting experiments such as bots tend to happen there - more so than in, say, email or SMS. While bots are doubtless the parasite in this emerging configuration of text styles (constantly being programmed to imitate the latest 1337-speak etc.), might the relationship be symbiotic? And might people’s need to distinguish their human-ness from scripting be part of what drives dynamic change in text style?
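To see why this register is so easy to fake, consider how little machinery a parody generator in the Apathetic Journal Entry vein actually needs. The templates and slot values below are invented for illustration, not taken from any real generator:

```python
import random

# Short, chunked, heavily stylized text is the easiest register for a
# script to fake: a handful of templates with interchangeable slots.
TEMPLATES = [
    "ugh. {event} again. {reaction}",
    "so {event} happened today. {reaction} whatever.",
    "{reaction} i can't even deal with {event} right now.",
]
EVENTS = ["school", "work", "that meeting", "the bus being late"]
REACTIONS = ["i'm so tired.", "nobody gets it.", "meh.", "why do i bother."]

def journal_entry(rng=random):
    """Fill one randomly chosen template with random slot values."""
    return rng.choice(TEMPLATES).format(
        event=rng.choice(EVENTS),
        reaction=rng.choice(REACTIONS),
    )

print(journal_entry())
```

A dozen lines suffice because the target style already suppresses the long-range coherence that would expose a generator.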
Interesting situation. Poor girl. Isn’t that the worst insult: “Your thoughts are akin to that of a freely-downloadable simple-reflex program.” But really, how many people do react to everyday events with robot precision? Many. That is one of the appeals of bots for me: they perform the self-obsessed, unchanging, one-dimensional, dependent person so well (among other characters). A character from the code out. Simple responses to life and random, unclear and disjointed thoughts are not the domain of bots alone. So, back to your pertinent questions:
I don’t think this is a problem for IM because of the ping factor - users can tell if a bot is online because of the amount of “pings” [there’s a paper on this but I can’t recall the title or author, or find it right now…].
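One way to read the “ping factor” is as a timing heuristic: scripts tend to reply with machine-regular delays, humans erratically. A sketch of that intuition in Python, with an illustrative (not empirical) threshold:

```python
import statistics

def looks_scripted(reply_delays, cv_threshold=0.15):
    """Heuristic: a coefficient of variation (stdev/mean) of reply
    delays below the threshold suggests machine-regular timing.
    The threshold is an illustrative guess, not an empirical value."""
    mean = statistics.mean(reply_delays)
    if mean == 0:
        return True  # instantaneous replies: almost certainly a script
    cv = statistics.stdev(reply_delays) / mean
    return cv < cv_threshold

bot_delays = [0.51, 0.50, 0.52, 0.49, 0.50]   # seconds between messages
human_delays = [2.1, 14.0, 5.5, 31.0, 8.2]

print(looks_scripted(bot_delays))    # → True
print(looks_scripted(human_delays))  # → False
```

Of course a bot author can defeat this by adding random sleeps, which is the same arms race the commenter describes for prose style.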
The question of interest to me as a bot master is that of “how does one avoid writing like a bot?”. By “writing” I don’t mean the natural language in isolation but also the construction of the pattern-matching or whatever system I employ. A fascinating paper by Phoebe Sengers parallels current agent behaviour with that of schizophrenic patients. Agents, she argues, exhibit “fragmented, depersonalized behaviour” (427). The “fundamental problem for both schizophrenic patients and agents is that observers have difficulty understanding them narratively” (ibid.). Sengers blames this agent state on the methodologies employed by designers:
In order to facilitate comprehension, then, Sengers proposes “socially situated AI”, where the social and cultural environment of the agent are design factors. She outlines three principles that distinguish this approach from others:
(430)
Sengers has therefore developed a “narrative agent architecture” based on narrative psychology and AI theory, the Expressivator, to actuate these principles. They are explained as:
So, just as you observed that “Bondage-chICK” seemed inappropriate for the perl programmer environment, so too, I must say, would a number of other non-perl people. I guess any of us could be a bot in the wrong environment. Is there a place where bots are real then?!
Sengers, P. (2002) ‘Schizophrenia and Narrative in Artificial Agents’, Leonardo, Vol. 35, No. 4, pp. 427-431 [Online] Available at: http://muse.uq.edu.au.mate.lib.unimelb.edu.au/journals/leonardo/v035/35.4sengers.pdf [subscription only]
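The “pattern-matching or whatever system” mentioned above is, in the classic ELIZA lineage, little more than a list of (pattern, template) rules tried in order. A minimal Python sketch with invented rules - not any particular bot’s code:

```python
import re

# ELIZA-style rule set: (compiled pattern, response template) pairs.
# The rules here are invented for illustration.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),  "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),    "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
DEFAULT = "Tell me more."

def respond(utterance):
    """Return the response for the first matching rule.
    The rigidity of this loop - always the same template for the
    same trigger - is the 'fragmented, depersonalized' behaviour
    that gives bots away."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("i feel invisible"))  # → Why do you feel invisible?
print(respond("nice weather"))      # → Tell me more.
```

Seen from this angle, “avoiding writing like a bot” means avoiding exactly this one-trigger-one-response determinism, whether the writer is a script or a person.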
#! /usr/bin/dreamaul
#[dream. exe.cut(my.cog.knitive.veins)ion. knot.via. the cere(s).bellum but by ur.(7)hell
#this dream can.knot.be.closed
_de.b[r]ug[er], pls_
–
_______________________________________________________
Part1 - Part2 - Part3 - Part4 - Part5 - Part6 - Part7 - Single ASoC_Load
_______________________________________________________
>Pique::Single ASoC_Load_
_
_
Top Document: Maulage- F[D]re[am]quently Asked Questions
[ Cognitive_vein_dream_imagery F[D]AQs | Search | Web FAQs | Documents | Index ]
———————————————————————————————–
_How.do.I.Remove.DreamSubConscEmbed??_
———————————————————————————————–
1.1) _how.du.i.remove.dreamSubConscEmbed??_
Figure [in+blood.x-plosive].this.out[riddle.me.with.[l]it[erary]chy.this]
Some.ebony.linear.ash.h[m]ea[ning]d.imagery.will.stick
Rash.urself.[with]in.buttered_an[g.lee]xiety
Way.2.vividly.named.2.file.under.>>stop_that!file