To take a turn for the Turing - when we become convinced that human writings were machine generated, what does that mean for us socially or artistically? Consider this post:

“Bondage-chICK” is a young, disturbed girl (she’s a “cutter” who cuts herself to feel better), who stumbled across a free developers’ journal service and started using it for her personal diary. No problem, except that the developers who stumbled across her journal assumed that she was a perl bot…

- “Disturbed diarist using perl blog site mistaken for bot.” (Boing Boing, July 30, 2004)

The commentator mistook (or pretended to mistake) bondage-chICK’s blog for the output of a program, responding that the writing didn’t look convincingly human enough:

If this is autogenerated, you need to tweak it so that the doubled+transposed letter mistake doesn’t happen so often, and introduce some more naturalistic errors. If it’s not autogenerated, you’re really sad and about 18 months behind the curve.
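The commenter's proposed "tweak" is easy to picture concretely. Here is a minimal sketch in Python of injecting doubled and transposed letters at a tunable rate; the function name and error model are my own invention for illustration, not drawn from any actual journal generator:

```python
import random

def inject_typos(text: str, rate: float = 0.02, seed: int = 0) -> str:
    """Introduce 'doubled + transposed letter' mistakes at a tunable rate.

    `rate` is the per-character probability of an error; lowering it is
    exactly the 'tweak' the commenter suggests for more convincing output.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        c = chars[i]
        if c.isalpha() and rng.random() < rate:
            if i + 1 < len(chars) and chars[i + 1].isalpha() and rng.random() < 0.5:
                # transpose two adjacent letters: "teh" for "the"
                out.extend([chars[i + 1], c])
                i += 2
                continue
            # double the letter: "thhe" for "the"
            out.extend([c, c])
            i += 1
            continue
        out.append(c)
        i += 1
    return "".join(out)

print(inject_typos("the quick brown fox jumps over the lazy dog", rate=0.1))
```

Setting `rate` to zero removes the tell entirely; the harder problem, as the commenter notes, is making the remaining errors naturalistic rather than uniform.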

The mistake may have been partly due to the social context, as the site is dedicated to programming in a powerful scripting language. It may also have been due to style, particularly the similarity of the prose to the output of popular parodies such as the Brunching Shuttlecocks’ Apathetic Journal Entry Generator, which had appeared about 18 months before, around April 2003.

The author later moved, setting up a livejournal account as afi_atreyufan and then branching out more recently to a myspace account as andrea. It seems clear that this person is the same blogger, and that she is neither a perl bot nor the author of a perl bot. What interests me is not just the existential angst that Turing Test misfires engender, but the practical lessons. How does one write like a bot - how does one avoid it - and what does the potential for confusion do to us when we are writers in a social space such as a weblog or IM? These spaces are also Turing Test-friendly environments.

It seems to me that there might be an emerging cycle, even a synergy, between chatbots and blog/IM style. Bots excel when asked to imitate short, chunked, heavily stylized text, which is where most IM conversations and many blogs tend. At the same time, the presence of scripting systems already integrated into many webservers and chat clients means that scripting experiments such as bots tend to happen there more than in, say, email or SMS. While bots are doubtless the parasite in this emerging configuration of text styles (constantly being programmed to imitate the latest 1337-speak, etc.), might the relationship be symbiotic? And might people’s need to distinguish their human-ness from scripting be part of what drives dynamic change in text style?
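How little machinery that kind of imitation takes is worth seeing. A toy sketch in the spirit of the parody generators mentioned above - all templates and word lists here are invented for illustration, not taken from the actual Apathetic Journal Entry Generator:

```python
import random

# Invented templates and word lists; short, chunked, heavily stylized
# text is exactly the register such template bots imitate well.
OPENERS = ["ugh.", "whatever.", "so tired.", "dear diary,"]
EVENTS = ["went to school", "stayed in my room", "listened to music",
          "didn't talk to anyone"]
MOODS = ["nobody understands me", "it doesn't matter anyway",
         "i can't even explain it"]

def journal_entry(rng: random.Random) -> str:
    # One fill-in-the-blanks sentence pattern is enough for a
    # plausible chunk of diary-styled text.
    return "{} today i {}. {}.".format(
        rng.choice(OPENERS), rng.choice(EVENTS), rng.choice(MOODS))

rng = random.Random(2004)
print(journal_entry(rng))
```

The point is not that such output is good, but that the target register is so compressed and conventionalized that a few templates cover much of it - which is what makes the misidentification above plausible in the first place.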

2 Responses to “Mistaken for Machine: Generated vs. Authored Text”

  1. Christy Dena

    Interesting situation. Poor girl. Isn’t that the worst insult?: “Your thoughts are akin to that of a freely-downloadable simple-reflex program.” But really, how many people do react to everyday events with robot precision? Many. That is one of the appeals of bots for me: they perform the self-obsessed, unchanging, one-dimensional, dependent person so well (among other characters). A character from the code out. Simple responses to life and random, unclear and disjointed thoughts are not the domain of bots alone. So, back to your pertinent questions:

    How does one write like a bot - how does one avoid it?

    What does the potential for confusion do to us when we are writers in a social space such as a weblog or IM?

    I don’t think this is a problem for IM because of the “ping factor”: users can tell if a bot is online from the number of “pings” [there’s a paper on this but I can’t recall the title or author, or find it right now].
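The "ping factor" intuition can be sketched as a timing heuristic: a script tends to reply with near-constant latency, while human typing times vary widely. A hypothetical detector - the function name and thresholds are arbitrary choices of mine, for illustration only:

```python
from statistics import mean, pstdev

def looks_scripted(intervals_s: list) -> bool:
    """Flag a sender whose inter-message intervals (in seconds) are
    suspiciously fast or suspiciously uniform - the 'ping factor'."""
    if len(intervals_s) < 3:
        return False  # not enough evidence either way
    avg = mean(intervals_s)
    # Replies averaging under a second, or with very low relative
    # variance, look machine-generated under this (toy) heuristic.
    return avg < 1.0 or pstdev(intervals_s) / avg < 0.1

print(looks_scripted([0.4, 0.41, 0.39, 0.4]))   # uniform and fast: True
print(looks_scripted([2.0, 11.5, 4.2, 30.0]))   # human-like variance: False
```

Real systems would need far more than this, but the comment's point survives the toy version: in IM, timing is a channel of evidence that plain text on a blog page does not carry.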

    The question of interest to me as a bot master is that of “how does one avoid writing like a bot?”. By “writing” I don’t mean the natural language in isolation but also the construction of the pattern-matching or whatever system I employ. A fascinating paper by Phoebe Sengers parallels current agent behaviour with that of schizophrenic patients. Agents, she argues, exhibit “fragmented, depersonalized behaviour” (427). The “fundamental problem for both schizophrenic patients and agents is that observers have difficulty understanding them narratively” (ibid.). Sengers blames this agent state on the methodologies employed by designers:

    Generally speaking, an agent’s behavior consists of short dalliances in individual behaviors with abrupt changes between them. It is this overall defective nature of agent behavior, caused by underintegration of behavioral units, that I term ’schizophrenia’ and address here.

    In order to facilitate comprehension, then, Sengers proposes “socially situated AI”, where the social and cultural environment of the agent are design factors. She outlines three principles that distinguish this approach from others:

    • 1. An agent can only be evaluated with respect to its environment, which includes not only the objects with which it interacts, but also the creators and observers of the agent.
    • 2. An agent’s design should focus, not on the agent itself, but on the dynamics of that agent with respect to its physical and social environments.
    • 3. An agent is a representation. Artificial agents are a mirror of their creators’ understanding of what it means to be at once mechanical and human, intelligent, alive (what cultural theorists call a subject). This means in a fundamental sense that, in our agents, we are not creating life but representing it, in ways that make sense to us, given our specific cultural backgrounds.


    [Sengers’ cartoon from her paper]

    Sengers has therefore developed a “narrative agent architecture”, the Expressivator, based on narrative psychology and AI theory, to actuate these principles. Its principles are explained as:

    • 1. Context sensitivity and negotiability: […] In order to design narratively expressive agents, designers must respect (rather than attempt to override) the context- and audience-dependency of narrative comprehension.
    • 2. Intentional state entailment: […] People do not want to know just the events that occur in the narrative, but also the motivations, thoughts and feelings behind them. Supporting narrative comprehension means communicating clearly not just what the agent does, but its reason for doing it.
    • 3. Diachronicity: […] In a narrative, events do not happen randomly and independently; they are connected to and affect one another. The transitions previously mentioned work to knit behaviors into a coherent sequence.
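The diachronicity principle - knitting behaviors together with explicit transitions so observers can read the sequence narratively - can be sketched in a few lines. This is my own toy illustration, not Sengers' actual Expressivator code; the behavior names and transition table are invented:

```python
# Invented transition table: what the agent does between two behaviors
# so the change reads as motivated rather than abrupt.
TRANSITIONS = {
    ("sulk", "greet"): "notice visitor, look up slowly",
    ("greet", "sulk"): "sigh, turn away",
}

def run(behaviors: list) -> list:
    """Emit each behavior, inserting a transition at every abrupt
    change - the 'schizophrenic' version would just jump between them."""
    trace = []
    for prev, nxt in zip([None] + behaviors, behaviors):
        if prev is not None and prev != nxt:
            trace.append(TRANSITIONS.get((prev, nxt),
                                         "shift from {} to {}".format(prev, nxt)))
        trace.append(nxt)
    return trace

print(run(["sulk", "greet", "sulk"]))
```

The underintegration Sengers describes is what you get by deleting the `TRANSITIONS` lookup: the same behavioral units, but no connective tissue for an observer to narrate.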

    So, just as you observed that “Bondage-chICK” was inappropriate for the perl programmer environment, so too, I must say, would be a number of other non-perl people. I guess any of us could be a bot in the wrong environment. Is there a place where bots are real, then?!

    Sengers, P. (2002) ‘Schizophrenia and Narrative in Artificial Agents’, Leonardo, Vol. 35, No. 4, pp. 427–431 [Online] Available at: [subscription only]

  2. #social_victim_meat#

    #! /usr/bin/dreamaul
    #[dream. exe.cut(my.cog.knitive.veins)ion. knot.via. the cere(s).bellum but by ur.(7)hell
    #this dream

    _de.b[r]ug[er], pls_


    Part1 - Part2 - Part3 - Part4 - Part5 - Part6 - Part7 - Single ASoC_Load

    >Pique::Single ASoC_Load_



    Top Document: Maulage- F[D]re[am]quently Asked Questions

    [ Cognitive_vein_dream_imagery F[D]AQs | Search | Web FAQs | Documents | Index ]



    1.1) _how.du.i.remove.dreamSubConscEmbed??_

    Figure [in+blood.x-plosive].this.out[[l]it[erary]chy.this]

