#[dream. exe.cut(my.cog.knitive.veins)ion. knot.via. the cere(s).bellum but by ur.(7)hell
#this dream can.knot.be.closed
Interesting situation. Poor girl. Isn't that the worst insult: "Your thoughts are akin to those of a freely-downloadable simple-reflex program"? But really, how many people react to everyday events with robot precision? Many. That is one of the appeals of bots for me: they perform the self-obsessed, unchanging, one-dimensional, dependent person so well (among other characters). A character from the code out. Simple responses to life and random, unclear and disjointed thoughts are not the domain of bots alone. So, back to your pertinent questions:
How does one write like a bot, and how does one avoid it?
What does the potential for confusion do to us when we are writers in a social space such as a weblog or IM?
I don't think this is a problem for IM because of the ping factor: users can tell if a bot is online from the amount of "pings" [there's a paper on this, but I can't recall the title or author or find it right now].
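A rough illustration of that "ping factor": a bot tends to reply at machine-regular intervals, while human reply times wander. This is only a toy sketch of the intuition, not the method from the paper alluded to above; the threshold and the use of the coefficient of variation are my own assumptions.

```python
from statistics import pstdev

def looks_like_a_bot(reply_delays, jitter_threshold=0.25):
    """Flag a chat participant whose reply delays (in seconds) are
    suspiciously uniform -- low jitter suggests scripted responses.
    (Hypothetical heuristic, not from the cited literature.)"""
    if len(reply_delays) < 3:
        return False  # not enough evidence either way
    mean = sum(reply_delays) / len(reply_delays)
    jitter = pstdev(reply_delays) / mean  # coefficient of variation
    return jitter < jitter_threshold

# A bot replying every 2 seconds, almost exactly:
print(looks_like_a_bot([2.0, 2.01, 1.99, 2.0]))   # True
# A human, whose delays vary wildly:
print(looks_like_a_bot([1.2, 8.5, 0.9, 15.3]))    # False
```

In other words: precision itself is the tell, which is exactly the "robot precision" joke above turned into arithmetic.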
The question of interest to me as a bot master is that of "how does one avoid writing like a bot?". By "writing" I don't mean the natural language in isolation but also the construction of the pattern-matching or whatever system I employ. A fascinating paper by Phoebe Sengers parallels current agent behaviour with that of schizophrenic patients. Agents, she argues, exhibit "fragmented, depersonalized behaviour" (427). The "fundamental problem for both schizophrenic patients and agents is that observers have difficulty understanding them narratively" (ibid.). Sengers blames this agent state on the methodologies employed by designers:
Generally speaking, an agent's behavior consists of short dalliances in individual behaviors with abrupt changes between them. It is this overall defective nature of agent behavior, caused by underintegration of behavioral units, that I term 'schizophrenia' and address here.
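That "underintegration" can be caricatured in a few lines: a behavior-based agent that re-decides from scratch every tick, with no memory and no connecting tissue between one behavior and the next. A toy sketch only (the behavior names are invented, and this is not Sengers' actual architecture):

```python
import random

BEHAVIORS = ["greet", "wander", "sulk", "lecture"]

def underintegrated_agent(ticks, seed=0):
    """Pick a behavior independently each tick: no memory, no
    transitions -- the 'short dalliances with abrupt changes'
    Sengers describes."""
    rng = random.Random(seed)
    return [rng.choice(BEHAVIORS) for _ in range(ticks)]

print(underintegrated_agent(6))
```

An observer watching this trace has nothing narrative to hold on to: each action is locally plausible but nothing explains the switches.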
In order to facilitate comprehension, then, Sengers proposes "socially situated AI", where the social and cultural environment of the agent are design factors. She outlines three principles that distinguish this approach from others:
- 1. An agent can only be evaluated with respect to its environment, which includes not only the objects with which it interacts, but also the creators and observers of the agent.
- 2. An agent's design should focus, not on the agent itself, but on the dynamics of that agent with respect to its physical and social environments.
- 3. An agent is a representation. Artificial agents are a mirror of their creators' understanding of what it means to be at once mechanical and human, intelligent, alive... what cultural theorists call a subject... This means in a fundamental sense that, in our agents, we are not creating life but representing it, in ways that make sense to us, given our specific cultural backgrounds.
Sengers has therefore developed the Expressivator, a "narrative agent architecture" based on narrative psychology and AI theory, to actuate these principles. Its design principles are explained as:
- 1. Context sensitivity and negotiability: [...] In order to design narratively expressive agents, designers must respect (rather than attempt to override) the context- and audience-dependency of narrative comprehension.
- 2. Intentional state entailment: [...] People do not want to know just the events that occur in the narrative, but also the motivations, thoughts and feelings behind them. Supporting narrative comprehension means communicating clearly not just what the agent does, but its reason for doing it.
- 3. Diachronicity: [...] In a narrative, events do not happen randomly and independently; they are connected to and affect one another. The transitions previously mentioned work to knit behaviors into a coherent sequence.
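Of the three, diachronicity is the most mechanically concrete: behavior changes should be bridged by transitions that also announce the agent's motive (folding in intentional state entailment). A hypothetical sketch of the idea, in the same toy terms as above; the behavior names, motives, and transition phrasing are mine, not the Expressivator's:

```python
def narrate(behaviors, motives):
    """Knit a behavior sequence into a narratable trace: each change
    of behavior is bridged by a transition stating the agent's motive
    for the switch (diachronicity + intentional state entailment,
    in toy form)."""
    trace = []
    for prev, nxt in zip([None] + behaviors[:-1], behaviors):
        if prev is not None and prev != nxt:
            trace.append(f"(stops {prev}: {motives[nxt]})")
        trace.append(nxt)
    return trace

print(narrate(["wander", "wander", "greet"],
              {"greet": "noticed the visitor", "wander": "bored"}))
# ['wander', 'wander', '(stops wander: noticed the visitor)', 'greet']
```

The switch is no longer abrupt: the observer is told why the agent stopped wandering, which is exactly what makes the sequence readable as a story rather than a symptom.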
So, just as you observed that "Bondage-chICK" was inappropriate for the perl-programmer environment, so too, I must say, would a number of other non-perl people be. I guess any of us could be a bot in the wrong environment. Is there a place where bots are real, then?!
Sengers, P. (2002) 'Schizophrenia and Narrative in Artificial Agents', Leonardo, Vol. 35, No. 4, pp. 427-431. [Online] Available at: http://muse.uq.edu.au.mate.lib.unimelb.edu.au/journals/leonardo/v035/35.4sengers.pdf [subscription only]