[…] Already the site claims 21,400 CyberTwins online, including my own (Coach). The site also includes 6 Celebrity CyberTwins: George, Angelina, Paris, Bart, Perfect Boyfriend, and Perfect Girlfriend. Almost 600 CyberTwins are “Commercial,” although I’m not sure how commercial the “Tommy the Silent” chatbot is, representing presumably the South Dakota folk singer. The system is gaining users, although it has a long way to go to catch up to the reported 60,000+ ALICE users. MyCyberTwin was built by John Zakos (The Code: programmer) and Liesl Capper (The Operating System: venture capitalist) of RelevanceNow! According to Zakos’ bio, his previous work brought him into the world of “contextual advertising technology.” Perhaps not coincidentally, chatbots have had a bit of a Second Life in advertisements (see Subservient Chicken) and, well, in Second Life. Capper holds a BA in psychology and a “graduate certificate in venture development,” which suggests she could help ELIZA with her IPO. The team hopes to capitalize on the botmasters in all of us. MyCyberTwin asks you to put your money where its mouth is, as it limits the number of free exchanges each month (500). Here we see signs that the 2.0 technology is already set for that Web 3.0 turn toward cashing in on the Beta apps. My Cybertwin versus ALICE […]
I don’t understand how this can encourage use as a search engine when all I get is abuse for my inane choices and for taking too long! It looks slick, sure, which is great, but I don’t want to be abused while I’m working!
Excellent site, Jeremy. I agree with your assessment of the human computer interface usability aspect.
Like many embodied conversational agents, she reads as having a non-Caucasian race: she is played by Janina Gavankar of The L Word, an actress of Indian and Dutch descent (according to the wonderful Wikipedia), and she is dressed in a sleek black outfit. This exemplifies my argument in my dissertation about the tendency to build sexualized chatbot “others.”
It looks like Microsoft has rolled the basic text-parser-with-video-matching formula into its search engine interface “Ms. Dewey” (via The Star Tribune). The use of fade techniques and blue-screen creates an impressive prototype and a far more polished experience than the rough-cut viral-marketing look in Subservient Chicken.
Still, the design concept is a bit mystifying, even as a novelty. Ms. Dewey snarks, smirks, vamps, and waxes impatient in a constant stream of high-quality video clips; in other words, she does her level best to distract from and derail anything resembling a search process, which occurs in the form of 2-4 partly visible results in a little side window, and a list that can be awkwardly navigated through mouse hovering. If there is a killer application for the command-line-and-video-clip genre, this isn’t it.
There are a few issues that both of you have brought up:
My general observation is that, by escaping written language, the ad effectively creates the illusion that much more sense is being made than is actually the case.
This is a good point. But I’m wondering if it is an illusion of sense being made or a diversion away from the expectation of it making sense? You’re probably familiar with Masahiro Mori’s theory of the ‘uncanny valley’: as an avatar or robot approaches human likeness, there comes a point when it fails to be convincing, and the user is suddenly repelled by the ‘uncanniness’ of the, now, creature. This is often brought up in the debate about ‘believable characters’ and is the reason why many practitioners/researchers, such as Ruth Aylett, choose to use cartoon-like characters. A person in a chicken-suit is pretty abstract and puts the user into a whole other paradigm of expectation.
If we go back to the ‘illusion of sense’ then we’ve got to ask what is different that facilitates this impression. An image? The lack of text, as you suggest? I presume you mean that text output has the propensity to show gross deviations in grammar and meaning. Whereas the chicken process, with a limited number of input options and only one default (that of the chicken moving to the camera and waving its finger in disapproval — well, this is the only one I found), guarantees that every input will be addressed with a response that makes sense. But that is because the sphere of possibility has been limited, the rules of interaction are clear, and the benefits of adhering to the rules are great. This, I believe, can be achieved with text. But many bots are created with the intention of being capable of capturing every input as a human would — which is what your point about ‘chatbots and IF characters take on much more - and their failures are more evident for it’ recognises. Not all botmasters, though.
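The ‘limited sphere of possibility’ design described above can be sketched in a few lines. This is a minimal illustration, not the actual Subservient Chicken implementation; the command names and response strings are hypothetical:

```python
# A small table of recognized inputs, plus a single default response
# (the chicken's finger-wag of disapproval) for everything else.
# Because every possible input maps to SOME response, the bot never
# produces a reply that fails to "make sense."

RESPONSES = {
    "dance": "The character dances.",
    "jump": "The character jumps.",
    "sit": "The character sits down.",
}

DEFAULT = "The character steps up to the camera and wags a finger."

def respond(user_input: str) -> str:
    # Normalize, then look up; unrecognized input falls through
    # to the one default clip.
    return RESPONSES.get(user_input.strip().lower(), DEFAULT)

print(respond("Dance"))          # recognized command
print(respond("recite Hamlet"))  # falls back to the finger-wag
```

The point the comment makes is visible in the code: sense is guaranteed not by understanding the input but by constraining what counts as a response.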
I think on the one hand there is a need, because the natural-language processing technology is not ‘there’ yet, to define the bounds of the interaction: what will work and what won’t. Indeed, this is the basis of any interactive program — constraints need to be set. So, I see part of the success of the sense of the chicken as coming from the limited input/output options and the movement away from written text. But the latter, the movement away from text, is not because of the ‘failures’ you so sweepingly attribute to chatbots and IF characters (poor NLP) but because of the power of the WORD. You know: the effect a typed word has on a person — the direct cable to imagination, almost unmediated communication, the star of this blog. Typed text over the Internet also has close ties with what real people do. Text, then, is like a humanoid, and so suffers the same ‘uncanny’ fate.
But as Mark observes, it is refreshing to have something jump around and give us an aesthetic other than the cursor-following avatars. They have rapidly lost their impact within seconds of use, unfortunately. Although I think it’s wonderfully exciting to have your text read out by an embodied bot with a flick of a free program, they seem to have fallen under the same curse as PowerPoint.
Mark finishes with a good question:
The question is: What can text-only chatbot makers learn from the bot?
I think I’ve addressed this question somewhat, and this post has helped me to clarify the point. Bots that are enjoyable to me are ones that I can GET TO DO things. The bot then becomes a generator of sorts. Indeed, what interactive work doesn’t function under the system of generation? The chicken shows text-based botmasters that users want bots to jump through hoops, or at least have the option to tell them to. The hoop-jumping can be outside of the storyworld and often is: like telling me how I can get a transcript of the conversation, change skins, or be told a joke or a story. Alan is a great example of this proactive interaction. Funnily, I become ‘closer’ to a bot when these things occur and feel the pressure of willing-the-bot-into-life float away. If bots are to be ‘real’ then they would know how to do cool things with interfaces, with computers; they would teach us how to hack the server whilst their botmaster is offline.
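The ‘get to do things’ idea — meta-commands that live outside the storyworld alongside ordinary chat — can be sketched as a simple dispatcher. Everything here (class name, command phrases, the fallback line) is illustrative, not drawn from Alan or any actual bot:

```python
# A bot that mixes storyworld chat with out-of-storyworld commands:
# the user can ask for a joke or a transcript of the conversation,
# and anything else gets routed to ordinary (here, stubbed) small talk.

import random

class ProactiveBot:
    def __init__(self) -> None:
        self.transcript: list[str] = []
        self.jokes = [
            "Why did the bot cross the road? To parse the other side.",
        ]

    def reply(self, text: str) -> str:
        self.transcript.append(f"user: {text}")
        cmd = text.strip().lower()
        if cmd == "tell me a joke":
            out = random.choice(self.jokes)
        elif cmd == "show transcript":
            out = "\n".join(self.transcript)
        else:
            # Stand-in for the bot's normal conversational engine.
            out = "Tell me more."
        self.transcript.append(f"bot: {out}")
        return out

bot = ProactiveBot()
bot.reply("hello there")
print(bot.reply("tell me a joke"))
print(bot.reply("show transcript"))
```

The design choice the comment argues for is the `elif` ladder itself: the meta-commands are first-class citizens of the interaction, not failures of the conversational model.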