This demo, created in January 2011, shows:

1. TOUCH. How events on the surface layer (e.g. touching, dragging) drive an interactive video
experience underneath.
2. EXTERNAL EVENTS. How external inputs (emails, IMs) can drive an interactive video experience.
3. VOICE. How an interactive video application can respond to a user’s voice through the
microphone.
4. INTEGRATION. How the application integrates with the YouTube video player.

The demo application contains around 200 video clips.
It was shot in 3 hours in front of a green screen with 2 cameras.
The shoot comes towards the end of project development; conceptual design, planning and coding
come first.

Click on various parts of my face and I will respond (e.g. move the cursor up my nostril.)
Click on the flag to change language.
Click and drag the logo and drop it in my mouth.
Depending on user movement, the mouse may pass over several buttons or zones within a second,
triggering many events. We limit events to one per second to avoid confusion, though this adds
a little latency to the application.
We have script tags (not shown) that can also detect directional movement and hovering.
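The one-event-per-second limit above can be sketched as a simple throttle. This is a hypothetical Python sketch (class and method names are assumptions); the demo's actual throttle runs inside the Flash player.

```python
import time


class EventThrottle:
    """Pass at most one event through per interval; drop the rest.
    Sketch only -- the real limiter lives in the Flash player."""

    def __init__(self, interval=1.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock      # injectable clock, handy for testing
        self.last = None        # time of the last event we let through

    def allow(self, event):
        """Return True if this event should fire, False if suppressed."""
        now = self.clock()
        if self.last is None or now - self.last >= self.interval:
            self.last = now
            return True
        return False
```

The trade-off described in the text is visible here: a burst of zone crossings collapses to one event, at the cost of up to one second of latency before the next event can fire.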

Deployment Ideas:
- Step-by-step instructions for appliance or car repairs.
- Pay off desired consumer reactions in video, e.g. braking early, using a seat belt, eating a
candy bar.
- Drag ingredients onto pizza.

Email the video a color and I will name a movie with that color in its title.
This should also work by IM-ing imfrixxer on any IM network and sending the 6-digit ID to
attach, then the color.
Shows how to sync a video experience to external events e.g. email, SMS, IM. Every browser
gets a unique ID number that serves as its address for messaging.
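The ID-based addressing could be sketched like this (a hypothetical Python sketch; the class and method names are assumptions, not our actual service):

```python
import random


class MessageRouter:
    """Give each connected browser a unique 6-digit ID that external
    messages (email, SMS, IM) are addressed to. Sketch only."""

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.sessions = {}              # browser ID -> queued events

    def register(self):
        """Issue a fresh 6-digit ID for a newly loaded browser session."""
        while True:
            bid = "".join(str(self.rng.randint(0, 9)) for _ in range(6))
            if bid not in self.sessions:
                self.sessions[bid] = []
                return bid

    def deliver(self, bid, payload):
        """Route an inbound message (e.g. a color) to the right session."""
        if bid in self.sessions:
            self.sessions[bid].append(payload)
            return True
        return False                    # unknown or expired ID
```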
Email the letters sn (just sn, as an abbreviation for ‘something nasty’) to show how the video
copes with obscenity (you can also email the obscenities themselves to the video, but I know
you won’t do that!)
Click the forward button to see a running total as you go (shows "memory".)
Notice the volume of the music rises and falls on a separate track from the video, depending
on events in the script.
Keeps track of how many times each color has been sent in within a session. Between sessions
we can persist this with Flash cookies, or with browser cookies via Javascript calls.
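A minimal sketch of the per-session counts and a cookie round-trip (the `color:count|...` cookie format and function names are assumptions for illustration):

```python
def parse_counts(cookie_value):
    """Turn a cookie string like 'blue:1|red:2' back into a dict."""
    counts = {}
    if cookie_value:
        for pair in cookie_value.split("|"):
            color, n = pair.split(":")
            counts[color] = int(n)
    return counts


def serialize_counts(counts):
    """Flatten the counts dict into a cookie-safe string."""
    return "|".join(f"{c}:{n}" for c, n in sorted(counts.items()))


def record_color(counts, color):
    """Bump the tally for one inbound color within the session."""
    counts[color] = counts.get(color, 0) + 1
    return counts
```

In production the serialized string would be written with a browser or Flash cookie call; here only the bookkeeping is shown.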
Scripts can make http calls that return values to the application (e.g. logging activity, database
calls, support functions [such as determining a color sent from text].)
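For example, the support function that determines a color sent from text might look like this sketch (the color list and function name are illustrative, not our actual service; the script reaches the real one with an http call):

```python
# Illustrative vocabulary -- the real service's color list is not shown here.
KNOWN_COLORS = {"red", "green", "blue", "yellow", "black", "white",
                "purple", "orange", "pink", "brown"}


def color_from_text(text):
    """Return the first known color word in an inbound message, or None."""
    for word in text.lower().replace(",", " ").split():
        if word in KNOWN_COLORS:
            return word
    return None
```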
- Set up and pay off the receipt of an inbound email to capture user email addresses.
- On a screen in a public place have the video play out according to group voting in real-time.
- Unlock content if users send a premium SMS.

"Voice buttons” which analyze short bursts (1-10 seconds) of user voice.
The demo coffee choices are—
Soy, half and half, whole milk, black.
Splenda, equal, regular sugar, no sugar, one, two, five, ten.
I will remember what you made for me last time.
Make sure your microphone is not set too high.
We can also allow consumers to telephone the video experience in the browser (there are cost
implications to this.) Using a separate plug-in we can allow in-browser real-time telephony sync.

Deployment Ideas:
- Highly personal and engaging style.
- Virtual hosts and concierges that can converse as they show product info.
- Get consumers to speak the brand name or a slogan and check accuracy.

Helps test that connectivity is OK and that there are no firewall issues affecting the speech/
voice function.
It appears only once; once used, it vanishes.

The Chromeless YouTube player has been integrated and is loaded by a custom XML tag. The
YouTube player has a little latency (black screen at the beginning of each video.) We do some
behind-the-scenes footwork to manage that.
We do not use Flash cookies. We use html cookies instead.
With the exception of microphone-input (which Adobe is figuring out with Android) this application works on Flash 10.2-enabled mobile devices e.g. Motorola's Xoom.
We have a light pre-loader (25k) that can load politely, wait for user interaction
(mouse-over, click) then load and play the main interactive video experience. The main
experience usually starts within 1 second of loading (depending on net connectivity speed.)
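The polite-load behavior can be sketched as a gate that fires the main load exactly once, on first interaction (hypothetical names; the real pre-loader is the 25k SWF described above):

```python
class PoliteLoader:
    """Light stub loads immediately; the main interactive video
    experience loads only on the first mouse-over or click. Sketch."""

    def __init__(self, load_main):
        self.load_main = load_main   # callback that starts the big load
        self.started = False

    def on_interaction(self, kind):
        """Event hook wired to the page; ignores repeat interactions."""
        if not self.started and kind in ("mouseover", "click"):
            self.started = True
            self.load_main()
```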
The player remains separate from the script. Anyone can write a script in standard XML format
(our tags are described at ), then run it using our SWF. The XML
can be encoded for privacy. The XML can be embedded in the SWF for faster runtime.
For real-time connectivity we use a proprietary method of standard http calls on port 80
combined with long-polling. This should work fine with most firewalls.
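Long-polling in this style can be sketched as a loop that re-issues a blocking http call; here `fetch` is an injected stand-in for the real request, so the sketch stays self-contained:

```python
def long_poll(fetch, handle, max_rounds=10):
    """Repeatedly issue an http request that the server holds open
    until an event arrives (long-polling over port 80). `fetch`
    returns one event, or None when the session is over. Sketch."""
    for _ in range(max_rounds):
        event = fetch()          # blocks until the server responds
        if event is None:        # connection closed / session over
            break
        handle(event)            # hand the event to the video script
```

Because each round is an ordinary http request on port 80, it passes through most firewalls, which is the point made in the text.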
For voice buttons we land RTMP streams on a custom server we built. Highly secured networks
may not allow this. We can detect when we're blocked.
Our code is compiled to Flash 10. 98% of PCs have that.
The player, videos and images can all be served from any server. The only component that we
need to host is the external event sync technology. We have our own approved shortcode (443
443) on all major carriers. We have the address imfrixxer on the major IM platforms.
Any data can be logged using an http call. We will also build tags to support API calls for
partners (e.g. DoubleClick.)
There are 3 video layers in the application: a surface layer that supports Google VP6
transparency, the regular player, and the YouTube player. Each layer has two players to smooth
video transitions.
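The two-players-per-layer idea can be sketched as an active/standby pair that preloads the next clip and then swaps, hiding load latency (stub players recorded as actions; the real players are Flash objects):

```python
class DualLayer:
    """One video layer backed by two players: while the active player
    plays, the standby preloads the next clip; a swap makes the
    preloaded clip current. Sketch with a simple action log."""

    def __init__(self, log):
        self.log = log
        self.active, self.standby = "A", "B"

    def play(self, clip, next_clip=None):
        self.log.append((self.active, "play", clip))
        if next_clip is not None:
            self.log.append((self.standby, "preload", next_clip))

    def swap(self):
        """Promote the preloaded standby player to active."""
        self.active, self.standby = self.standby, self.active
```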

We are using Lumenvox speech software on our servers to do voice recognition/"voice buttons."

Check the event log for this video at

Posted April 26-11