Corporal Interactivo

Untitled.png

For the past couple of weeks, I have been participating as a resident artist in Interactive Media at the Laboratorio Interactividad Corporal (LIC) in Buenos Aires, Argentina, in collaboration with the UCLA research lab REMAP. We have been working on bootstrapping technology for an experimental theater piece as well as conducting a series of motion tracking experiments for use in live performance.

IMG_1328.jpg

The piece is a solo movement theater work for performer, camera, and machine learning. Its story follows a woman who confronts the loss of her ability to form memories along with a new ability to select, capture, and recall events around her.

As part of this research effort, I have been tasked with creating the “associations” that the main character uses to trigger memories. The associations can be thought of as a collage of memories captured by a live feed camera that the performer wears onstage. As part of our machine learning experiments, I have been working with another programmer to select, create, and implement different looks for style transfer.

This is an example of rapid style transfer using the Deep Art web uploader:

It's a really quick and fun way to test style transfer.

Part of my effort on the project has been a mix of selecting the “styles” for transfer and implementing a pipeline for image segmentation, masking, styling, and then instancing the images. Below is a screenshot of one of our initial tests using the style transfer without the image segmentation.

Diagram of the pipeline:

Screenshot 2018-03-12 03.40.18.png

Original Image

The original image was captured from a live camera feed that the performer wore on their shoulder. A frame was captured once the performer was still and the camera view had settled, as detected with optical flow (a rough sketch of the idea follows below).

0_original.jpg

Optical Flow Triggers made by Matthew Ragan
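
To give a sense of the idea, here is a rough sketch (not Matthew's actual implementation): successive frames are compared with dense optical flow, and a low average flow magnitude is treated as "the view is still." The camera index, threshold, and output filename below are placeholders.

```python
# Minimal sketch of a stillness-triggered capture using dense optical flow.
# Threshold, camera index, and filename are assumptions for illustration.
import cv2
import numpy as np

STILLNESS_THRESHOLD = 0.5  # assumed value; tune for the actual camera/lens

cap = cv2.VideoCapture(0)  # shoulder-mounted camera feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Farneback dense optical flow between the previous and current frame
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mean_motion = float(np.mean(magnitude))

    if mean_motion < STILLNESS_THRESHOLD:
        cv2.imwrite("0_original.jpg", frame)  # capture the "association" frame
        break

    prev_gray = gray

cap.release()
```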

Image Segmentation

This image was then sent to a server to be segmented based on what objects were detected. Below is a chart with the key-value pairs (object ID and mask color). See Peter Gusev's Semantic Image Segmentation Web Service for documentation and readme.

seg_table.png
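
Conceptually, the handoff to the segmentation service looks something like the sketch below; the endpoint URL and response format here are assumptions for illustration only, and the real interface is documented in the readme linked above.

```python
# Sketch of posting the captured frame to a segmentation web service.
# The URL and response handling are placeholders, not the service's real API.
import requests

SEGMENTATION_URL = "http://localhost:5000/segment"  # hypothetical endpoint

with open("0_original.jpg", "rb") as f:
    response = requests.post(SEGMENTATION_URL, files={"image": f})

# Assume the service returns a color-coded label image, where each detected
# object class is painted with the color listed in the ID/color table above.
with open("1_segmented.png", "wb") as out:
    out.write(response.content)
```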

Chroma key

These individual colors were then chroma keyed to create individual, unique masks.
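A minimal sketch of that keying, using placeholder colors rather than the actual values from the table above:

```python
# Key each class color from the segmentation image into its own binary mask.
import cv2
import numpy as np

segmented = cv2.imread("1_segmented.png")

# Hypothetical class colors (BGR), for illustration only
CLASS_COLORS = {
    "person": (192, 128, 64),
    "chair":  (0, 128, 192),
}

masks = {}
for label, bgr in CLASS_COLORS.items():
    color = np.array(bgr, dtype=np.uint8)
    # inRange keeps only pixels that exactly match this class color
    masks[label] = cv2.inRange(segmented, color, color)
    cv2.imwrite(f"2_mask_{label}.png", masks[label])
```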

Composite

The masks were then composited back with the original image.
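A minimal sketch of that compositing step, assuming a mask from the previous step:

```python
# Cut each object out of the original capture, leaving the rest of the frame black.
import cv2

original = cv2.imread("0_original.jpg")
mask = cv2.imread("2_mask_person.png", cv2.IMREAD_GRAYSCALE)

composite = cv2.bitwise_and(original, original, mask=mask)
cv2.imwrite("3_composite_person.jpg", composite)
```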

Style Transfer

These masked composites were sent to a server to be styled using pre-trained style models.

See Peter Gusev's TensorFlow CNN for fast style transfer!
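
Conceptually the handoff is similar to the segmentation request; the endpoint and style name below are placeholders rather than the real interface (see the repo above for that).

```python
# Sketch of sending a masked composite to a style-transfer server.
# The URL and "style" parameter are assumptions for illustration.
import requests

STYLE_URL = "http://localhost:8000/stylize"  # hypothetical endpoint

with open("3_composite_person.jpg", "rb") as f:
    response = requests.post(
        STYLE_URL,
        files={"image": f},
        data={"style": "la_muse"},  # hypothetical pre-trained style name
    )

with open("4_styled_person.jpg", "wb") as out:
    out.write(response.content)
```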

Final Mask

The styled image was then re-masked for the collage instancing and projection.

5_style_mask.jpg
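
A minimal sketch of that re-masking, assuming the mask and styled frame from the earlier steps:

```python
# Cut the styled frame back down to the object's silhouette
# so only the styled object ends up in the collage.
import cv2

styled = cv2.imread("4_styled_person.jpg")
mask = cv2.imread("2_mask_person.png", cv2.IMREAD_GRAYSCALE)

# Style transfer can change the image size, so resize the mask to match
mask = cv2.resize(mask, (styled.shape[1], styled.shape[0]))
style_mask = cv2.bitwise_and(styled, styled, mask=mask)
cv2.imwrite("5_style_mask.jpg", style_mask)
```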

Collage & Projection

The instanced and collaged styles were then projected back into the space whenever an association was "triggered." These triggers occurred whenever the performer was still for at least 2 seconds, as detected with optical flow. Each trigger would start the style transfer pipeline described above and project the result back into the space onto the cubes (a rough sketch of the trigger logic is below).

IMG_1422.jpg
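
A rough sketch of that trigger logic, with a placeholder run_pipeline() hook standing in for the capture-to-projection chain and an assumed motion threshold:

```python
# Fire the association pipeline once motion has stayed below a threshold
# for 2 seconds, then wait for movement before re-arming.
import time

STILLNESS_THRESHOLD = 0.5  # assumed value
HOLD_SECONDS = 2.0

still_since = None
armed = True

def run_pipeline():
    # placeholder for capture -> segment -> mask -> style -> collage -> project
    print("association triggered")

def on_motion_sample(mean_motion):
    """Call this with each new mean optical-flow magnitude."""
    global still_since, armed
    if mean_motion < STILLNESS_THRESHOLD:
        if still_since is None:
            still_since = time.time()
        elif armed and time.time() - still_since >= HOLD_SECONDS:
            run_pipeline()
            armed = False  # don't re-trigger until the performer moves again
    else:
        still_since = None
        armed = True
```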

In addition to this endeavor, we made some progress with YOLO real-time object detection and OpenFace, and on OpenPTrack's OpenMoves integration. More photos of that soon!

TouchDesigner interface for OpenPTrack OpenMoves clustering! Thanks to Matthew Ragan & Peter Gusev

I also made some fun progress with Hue Lights controlled via TouchDesigner, but more on that later!