Emotional Picture Frame

Projection Mapping Programming

Creating a picture frame that changes photos based on the viewer's emotion.

The humble picture frame is something that we all have in our homes. But… it's old. How old? Let's see! Wow! It's been around for 1,700 years. Let's find out if we can use some modern technologies, such as projection mapping and artificial intelligence, to add our own creative modern twist.

It's Old


Picture frames hold, well, a picture, and only one picture at a time. What if we could make a picture frame that can show multiple pictures? Wait… those have existed since the '90s. Well then, let's be a little more adventurous, shall we? What if we could make a picture frame that changes pictures based on the emotion of the person viewing it? If someone is having a rough day and feeling a little down, we can show them some cute puppies to cheer them up.

The first challenge we need to overcome is figuring out the emotion of the person viewing the picture frame. The simplest way to do this would be to find yourself a friend… or just duplicate yourself if you don't have any. Then, whenever your emotion changes, your friend can pick a photo to show you!

Watching Yourself


While this works, it is incredibly inefficient and has some drawbacks… like your friend falling asleep on the job or requesting payment for working for you. Not to mention, it would be kind of creepy to have someone looking at you all the time.

But don't fear! We have technology! We could use an artificial intelligence algorithm to determine the viewer's emotion for us. All we have to do is gather thousands of dataset images of varying emotions, buy a super powerful computer, write way too many lines of code, let the computer suck up energy for days and days, and bam! We would have a very mediocre emotion detection system. But I've already tried that method in the past, and it was not fun. So, in the spirit of software engineering, let's just use someone else's solution, because we are too lazy to do it ourselves.

AWS Rekognition


Thankfully, the helpful people over at Amazon Web Services have created a service called Rekognition. It's spelled like recognition, but with a k, since that sounds more trendy, I guess. As we can see from the demo on their website, all we need to do is provide an image and it will return data about the emotions of the person in the image.
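Here's a trimmed-down sketch of the part of the DetectFaces response we care about. The confidence values below are made up for illustration, and the real response also includes plenty of other attributes, such as pose, facial landmarks, and an estimated age range.

{
    "FaceDetails": [
        {
            "Emotions": [
                { "Type": "HAPPY", "Confidence": 97.3 },
                { "Type": "CALM", "Confidence": 1.8 },
                { "Type": "SURPRISED", "Confidence": 0.4 }
            ]
        }
    ]
}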


We don't want to have to manually take and upload an image, so let's write some code to automate that process. There we go: a simple script that captures a frame from the webcam feed and saves it to our hard drive. Next, we just need to use the Amazon Web Services API to send that photo to the Rekognition service; in return we receive the result, which, along with lots of other data, contains the emotions. We simply need to determine which emotion Rekognition is most confident about, and then we can trigger the correct picture to be shown. But since we want this to run more than once, to catch our every emotion change, we need to throw it into a loop. But that would result in this happening: CHA CHING. We would rack up a pretty hefty bill quickly, since we are charged for each request. Thankfully, our emotions don't change that often, or so we hope, so we can limit our loop to run only every ten seconds. Here's the full script:

//IMPORTS
const NodeWebcam = require("node-webcam");
const AWS = require('aws-sdk');
const osc = require("osc");

require('dotenv').config();
const fs = require('fs');

//CREATE OSC UDP PORT AND OPEN IT
var udpPort = new osc.UDPPort({
    localAddress: "0.0.0.0",
    localPort: 57121,
    metadata: true
});
udpPort.open();

//INIT AWS SDK
var rekognition = new AWS.Rekognition({
    accessKeyId: process.env.accessKeyId,
    secretAccessKey: process.env.secretAccessKey,
    region: 'us-east-1'
});

//WEBCAM OPTIONS
const opts = {
    width: 1280,
    height: 720,
    quality: 100
};

//MAIN LOOP
setInterval(() => {

    //TAKE PICTURE
    NodeWebcam.capture("img", opts, function (err, data) {
        if (err) {
            console.log(err);
            return;
        }
        console.log('Image Captured');

        //READ IMAGE THAT WAS JUST CAPTURED
        fs.readFile('img.jpg', (err, img) => {

            //UPLOAD IMAGE TO AWS
            rekognition.detectFaces({
                Image: {
                    Bytes: img
                },
                Attributes: [
                    "ALL"
                ]
            }, (err, result) => {
                if (err) {
                    console.log(err);
                    return;
                }

                //BAIL OUT IF NO FACE WAS FOUND IN THE FRAME
                if (!result.FaceDetails || result.FaceDetails.length === 0) {
                    console.log('No face detected');
                    return;
                }

                //ITERATE OVER EMOTIONS TO FIND THE ONE WITH THE HIGHEST CONFIDENCE
                const currentEmotion = { emotion: '', confidence: 0 };
                result.FaceDetails[0].Emotions.forEach(emotion => {
                    if (emotion.Confidence > currentEmotion.confidence) {
                        currentEmotion.emotion = emotion.Type;
                        currentEmotion.confidence = emotion.Confidence;
                    }
                });

                //CALL HELPER FUNCTION
                console.log(currentEmotion);
                playPicture(currentEmotion.emotion);

            })
        })
    });

//10 SECOND INTERVAL
}, 10000);


//OSC HELPER FUNCTION
function playPicture(emotion) {

    // TODO - Copy workspace id from QLab
    const workspaceId = '01C05A38-2EC9-49E9-A8F7-09864B9BCB13';

    //STOP ALL RUNNING CUES
    udpPort.send({
        address: `/workspace/${workspaceId}/panic`,
        args: []
    }, "localhost", 53000);

    //SELECT CORRECT EMOTION
    setTimeout(() => {
        udpPort.send({
            address: `/workspace/${workspaceId}/select/${emotion}`,
            args: []
        }, "localhost", 53000);
    }, 50);

    //RUN SELECTED EMOTION
    setTimeout(() => {
        udpPort.send({
            address: `/workspace/${workspaceId}/go`,
            args: []
        }, "localhost", 53000);
    }, 100);
}
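One assumption baked into playPicture: for /select/${emotion} to land on the right cue, the cue numbers in QLab (which can be arbitrary strings, not just digits) need to match the emotion types Rekognition returns, uppercase strings such as HAPPY, SAD, CALM, or ANGRY.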

Great! We have the AI portion done; now comes the fun part of making things look good. First, let's make our own picture frame. I decided to go with possibly the most minimalist design ever imagined: a square. I went ahead and created this design in Tinkercad and printed it out on my 3D printer; however, it turned out to be a little small. Since my build volume was limited, I split the frame up into smaller parts and scaled them up so that the final dimensions of the frame would be 11" by 14". But when I went to print it out, I forgot to change the filament in my printer and managed to print a white frame… which does not particularly look like a picture frame at all. I then changed my filament back to black, reprinted it, and it looked much better.

Black Picture Frame


The last step is to get photos to show up on our newly made picture frame. To do this, I am going to use a technique called projection mapping, using QLab. If you need a refresher on QLab, check out my previous post on it. After I got my projector set up and the output surface mapped to the picture frame, it was time to get the images loaded into QLab. Lastly, we need to connect our facial recognition code to toggle through the photos we have in QLab. We will use a protocol called OSC, which stands for Open Sound Control, a widely used communication standard in the world of live entertainment. Now, we just need to call some OSC commands that QLab understands and BAM! We are in business.
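Before wiring everything together, it's worth sanity-checking that QLab is actually listening. Here's a minimal standalone sketch, assuming QLab is running on the same machine at its default incoming OSC port of 53000, and that you have a cue numbered HAPPY (a hypothetical cue name for this test); addresses without a workspace prefix should target the frontmost workspace.

//MINIMAL OSC SANITY CHECK FOR QLAB
const osc = require("osc");

//CREATE A UDP PORT FOR SENDING OSC MESSAGES
var udpPort = new osc.UDPPort({
    localAddress: "0.0.0.0",
    localPort: 57121,
    metadata: true
});

//ONCE THE PORT IS READY, SELECT THE CUE NUMBERED "HAPPY" AND FIRE IT
udpPort.on("ready", () => {
    udpPort.send({ address: "/select/HAPPY", args: [] }, "localhost", 53000);
    setTimeout(() => {
        udpPort.send({ address: "/go", args: [] }, "localhost", 53000);
    }, 50);
});

udpPort.open();

If the cue plays, the OSC plumbing works, and the full script should be able to drive the frame.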

Final Product