Extending getUserMedia With Canvas
By Andi Smith - Sunday, July 1 2012
Following on from my previous post which introduced us to getUserMedia, I wanted to share two ways you can extend getUserMedia’s video capture using my good friend, the canvas element.
What we’ll cover in this post:
- Photo Booth - Using canvas to take screenshots.
- Effects Studio - Using canvas to apply different effects to the webcam feed.
Now that we know what we're doing and have given our demos funky names, let's begin!
Extending with Canvas
If you've not come across the canvas element before, it is used to render a bitmap image to the user's screen via JavaScript. Unlike SVG, its drawn contents have no DOM representation, and it keeps no history of previous renders - once something is painted, it is just pixels.
Our first step is to add a canvas element to our page.
<canvas id="photo"></canvas>
Photo Booth
To take screenshots, we will need to add a button to our page and attach an event. The following function creates a button programmatically and attaches an event to call the function takePhoto() (which we've yet to write) when the user clicks the button.
function setupPhotoBooth() {
var photoButton = document.createElement('button');
photoButton.textContent = 'Smile!'; // textContent works cross-browser (innerText does not work in Firefox)
photoButton.addEventListener('click', takePhoto, true);
document.body.appendChild(photoButton);
}
Taking our Photo
Taking the photo isn't too difficult. First, we need to set our photo canvas element to have the same width and height as our video, otherwise our picture will be squished into canvas' default 300 × 150 dimensions. If you are having difficulty getting the video's width and height, you may need to explicitly size the video element, or read its videoWidth and videoHeight properties once the feed's metadata has loaded.
Next, we need to get the 2D context of the photo canvas. If you’re unfamiliar with how canvas works, the context is like an artist - it will get and set our image data.
Finally, we call drawImage with the parameters of our video element and the area we wish to paint in.
function takePhoto() {
var video = document.querySelector('video');
var photo = document.getElementById('photo');
var context = photo.getContext('2d');
// set our canvas to the same size as our video
// (fall back to the intrinsic videoWidth/videoHeight if the
// video element has not been explicitly sized)
photo.width = video.width || video.videoWidth;
photo.height = video.height || video.videoHeight;
context.drawImage(video, 0, 0, photo.width, photo.height);
}
Running the page in a browser that supports getUserMedia, you should now be able to take a photo. Unlike me, you may want to sort out your hair before hitting the “Smile!” button.
The view on the left is our feed, the one on the right is our picture.
Saving our Image
If you would like to extend your photo booth to save the image being taken, you can add a save button to your page and hook up an event to save the photo. To trigger a download, we need to change the MIME type to image/octet-stream, as below.
function savePhoto() {
var photo = document.getElementById('photo');
var data = photo.toDataURL("image/png");
data = data.replace("image/png","image/octet-stream");
document.location.href = data;
}
This will save the image as a PNG. If you wish to save in other image formats, take a look at Jacob Seidelin's Canvas2Image library.
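The MIME swap itself is nothing more than a string replacement on the data URL. A toy example with a dummy base64 payload (not real image data):

```javascript
// Dummy data URL; the payload here is not a real PNG.
var dataUrl = "data:image/png;base64,AAAA";

// Swapping the MIME type stops the browser rendering the image
// inline and makes it offer a download instead.
var download = dataUrl.replace("image/png", "image/octet-stream");
// download === "data:image/octet-stream;base64,AAAA"
```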
If you'd like to look at and play with the photo booth web cam demo, I've uploaded a demo to GitHub. Don't forget, you'll need a browser that supports getUserMedia to use your webcam.
You can view and fork the code on GitHub.
Effects Studio
One of the first things I remember using on the Mac was Photo Booth, which lets you apply a variety of different effects to your image - such as changing the colours or saturation. Let's have some fun with our webcam feed.
We could use CSS filters to add effects to our images. This would be a perfectly valid way of adding effects, except that if we saved the image, our effect would not be applied. So let's take the more complicated route.
As we want to manipulate our live feed and we cannot change the video feed directly, the first thing we need to do is change our video feed to show in a canvas element. We’ve already covered how to grab a video image and draw it in the canvas, but this time we are going to need to continually refresh the canvas element to keep a live feed.
Previously in JavaScript, timed events could be achieved with setInterval or setTimeout; but more recently requestAnimationFrame has been introduced. requestAnimationFrame allows the browser to synchronise our animation with the browser paint cycle, and won't run the animation while the tab is not active. Sadly, it's another case of vendor prefixing for requestAnimationFrame, so we need another shim (Paul Irish goes into more detail about this shim on his blog).
window.requestAnimationFrame = window.requestAnimationFrame ||
window.webkitRequestAnimationFrame ||
window.mozRequestAnimationFrame ||
window.oRequestAnimationFrame ||
window.msRequestAnimationFrame ||
function (callback) {
window.setTimeout(callback, 1000 / 60);
};
requestAnimationFrame works in the same way as setTimeout - it is only called once, so we must call it again each time we want a new frame. We fall back to setTimeout for browsers that don't support it (which shouldn't be any of the browsers that support getUserMedia, but it's best to be safe).
Now, we need to add the additional canvas element to our page:
<canvas id="feed"></canvas>
We need to create a new function to stream the feed. Using the same method we used to take a photo, we draw an image to the feed context. This time round we also make the function call itself via requestAnimationFrame.
function streamFeed() {
var video = document.querySelector('video');
var feed = document.getElementById('feed');
var context = feed.getContext('2d');
requestAnimationFrame(streamFeed);
context.drawImage(video, 0, 0, feed.width, feed.height);
}
The final thing to do to get our stream working is to add a call to streamFeed() within our getUserMedia success function (called onSuccess in our previous post).
Although our stream will now work, we don’t want both the video and canvas feed to appear at the same time, so we need to hide our video.
video.style.display = 'none';
Separating the Raw Feed from Display
We've finished doing the setup work, now it's time to add some effects. To add effects, we are going to want to add another canvas element with the id display. Our feed will provide the image data, and our display will show the finished result:
<canvas id="display"></canvas>
Let's also hide our feed, and give our feed and display canvas elements a width and height to avoid the default canvas dimensions when we are manipulating data.
feed.style.display = 'none';
feed.width = 640;
feed.height = 480;
display.width = 640;
display.height = 480;
Effects Groundwork
We’ll store our applied effects in an options array, but we need a way to allow the user to trigger our effects. The following function creates an effects button for each of our effects. The effects we are going to add are: “invert”, “red”, “blue” and “green”.
function setupEffectsButtons() {
var effects = ["invert","red","blue","green"];
var effectButton;
for (var i=0, l=effects.length; i < l; i++) {
effectButton = document.createElement('button');
effectButton.id = effects[i];
effectButton.textContent = effects[i]; // textContent for cross-browser support
effectButton.addEventListener('click', toggleEffect, true);
document.body.appendChild(effectButton);
}
}
These buttons trigger the toggleEffect function, which adds or removes an item from our options array when the user clicks the corresponding button. The actual effects will be applied during our animation loop, which we will cover afterwards.
function toggleEffect(e) {
var effect = this.id;
if (options.indexOf(effect) > -1) {
options.splice(options.indexOf(effect), 1);
} else {
options.push(effect);
}
}
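The toggle logic is easy to sanity-check in isolation. A minimal sketch, assuming options is a plain array of effect names (the standalone toggle helper and its signature are ours, not part of the demo):

```javascript
// Hypothetical standalone version of the toggle logic above.
function toggle(options, effect) {
  var index = options.indexOf(effect);
  if (index > -1) {
    options.splice(index, 1); // already active: remove it
  } else {
    options.push(effect);     // not active yet: add it
  }
  return options;
}
```

Clicking the same button twice therefore returns the options array to its previous state.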
Now we have a way for the user to add and remove effects, it’s time to manipulate our feed image data!
Changing the Stream Feed
We need to alter our stream feed function to include a call to the function that will add our effects (addEffects()). First we need to draw our video to the feed canvas so we can extract the image data. Next, we call getImageData to extract the image data; this is what we will pass through to addEffects(). Finally, we will output the returned image data on to our display context.
function streamFeed() {
var video = document.querySelector('video');
var feed = document.getElementById('feed');
var display = document.getElementById('display');
var feedContext = feed.getContext('2d');
var displayContext = display.getContext('2d');
var imageData;
requestAnimationFrame(streamFeed);
feedContext.drawImage(video, 0, 0, feed.width, feed.height);
imageData = feedContext.getImageData(0, 0, feed.width, feed.height);
imageData = addEffects(imageData);
displayContext.putImageData(imageData, 0, 0);
}
Adding Effects
Our addEffects() function loops through our options and applies each of them to our image data. First, let's write the skeleton.
function addEffects(imageData) {
var data = imageData.data;
var type;
for (var i = 0; i < options.length; i++) {
type = options[i];
for (var j = 0; j < data.length; j += 4) {
switch (type) {
case "invert":
// code to go here
break;
case "red":
// code to go here
break;
case "blue":
// code to go here
break;
case "green":
// code to go here
break;
default:
break;
}
}
}
return imageData;
}
For each option, we will need to loop through the image data. Each pixel in our image data is represented by four numeric values, each from 0 to 255 - the first is red, the second green, the third blue and the fourth is alpha - known as RGBA. Therefore our loop increments by 4 each time to reach the next pixel.
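As a quick illustration of that four-value stride, here is a toy loop over two pixels of flat RGBA data (dummy values, not taken from a real feed):

```javascript
// Two pixels: opaque red, then opaque green.
var data = [255, 0, 0, 255, 0, 255, 0, 255];

for (var j = 0; j < data.length; j += 4) {
  // data[j] = red, data[j + 1] = green, data[j + 2] = blue, data[j + 3] = alpha
  data[j + 3] = 128; // halve the opacity of every pixel
}
// data is now [255, 0, 0, 128, 0, 255, 0, 128]
```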
After checking our page still works, it's time to add our effects. The effects for red, blue and green are created in a similar fashion. For each effect, we increase the corresponding RGB value and decrease the other colour values. Using Math.min, we set the channel to either twice the value from the feed data or 255 - whichever is lower - so it can never overflow. The following code is for red. The effects for green and blue use the same formula, but on the other related values in the array:
case "red":
data[j] = Math.min(255, data[j] * 2);
data[j + 1] = data[j + 1] / 2;
data[j + 2] = data[j + 2] / 2;
break;
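For reference, the green and blue cases apply that same formula to the other channels. Sketched here as hypothetical standalone helpers so they can be read (and tested) in isolation; in the demo they would simply be further case blocks, with data being imageData.data and j the pixel offset:

```javascript
// Boost green, halve red and blue.
function applyGreen(data, j) {
  data[j]     = data[j] / 2;
  data[j + 1] = Math.min(255, data[j + 1] * 2);
  data[j + 2] = data[j + 2] / 2;
}

// Boost blue, halve red and green.
function applyBlue(data, j) {
  data[j]     = data[j] / 2;
  data[j + 1] = data[j + 1] / 2;
  data[j + 2] = Math.min(255, data[j + 2] * 2);
}
```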
Finally, let’s add our code for invert. Invert is a straightforward effect to create - we just invert the pixel value by subtracting the value from 255, like so:
case "invert":
data[j] = 255 - data[j];
data[j + 1] = 255 - data[j + 1];
data[j + 2] = 255 - data[j + 2];
break;
And… Done!
Now we've created our effects, we can apply them to our webcam feed. Multiple effects can be applied by pressing multiple buttons. If we wish to apply our effects to our photo snaps, we need to change our photo booth takePhoto() code to look at the display canvas rather than the feed canvas.
If you'd like to look at and play with the effects web cam demo, I've uploaded a demo to GitHub. Don't forget, you'll need a browser that supports getUserMedia to use your webcam.
You can view and fork the code on GitHub.
If you create code for other effects, feel free to share it below!
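To get you started, here is one possible extra effect that is not in the demo: a grayscale pass that averages the three colour channels. As with the colour effects above, this is a hypothetical standalone helper; in the demo it would be another case block in addEffects():

```javascript
// Replace red, green and blue with their average, leaving alpha alone.
function applyGrayscale(data, j) {
  var avg = (data[j] + data[j + 1] + data[j + 2]) / 3;
  data[j] = data[j + 1] = data[j + 2] = avg;
}
```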
Andi Smith is a web developer from London, United Kingdom. He likes to build highly performant websites which innovate with genuine value.