Animating in VR – Daydream Labs Style

Last night I got the chance to briefly try out the Daydream Labs animation tool. As I started the demo, I mentioned that I was an animator, which I was told would both help and hinder me, because their prototype doesn't work like any animation tool I might have tried before.

This was definitely true. The prototype I tried uses the HTC Vive: my right hand was the 'animation' tool, my left was the timeline. There was a box of items to play with – a cylinder, a dog, a plane (as in a flying plane, not a geometric plane), and an Android droid. To start animating, you simply grab whichever item you'd like from the box and put it in its start position. Once you're happy with where you want to start, you release it and pick it up again – only now the timeline is running, and you're animating as you go. Whatever path you move the object along is the path that's animated, shakes and wobbles and all. What's interesting about this to me as an animator is that a lot of unconscious movement is built in – it works much like a motion capture suit, in that you're really recording the movement of your controller in space.

If you don't like a particular moment of your animation, you can go back to that point in the timeline and re-record as much as you want. However, the animation you previously recorded after that point will now start wherever you stop – so if you end up with your toy in the air, but your previous animation had it jumping up and down on a table, it will now jump up and down in the air. The translations from the prior animation you didn't record over start from whatever your new zero point is – something that may be unintentional from the perspective of your user.
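A minimal sketch of that splice behavior, assuming the recording is just a list of timed positions (the `splice` function and its representation are my own guess at the mechanic, not the prototype's actual code): the untouched tail of the old recording is offset so it continues from wherever the new recording ends.

```python
def splice(track, new_clip, t_start, t_end):
    """track/new_clip: lists of (time, position) samples, positions as 3-tuples.
    Replaces the span [t_start, t_end] with new_clip, then shifts the old
    tail so it carries on from the new end point."""
    head = [(t, p) for t, p in track if t < t_start]
    tail = [(t, p) for t, p in track if t > t_end]
    if tail and new_clip:
        # Offset the surviving animation to start from the new zero point.
        old_start = tail[0][1]
        new_end = new_clip[-1][1]
        delta = tuple(n - o for n, o in zip(new_end, old_start))
        tail = [(t, tuple(c + d for c, d in zip(p, delta))) for t, p in tail]
    return head + new_clip + tail
```

So a toy that was bouncing on the table keeps bouncing, just relative to wherever you left it – exactly the "jumping up and down in the air" surprise described above.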

The tough thing to grasp initially as an animator was that I wasn't setting keyframes and tweening between them – I was the tweening. Even once I grasped this, there were challenges in making the animation I wanted. Since you have to hold down the trigger to pick up and animate your object, I sometimes felt limited: I couldn't turn my object around smoothly without occasionally having to stop and re-grab it. If I wanted my plane to do a 360, my wrist isn't going to manage that in one smooth motion, no matter how many times I try to re-record that section.

I think for this to be a really useful toy or tool (it could go in either direction and be useful or fun for someone) it might make sense to let users record their animation once, and then go back and edit only specific attributes of it. For example, I could record my plane's path without worrying too much about its orientation, just tracing out the loop I want in the air, and then go back, keep that information, and overwrite only the rotation – using two hands to rotate smoothly while the animation plays back the translation. Likewise, you could let people edit scale on the fly, maybe even using gizmos like those in Maya or other 3D animation tools to make it easier. Sure, for people who just want to move a bunch of objects around in space, the existing toolset is fine, but I think even kids playing around would eventually want a few more capabilities.
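Per-attribute re-recording could be as simple as storing one sample list per channel; here's a toy sketch of that idea (the `Take` class and channel names are hypothetical, not anything from the prototype):

```python
class Take:
    """A recorded take with one sample list per animatable attribute."""

    def __init__(self, length):
        self.channels = {"translation": [None] * length,
                         "rotation":    [None] * length,
                         "scale":       [None] * length}

    def record(self, name, values, start=0):
        """Record (or re-record) a single channel from `start` on.
        The other channels keep whatever earlier passes laid down,
        so you can trace the path first and fix rotation later."""
        for i, v in enumerate(values):
            self.channels[name][start + i] = v
```

On the first pass you'd record translation while flying the plane; on a second pass only `"rotation"` gets overwritten while the translation plays back untouched.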

The other thing that would be interesting to see is how to use a tool like this to animate rigged objects, like a humanoid character. Again, rather than trying to make this a mocap-lite system, it might work to let you set your base motion path and then go back and move arms and legs the way you want them to move, one at a time. So rather than a 'control' for a specific body part, you just select the entire right arm and use your two controllers intuitively – perhaps one is the 'elbow' and one is the 'wrist', so you're moving the character around like a puppet or a doll.

Ultimately, I think this kind of thing has a lot of potential to become the new way for animators to work. If you expand the toolset some – allowing things like slowing down the speed at which the timeline records, individual control over specific attributes, and individual control over separate aspects of one model in a way similar to animation layers, where each layer is additive – I think this would be much faster than traditional keyframe-based animation. You can't overstate how intuitive it is to work with something in 'reality', and how much faster that would be, especially if you can use both hands as input for things like scale and rotation. I can also see this being a very kid-friendly creation toy, letting kids tell stories with a giant toybox where you're not limited to how many toys you can move with your hands, and where you could potentially add things like particle effects – imagine blending this with Tilt Brush-style effect brushes, for example.
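The additive-layers idea can be sketched in a few lines, assuming each layer just stores per-frame offsets on top of the base recording (a simplification of how animation layers work in tools like Maya, not anything the prototype implements):

```python
def evaluate(base, layers, frame):
    """Final position at `frame` = base pose plus every layer's offset.
    base: list of (x, y, z) per frame; layers: lists of per-frame offsets.
    Because each layer is additive, a tweak layer can be re-recorded or
    deleted without ever destroying the base performance underneath."""
    x, y, z = base[frame]
    for layer in layers:
        dx, dy, dz = layer[frame]
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)
```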

I definitely enjoyed trying this out, and I hope to see the project move forward – I'm not sure how many of the Daydream Labs prototypes will turn into real applications, but I hope this is one of them. Regardless, I didn't have quite as much fun as the guy who tried the demo after me: by the end of his time, I think he had animated somewhere in the region of 50 or 60 dogs bouncing around over the table in his scene, as he laughed and cackled gleefully. We can all hope for such reactions to our endeavors in VR.
