
I have worked with a Qualisys motion capture system. It was an optical system with 8 cams, capable of recording 3 actors at once (we didn't have more suits to play with). I have only good things to say about their system, software and support. From what I have heard, OptiTrack lacks in technical specs compared to other systems, but is a more affordable choice. I wouldn't waste my money on markerless systems.

For facial motion capture I REALLY like Faceshift. The beauty is that it's markerless, uses a Kinect or Kinect-like sensor (go with PrimeSense's camera), and you can use any kind of rig: bones, blendshapes, a convoluted system of controllers, it doesn't really matter. It has a series of shapes that it expects your rig to make, and you tell it what controls what to make those shapes. Their standalone app is great, but as standalone apps go you're exporting and importing out of your actual animation package, which is a bit of a pain. Luckily their Maya plug-in rocks and gives you all of the advantages of the standalone app right inside Maya.

If you have 3ds Max, Biped and CAT make dumping animation onto your rig fairly painless, and both have some pretty decent mocap clean-up tools. They're good for cleaning up mocap but not so great to use as animation rigs. But if you aren't using those two…
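To make the "you tell it what controls what" idea concrete, here's a toy Python sketch (all names are invented, not Faceshift's real API): the tracker outputs a weight per expression shape every frame, and a binding table maps each shape onto whatever drives that expression on your rig, be it a blendshape, a bone or a controller.

```python
# Hypothetical sketch of per-shape rig bindings; nothing here is Faceshift's
# actual API, it just illustrates the mapping idea.

rig = {"jaw_open": 0.0, "smile_L": 0.0}

# Binding table: tracked shape name -> (rig attribute, multiplier)
bindings = {
    "jawOpen":   ("jaw_open", 1.0),
    "smileLeft": ("smile_L",  0.8),   # damp an over-eager shape
}

def apply_frame(tracked, rig, bindings):
    """Copy tracked weights onto rig attributes; unbound shapes are ignored."""
    for shape, weight in tracked.items():
        if shape in bindings:
            attr, mult = bindings[shape]
            rig[attr] = weight * mult
    return rig

frame = {"jawOpen": 0.7, "smileLeft": 0.5, "browUp": 0.2}  # browUp has no binding
print(apply_frame(frame, rig, bindings))  # {'jaw_open': 0.7, 'smile_L': 0.4}
```

The point of the multiplier is the same as the cleanup step in the real tools: the tracked weight rarely maps one-to-one onto your rig's controls.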

ipi Soft takes a while to calibrate, record and process the video, and it doesn't always work the first time around, but the more you work with it the better you get at working around its quirks. It has some minor clean-up and retargeting features, but you really should use MotionBuilder to get the ipisoft motion onto your rig. If you have Maya, I suggest taking advantage of HumanIK's retargeting system; it's like MotionBuilder stuffed into Maya.
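For a feel of what retargeting tools like HumanIK or MotionBuilder are doing, here's a deliberately naive sketch (not either tool's actual solver): carry each joint's delta from its rest pose across, so a source actor captured in an A-pose doesn't bend a target rig that was built in a T-pose. Planar angles in degrees keep the illustration simple; real solvers work with full 3D rotations.

```python
# Naive retargeting sketch, assuming per-joint planar angles in degrees.
# Real retargeters solve full 3D rotations (and limb proportions), but the
# rest-pose-offset idea is the same.

def retarget(source_anim, source_rest, target_rest):
    """Per joint: target = target_rest + (source_anim - source_rest)."""
    return {
        joint: target_rest[joint] + (angle - source_rest[joint])
        for joint, angle in source_anim.items()
    }

# Source actor's rest pose has arms lowered (A-pose); target rig is a T-pose.
source_rest = {"shoulder": 45.0, "elbow": 0.0}
target_rest = {"shoulder": 90.0, "elbow": 0.0}

# One captured frame: the actor raised the arm 15 degrees and bent the elbow.
frame = {"shoulder": 60.0, "elbow": 20.0}

print(retarget(frame, source_rest, target_rest))
# {'shoulder': 105.0, 'elbow': 20.0} -- the 15-degree raise is preserved
```

Copying raw angles instead of deltas would slam the T-pose rig's shoulder down to 60 degrees; that kind of rest-pose mismatch is exactly what the characterization step in these tools exists to absorb.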

We use ipi Soft at work; it's not perfect, but you can get some really good results. You do need to realize, though, that it's a depth sensor, one that is generating a point cloud and then trying to figure out where your joints are. If you cross your arms on your chest, the cloud becomes a big fused nightmare and it can't figure out which arm is doing what. If you only have one sensor, anything that falls in your "shadow" won't be seen. You can get around this by setting up a second sensor: it helps fill in the shadow area where a single sensor just can't see, and makes it possible to turn around without it losing track of your arms or legs.
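The shadow problem is pure geometry. Here's a toy 2D top-down sketch (nothing to do with ipi Soft's actual code) showing why a second sensor off to the side recovers a point the front sensor can't see: a point is invisible to a sensor when another body point sits in front of it along the same line of sight.

```python
# Toy occlusion check in a 2D top-down view: a target point is shadowed if
# any blocker point lies near the line of sight between sensor and target.
# Purely illustrative geometry, not a real depth-sensor pipeline.

def visible(sensor, target, blockers, eps=0.05):
    """True if 'target' can be seen from 'sensor' past the 'blockers'."""
    sx, sy = sensor
    dx, dy = target[0] - sx, target[1] - sy
    length2 = dx * dx + dy * dy
    for bx, by in blockers:
        # parametric position of the blocker projected onto the sight ray
        t = ((bx - sx) * dx + (by - sy) * dy) / length2
        if 0.0 < t < 1.0:  # blocker sits between sensor and target
            px, py = sx + t * dx, sy + t * dy  # closest point on the ray
            if (bx - px) ** 2 + (by - py) ** 2 < eps ** 2:
                return False  # ray passes through the blocker: shadowed
    return True

torso = (0.0, 2.0)          # blocks the view
arm_behind = (0.0, 2.5)     # arm point directly behind the torso

front_sensor = (0.0, 0.0)   # single sensor straight ahead
side_sensor = (3.0, 2.0)    # second sensor off to the side

print(visible(front_sensor, arm_behind, [torso]))  # False: in the shadow
print(visible(side_sensor, arm_behind, [torso]))   # True: shadow filled in
```

Merging the two sensors' point clouds is what fills the fused-arms and turn-around cases: what one viewpoint loses, the other usually still sees.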
