Wednesday, May 4, 2011

Sensor Fusion Video


Demo of sensor fusion running on an iPad 2. The front of this building has been preprocessed to serve as a visual marker. The iPad's camera detects the image to get an initial estimate of where the user is standing and how the iPad is oriented in space. After that, the camera and the gyros/accelerometer in the iPad work together to keep the overlay aligned, even when the building goes out of view or isn't detected by the vision algorithm.
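A rough sketch of the core idea (illustrative only, not the actual app code): whenever the vision side produces an absolute orientation from the facade, store the offset between it and the gyro-derived attitude; in between detections, apply that offset to the latest gyro reading. The names (R_camera, R_gyro, correction) and the simple offset scheme are just for illustration -- the real pipeline runs this through a Kalman filter, as described further down.

```cpp
#include <opencv2/core/core.hpp>

// Simplified drift-correction scheme (the actual app uses a Kalman filter).
// R_gyro   : device attitude from the gyros (e.g. CoreMotion's rotation matrix)
// R_camera : absolute orientation recovered from the building facade
struct AttitudeFuser {
    cv::Matx33d correction;

    AttitudeFuser() : correction(cv::Matx33d::eye()) {}

    // Called whenever the vision algorithm detects the facade.
    void onCameraFix(const cv::Matx33d& R_camera, const cv::Matx33d& R_gyro) {
        // Offset that maps the gyro frame onto the camera-derived world frame.
        correction = R_camera * R_gyro.t();
    }

    // Called every frame, using only the latest gyro reading.
    cv::Matx33d fusedAttitude(const cv::Matx33d& R_gyro) const {
        return correction * R_gyro;
    }
};
```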

Right now it's not rendering anything interesting-- the red-green-blue lines represent the x-y-z axes as calculated by the camera and sensors. The background grid is drawn as a large cube surrounding the user-- you can see the corners when the camera pans up and to the left. The white rectangle with the X in the center only shows up when the camera detects the building facade; as you can see, it isn't detecting the facade every frame, but it doesn't have to, since the gyros provide plenty of readings to fill in between the camera estimates. As a result, the animation runs at a nice smooth 60fps.
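In case it helps anyone building something similar, here's roughly what the rendering end looks like: the fused rotation gets packed into a column-major 4x4 and handed to the fixed-function pipeline via glLoadMatrixf (OpenGL ES 1.1 on the iPad 2), and the axes are just three colored line segments. This is a hand-written sketch rather than a dump of the app; translation is omitted for brevity.

```cpp
#include <OpenGLES/ES1/gl.h>

// Build a column-major OpenGL modelview matrix from a world-to-camera
// rotation. OpenGL expects column-major storage: m[col*4 + row].
// Translation is left at the origin for brevity.
static void loadModelview(const float R[3][3]) {
    GLfloat m[16] = { 0 };
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            m[col * 4 + row] = R[row][col];
    m[15] = 1.0f;

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}

// Draw the three unit axes as colored lines (red = x, green = y, blue = z).
static void drawAxes() {
    static const GLfloat verts[] = {
        0,0,0,  1,0,0,    // x axis
        0,0,0,  0,1,0,    // y axis
        0,0,0,  0,0,1 };  // z axis
    static const GLfloat colors[] = {
        1,0,0,1, 1,0,0,1,
        0,1,0,1, 0,1,0,1,
        0,0,1,1, 0,0,1,1 };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glColorPointer(4, GL_FLOAT, 0, colors);
    glDrawArrays(GL_LINES, 0, 6);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```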

Pipeline as of now: FAST corner detector → Ferns keypoint classifier → RANSAC homography estimator → Kalman filter (with CoreMotion attitude matrix) → OpenGL modelview matrix
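For anyone trying to reproduce the vision half, the sketch below shows where the FAST and RANSAC homography stages sit, using stock OpenCV calls. The Ferns classification step in between is glossed over here -- assume something else has already turned the FAST corners into 2D-2D correspondences between the preprocessed facade image and the live frame. It's meant to show how the pieces fit together, not the exact code running in the video.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Stage 1: FAST corners on the incoming grayscale camera frame.
// These are what get fed to the Ferns keypoint classifier.
std::vector<cv::KeyPoint> detectCorners(const cv::Mat& frameGray) {
    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(frameGray, keypoints, /*threshold=*/30);
    return keypoints;
}

// Stage 3: given matched points from the classifier (marker image
// coordinates <-> frame coordinates), estimate the marker-to-frame
// homography with RANSAC. An empty result means "no fix this frame" --
// the gyros carry the overlay until the next detection.
cv::Mat estimateMarkerHomography(const std::vector<cv::Point2f>& markerPts,
                                 const std::vector<cv::Point2f>& framePts) {
    if (markerPts.size() < 4 || markerPts.size() != framePts.size())
        return cv::Mat();
    return cv::findHomography(markerPts, framePts, CV_RANSAC, 3.0);
}
```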

1 comment:

  1. Wow! Congratulations on your excellent results so far. Your ideas are very ambitious.

    I am working with OpenCV on iOS as well. I found that using SURF was too slow to be used in real time and so I have begun using FAST.

    I noticed that you use FERNS for your matching. I have had a look at this in OpenCV but am lost as to how to actually use it. I was wondering whether you would have any advice or code samples related to this.

    My email is aaron at two-bulls.com if you would like to contact me privately. Best of luck :)
