- Read Kinect RGB and depth camera images into OpenCV (ref); a minimal capture sketch follows this list
- Convert those to RGBD point clouds (see the back-projection sketch after this list)
- Convert those into coloured 3D mesh surfaces (see the PCL Point Cloud Library, http://www.pointclouds.org/documentation/tutorials/, http://www.cs.berkeley.edu/~jrs/meshs08/, http://www.cfd-online.com/Wiki/Mesh_generation); a PCL triangulation sketch follows this list
- Refine those using Linear, Butterfly, and Kobbelt √3 subdivision schemes (see the Elec 486 course pack, and possibly 1, 2); a midpoint-subdivision sketch follows this list
- Write a 3-dimensional audio reverberation/echo simulator (see 1, 2, 3; a direct-path mixing sketch follows this list)
- which takes as inputs:
- a set of audio recordings of songs and of individual notes
- a set of (x, y, z) locations for these recordings to emanate from
- an event notification whenever the user moves their hand over a (virtual) audio source location, so that the audio can be paused/resumed or modified to give the effect of virtual reality musical instruments
- Document everything, and submit steps 3 and 4 for inclusion in the OpenCV library, which currently has those functions on its to-do list.
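For step 1, here is roughly what the capture loop looks like. This is a minimal sketch assuming OpenCV was built with OpenNI support (newer OpenCV spells the constants cv::CAP_OPENNI and friends; the 2011-era C API called them CV_CAP_OPENNI etc.). The depth map arrives as a 16-bit single-channel image in millimetres alongside the registered colour image.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Open the Kinect through OpenCV's OpenNI backend.
    cv::VideoCapture capture(cv::CAP_OPENNI);
    if (!capture.isOpened()) return 1;

    cv::Mat depth, bgr;
    while (capture.grab()) {
        capture.retrieve(depth, cv::CAP_OPENNI_DEPTH_MAP);  // CV_16UC1, millimetres
        capture.retrieve(bgr, cv::CAP_OPENNI_BGR_IMAGE);    // CV_8UC3

        cv::imshow("depth", depth * 8);  // scale so the 16-bit range is visible
        cv::imshow("rgb", bgr);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
```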
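For step 2, the cloud falls out of back-projecting each depth pixel through the pinhole camera model: x = (u - cx)·z/fx, y = (v - cy)·z/fy. The helper name toCloud and the intrinsics below are assumptions; the values shown are commonly quoted defaults for the Kinect depth camera, and a real pipeline should substitute calibrated ones.

```cpp
#include <cstdint>
#include <opencv2/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Back-project a registered depth/colour pair into a coloured cloud.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr toCloud(const cv::Mat& depth,  // CV_16UC1, mm
                                               const cv::Mat& bgr)    // CV_8UC3
{
    const float fx = 525.0f, fy = 525.0f;  // focal lengths (assumed, not calibrated)
    const float cx = 319.5f, cy = 239.5f;  // principal point (assumed)

    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
    for (int v = 0; v < depth.rows; ++v) {
        for (int u = 0; u < depth.cols; ++u) {
            uint16_t d = depth.at<uint16_t>(v, u);
            if (d == 0) continue;              // zero marks invalid depth
            pcl::PointXYZRGB p;
            p.z = d * 0.001f;                  // millimetres -> metres
            p.x = (u - cx) * p.z / fx;         // pinhole back-projection
            p.y = (v - cy) * p.z / fy;
            cv::Vec3b c = bgr.at<cv::Vec3b>(v, u);
            p.b = c[0]; p.g = c[1]; p.r = c[2];
            cloud->push_back(p);
        }
    }
    return cloud;
}
```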
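For step 3, PCL's greedy projection triangulation is one of the tutorial-documented ways to turn a cloud into a triangle mesh; it needs per-point normals first. The helper name meshFromCloud and the parameter values are assumptions, starting points in the ballpark of the PCL tutorials that would need tuning per scene.

```cpp
#include <pcl/PolygonMesh.h>
#include <pcl/common/io.h>          // pcl::concatenateFields
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/gp3.h>

pcl::PolygonMesh meshFromCloud(pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud) {
    // Estimate per-point normals, which the triangulation requires.
    pcl::NormalEstimation<pcl::PointXYZRGB, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>);
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(20);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);

    // Glue points and normals into one cloud.
    pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr withNormals(new pcl::PointCloud<pcl::PointXYZRGBNormal>);
    pcl::concatenateFields(*cloud, *normals, *withNormals);

    // Greedy projection triangulation (parameters are rough starting points).
    pcl::GreedyProjectionTriangulation<pcl::PointXYZRGBNormal> gp3;
    pcl::search::KdTree<pcl::PointXYZRGBNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointXYZRGBNormal>);
    tree2->setInputCloud(withNormals);
    gp3.setSearchRadius(0.025);          // max edge length, metres
    gp3.setMu(2.5);                      // nearest-neighbour distance multiplier
    gp3.setMaximumNearestNeighbors(100);
    gp3.setInputCloud(withNormals);
    gp3.setSearchMethod(tree2);

    pcl::PolygonMesh mesh;
    gp3.reconstruct(mesh);
    return mesh;
}
```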
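For step 4, the three schemes share a structure and differ mainly in where new vertices land: Linear inserts plain edge midpoints, Butterfly displaces the midpoints using an eight-point stencil of surrounding vertices, and Kobbelt √3 inserts face centroids and flips edges instead. The sketch below is the linear case only, as the skeleton the others build on; subdivideLinear is a hypothetical helper.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
using Tri = std::array<uint32_t, 3>;

// One pass of linear (midpoint) subdivision: split every triangle into
// four by inserting one new vertex per edge, shared between neighbours.
void subdivideLinear(std::vector<Vec3>& verts, std::vector<Tri>& tris) {
    std::map<std::pair<uint32_t, uint32_t>, uint32_t> midpoint;
    auto mid = [&](uint32_t a, uint32_t b) -> uint32_t {
        std::pair<uint32_t, uint32_t> key = std::minmax(a, b);
        auto it = midpoint.find(key);
        if (it != midpoint.end()) return it->second;  // edge already split
        Vec3 m{(verts[a].x + verts[b].x) / 2,
               (verts[a].y + verts[b].y) / 2,
               (verts[a].z + verts[b].z) / 2};
        verts.push_back(m);
        uint32_t idx = uint32_t(verts.size() - 1);
        midpoint[key] = idx;
        return idx;
    };

    std::vector<Tri> out;
    out.reserve(tris.size() * 4);
    for (const Tri& t : tris) {
        uint32_t ab = mid(t[0], t[1]), bc = mid(t[1], t[2]), ca = mid(t[2], t[0]);
        out.push_back({t[0], ab, ca});
        out.push_back({ab, t[1], bc});
        out.push_back({ca, bc, t[2]});
        out.push_back({ab, bc, ca});  // centre triangle
    }
    tris = std::move(out);
}
```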
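For step 5, the crudest starting point is the direct path only: each source contributes its signal delayed by distance over the speed of sound and attenuated by 1/r. This is a toy sketch (mixSource is a hypothetical helper), not a reverberation model; echoes would come from adding reflected paths, for example image sources bounced off the room geometry recovered in step 3.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Mix one mono source into an output buffer with the delay and 1/r
// attenuation implied by its distance from the listener.
void mixSource(const std::vector<float>& source, Vec3 srcPos, Vec3 listener,
               std::vector<float>& out, float sampleRate = 44100.0f,
               float speedOfSound = 343.0f) {
    float dx = srcPos.x - listener.x;
    float dy = srcPos.y - listener.y;
    float dz = srcPos.z - listener.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    std::size_t delay = std::size_t(dist / speedOfSound * sampleRate);
    float gain = 1.0f / std::max(dist, 0.1f);  // clamp so nearby sources don't blow up

    for (std::size_t i = 0; i + delay < out.size() && i < source.size(); ++i)
        out[i + delay] += gain * source[i];
}
```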
On a semi-related note: about a year and a half ago I was looking at contributing to the Blender project. I didn't contribute anything, but I did study the code base a bit, and I think I could handle importing the mesh object and a skeleton (as provided by OpenCV or OpenNI) into Blender for animation purposes. I'm not sure how I would select points to attach the mesh to the skeleton, however; one common starting point is sketched below. Inspired by 1, 2.
http://kinect.dashhacks.com/kinect-guides/2011/02/16/import-kinect-data-blender-video-tutorial
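On the attach-points question, a standard starting point is to give each mesh vertex smooth per-bone weights that fall off with distance. Blender's own "bone heat" weighting is considerably more sophisticated, so the inverse-distance version below (skinWeights is a hypothetical helper) is only an illustration of the idea.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Weight each vertex by inverse distance to each bone centre, then
// normalize so every vertex's weights sum to one.
std::vector<std::vector<float>> skinWeights(const std::vector<Vec3>& verts,
                                            const std::vector<Vec3>& boneCentres) {
    std::vector<std::vector<float>> weights(
        verts.size(), std::vector<float>(boneCentres.size()));
    for (std::size_t v = 0; v < verts.size(); ++v) {
        float total = 0.0f;
        for (std::size_t b = 0; b < boneCentres.size(); ++b) {
            float dx = verts[v].x - boneCentres[b].x;
            float dy = verts[v].y - boneCentres[b].y;
            float dz = verts[v].z - boneCentres[b].z;
            float w = 1.0f / (std::sqrt(dx * dx + dy * dy + dz * dz) + 1e-6f);
            weights[v][b] = w;
            total += w;
        }
        for (float& w : weights[v]) w /= total;  // normalize per vertex
    }
    return weights;
}
```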