Toby Terpstra, Tomas Owens, Alireza Hashemian, & Tilo Voitel
Abstract: Depth mapping sensors, such as those found in contemporary entertainment systems, can be used to record three-dimensional human motion. This recorded motion can then be transferred to a three-dimensional human model, thereby creating 3D animation with accurate human motion. These sensors and their recording software offer a cost-effective alternative to traditional motion capture studio solutions. In addition to recording human motion, the depth mapping of any captured motion, as well as the surrounding environment, can be recorded and exported as a point cloud. This paper assesses the accuracy of three-dimensional depth mapping data recorded using multiple Microsoft Kinect v2 sensors at three different sites. Point clouds exported from the Kinect v2 sensors' depth mapping were extracted at single points in time and compared to LiDAR data collected with a laser scanner. An average of 82% of the depth sensor data was found to be within ±1 inch (2.5 cm) of the laser scanning data, with a standard deviation of 1.5%, and an average of 93% was found to be within ±1.5 inches (3.8 cm) of the laser scanning data, with a standard deviation of 1.1%.
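The tolerance percentages reported above can be illustrated with a nearest-neighbor comparison between two point clouds. The sketch below is not the authors' code: the function name, the brute-force distance computation, and the synthetic data are all assumptions introduced for illustration only.

```python
# Hedged sketch: fraction of sensor points whose nearest reference
# (LiDAR) point lies within a distance tolerance, in meters.
import numpy as np

def fraction_within(sensor_pts, lidar_pts, tol):
    """Return the fraction of sensor points within `tol` of the LiDAR cloud.

    Uses a brute-force nearest-neighbor search: for each sensor point,
    compute its distance to every LiDAR point and keep the minimum.
    """
    # Pairwise distances: shape (n_sensor, n_lidar)
    diffs = sensor_pts[:, None, :] - lidar_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    nearest = dists.min(axis=1)  # nearest LiDAR distance per sensor point
    return float(np.mean(nearest <= tol))

# Toy example: a synthetic "LiDAR" cloud and a noisy "sensor" cloud.
rng = np.random.default_rng(0)
lidar = rng.uniform(0.0, 1.0, size=(2000, 3))              # reference cloud
sensor = lidar[:500] + rng.normal(0.0, 0.01, size=(500, 3))  # 1 cm noise

pct_1in = fraction_within(sensor, lidar, tol=0.025)   # ±1 inch ~ 2.5 cm
pct_15in = fraction_within(sensor, lidar, tol=0.038)  # ±1.5 inches ~ 3.8 cm
```

In practice a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force search for clouds of realistic size, and the clouds would first be registered into a common coordinate frame.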