Accuracy of Linear Measurements using Google Maps in FARO Zone 3D for Forensic Reconstruction of Outdoor Scenes
Eugene Liscio, Quan Le, Ryan Rider, & Tilo Voitel
Abstract: FARO Zone 3D (FZ3D) is a relatively new software package designed for crash reconstructionists and crime scene investigators. The program includes a feature that allows scaled Google Maps images to be imported into its workspace so that satellite imagery of an area may be measured, traced, or used as a reference for 3D modeling or animations. This is useful when a site inspection is necessary for a reconstruction but cannot be carried out. An understanding of the accuracy of this method gives reconstructionists and investigators a basis for judging measurements made this way in FZ3D. This study compares linear measurements taken from Google Maps images with measurements taken with a terrestrial laser scanner. A total of 800 measurements were taken across 8 roadways in North America by 10 participants. The average percentage difference between the laser scanner measurements and the participant measurements was 0.894%. The mean absolute error (MAE) across all the data was 0.350 m (1.148 ft), with a standard deviation (SD) of 0.296 m (0.971 ft). Distance measurements of less than 5 m showed a much larger average percentage difference of 8.286%. These findings are comparable to previous studies and are reasonable for the general mapping of large outdoor scenes.
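The error metrics reported above can be illustrated with a minimal sketch. The measurement pairs below are invented for demonstration (the study's raw data are not reproduced here); the metrics are computed as percentage difference relative to the mean of each pair, plus the MAE and SD of the absolute errors.

```python
# Hypothetical illustration of the reported error metrics; the
# (laser scanner, participant) measurement pairs are invented.
import statistics

def percent_difference(a, b):
    """Percentage difference between two measurements, relative to their mean."""
    return abs(a - b) / ((a + b) / 2) * 100

# Invented example pairs: (laser scanner, participant), in metres.
pairs = [(25.40, 25.10), (102.75, 103.20), (4.80, 5.25)]

abs_errors = [abs(ref - meas) for ref, meas in pairs]
mae = statistics.mean(abs_errors)            # mean absolute error
sd = statistics.stdev(abs_errors)            # sample standard deviation
pct = [percent_difference(ref, meas) for ref, meas in pairs]

print(f"MAE: {mae:.3f} m, SD: {sd:.3f} m")
print(f"Mean % difference: {statistics.mean(pct):.3f}%")
```

Note how the short 4.80 m pair dominates the percentage difference even though its absolute error is similar to the others, mirroring the study's finding that measurements under 5 m show a much larger average percentage difference.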
Depth Mapping Accuracy Evaluation of the Microsoft Kinect v2 Motion Capture Sensor
Toby Terpstra, Tomas Owens, Alireza Hashemian, & Tilo Voitel
Abstract: Depth mapping sensors, such as those found in contemporary entertainment systems, can be used to record three-dimensional human motion. This recorded motion can then be transferred to a three-dimensional human model, thereby creating 3D animation with accurate human motion. These sensors and their recording software offer a cost-effective alternative to traditional motion capture studio solutions. In addition to human motion, the depth map of any captured motion, as well as the surrounding environment, can be recorded and exported as a point cloud. This paper assesses the accuracy of three-dimensional depth mapping data recorded using multiple Microsoft Kinect v2 sensors at three different sites. Point clouds exported from the Kinect v2 sensors' depth maps were extracted at single points in time and compared to LiDAR data collected by a laser scanner. An average of 82% of the depth sensor data was found to be within ±1 inch (2.5 cm) of the laser scanning data, with a standard deviation of 1.5%, and an average of 93% was found to be within ±1.5 inches (3.8 cm), with a standard deviation of 1.1%.
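The tolerance-band comparison described above can be sketched as follows. This is not the authors' actual pipeline: a real comparison would use nearest-neighbour search between full point clouds, and the per-point deviations below are invented for demonstration.

```python
# Minimal sketch of a tolerance-band comparison: given per-point
# deviations between depth-sensor points and their nearest laser-scan
# reference points, report the fraction within each tolerance.
# Deviation values are invented for demonstration.

def fraction_within(deviations_m, tol_m):
    """Fraction of per-point deviations (metres) within a tolerance (metres)."""
    return sum(abs(d) <= tol_m for d in deviations_m) / len(deviations_m)

# Invented per-point deviations (metres) between Kinect and LiDAR data.
deviations = [0.004, -0.012, 0.020, 0.031, -0.008, 0.050, 0.015, -0.024]

print(f"within 2.5 cm: {fraction_within(deviations, 0.025):.0%}")
print(f"within 3.8 cm: {fraction_within(deviations, 0.038):.0%}")
```

Repeating this calculation per capture and per site, then averaging, yields figures of the kind reported in the abstract (e.g. 82% within ±1 inch, 93% within ±1.5 inches).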