Simulation and Transfer Learning for Deep 3D Geometric Data Analysis

PI: Justin Solomon, Department of Electrical Engineering and Computer Science, MIT
PI: Evgeny Burnaev, Skoltech Center for Computational Data-Intensive Science and Engineering, Skoltech

Every field of human activity is undergoing a “data revolution.” Old business models are being reformulated as data science and machine learning problems. The rising complexity of models and the demand for insight require data analysis tools applicable to more than 2D images: future machine perception systems will need 3D data processing. Diverse applications, including analysis of human movements, faces, MRI and CT scans and other biomedical images, shelf placement in retail, remote sensing, and automatic reconstruction of 3D shapes from imaging data (e.g., stereo photography, multiview photography, and LIDAR), require advanced 3D data analysis for understanding and semantic modeling. The success of 3D methods has largely been hampered by the lack of labeled datasets suitable for machine learning, as well as by the scarcity of neural network architectures capable of efficiently processing 3D data in its different formats (point clouds, meshes). The goal of this project is to address both aspects of the problem. We will develop new deep learning architectures adapted to 3D data, together with approaches to training such models on synthetic and human-made 2D and 3D labeled geometric data. We will explore transfer learning to address the reality gap between “clean” artificial data and noisy scans of real environments. Application areas include, but are not limited to, 3D CAD model retrieval and classification, generation of 3D models from 2D line drawings, object recognition in remote sensing, and segmentation and classification of 3D MRI medical data.
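One ingredient that architectures for point clouds typically need, and that the proposal's goal of "architectures adapted to 3D data" alludes to, is invariance to the ordering of the points. The toy NumPy sketch below (illustrative only; the function and weights are hypothetical, not part of the proposal) shows the standard construction: apply the same transformation to every point, then aggregate with a symmetric function such as max.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp_maxpool(points, W, b):
    """Apply one shared linear+ReLU layer to every point, then max-pool.

    points: (N, 3) array, an unordered point cloud.
    W, b: hypothetical learned weights of shape (3, F) and (F,).
    Max-pooling over the point axis makes the output a permutation-invariant
    global descriptor, the core idea of PointNet-style networks.
    """
    h = np.maximum(points @ W + b, 0.0)  # per-point features, shape (N, F)
    return h.max(axis=0)                 # global descriptor, shape (F,)

W = rng.normal(size=(3, 8))
b = rng.normal(size=8)
cloud = rng.normal(size=(100, 3))

f1 = shared_mlp_maxpool(cloud, W, b)
f2 = shared_mlp_maxpool(cloud[rng.permutation(100)], W, b)
assert np.allclose(f1, f2)  # reordering the points leaves the descriptor unchanged
```

In a full model, the shared layer would be a deeper MLP and the pooled descriptor would feed a classification or segmentation head; the symmetric pooling step is what lets the same network accept clouds of any size and ordering.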
