The state-of-the-art approaches to monocular 3D reconstruction mainly focus on datasets of highly textured images. Most of these methods are trained on datasets such as ShapeNet, which provide renderings of well-textured objects. In natural scenes, however, many objects are texture-less, which makes them difficult to reconstruct. Unlike textured surfaces, the reconstruction of texture-less surfaces has received comparatively little attention, mainly due to the lack of large-scale annotated datasets. Some recent works have addressed texture-less surfaces, but many of them are trained on a small real-world dataset containing 26k images of 5 different texture-less clothing items. To facilitate further research in this direction, we present an extensible dataset generation framework for texture-less RGB-D data. We also make available a large dataset containing 364k images with corresponding ground-truth depth maps and surface normal maps. In addition to clothing items, our dataset includes images of other everyday objects, including animals, furniture, statues, vehicles, and other miscellaneous items. There are 2635 unique 3D models and 48 different objects in total, including 13 main objects from the ShapeNet dataset. Our framework also allows automatic generation of more data from any 3D model, including models obtained directly from the ShapeNet repository. This dataset will aid future research on reconstructing texture-less objects across a wide range of object categories.
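Since every image in the dataset ships with both a ground-truth depth map and a surface normal map, the two annotations can be cross-checked against each other. The sketch below is our own illustration (not part of the dataset tooling): it estimates per-pixel normals from a depth map using central finite differences, assuming a simplified camera model in which the `fx` and `fy` scale factors stand in for the intrinsics.

```python
import math

def normals_from_depth(depth, fx=1.0, fy=1.0):
    """Estimate per-pixel surface normals from a depth map.

    `depth` is a list of rows of floats. `fx` and `fy` are assumed
    scale factors standing in for the camera intrinsics; border pixels
    are left with a default normal of (0, 0, 1).
    """
    h, w = len(depth), len(depth[0])
    normals = [[(0.0, 0.0, 1.0)] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Central finite differences of depth along x and y.
            dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            # The surface normal is proportional to (-dz/dx, -dz/dy, 1).
            nx, ny, nz = -dzdx * fx, -dzdy * fy, 1.0
            norm = math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[y][x] = (nx / norm, ny / norm, nz / norm)
    return normals
```

On a planar depth ramp the recovered normal tilts against the ramp direction, which gives a quick sanity check that a depth/normal pair is mutually consistent.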
[Table: object categories with the objects and number of objects per category; the 13 ShapeNet objects contribute 13 x 200 models.]
These models were obtained from several sources in the public domain, as listed in the following subsections.
Models obtained from this repository include 5 Stanford models and 2 XYZ RGB models.
This repository was published by Keenan Crane of Carnegie Mellon University under the CC0 1.0 Universal (CC0 1.0) Public Domain Dedication.
Several models, including diego, were obtained from here.
The teapot model is Martin Newell's Utah Teapot; the remaining 24 models were obtained free of charge from CGTrader under a Royalty Free License. A complete list of sources for each individual model can be found here.
The trained models used to produce the benchmarking results on this dataset are available for download here (480 MB). Download the archive and run the eval.sh bash script in the source code repository to reproduce the results reported in the paper. Alternatively, if you would like to train the networks from scratch, use the train.py script in the same repository.
All code used for data generation and benchmarking, as well as PyTorch data loaders for reading the dataset, is available on GitHub.
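The official PyTorch data loaders define the dataset's actual on-disk layout; as a rough illustration of what such a loader has to do, the stdlib-only sketch below pairs each RGB image with its depth and normal maps by a shared filename stem. The suffixes `_rgb.png`, `_depth.png`, and `_normals.png` are assumptions for this example, not the dataset's real naming scheme.

```python
from pathlib import Path

def index_samples(root):
    """Collect (rgb, depth, normals) file triples under `root`.

    The `_rgb.png` / `_depth.png` / `_normals.png` suffixes are
    hypothetical; consult the dataset's GitHub loaders for the
    real directory layout and filenames.
    """
    samples = []
    for rgb in sorted(Path(root).rglob("*_rgb.png")):
        stem = rgb.name[: -len("_rgb.png")]
        depth = rgb.with_name(stem + "_depth.png")
        normals = rgb.with_name(stem + "_normals.png")
        # Keep only samples for which all three annotations exist.
        if depth.exists() and normals.exists():
            samples.append((rgb, depth, normals))
    return samples
```

A list built this way can be wrapped directly in a `torch.utils.data.Dataset` whose `__getitem__` loads and decodes the three files of one triple.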
This dataset was collected as part of a research project at the German Research Center for Artificial Intelligence (DFKI).