Generating Synthetic RGB-D Datasets for Texture-less Surfaces Reconstruction

Samples from the synthetic dataset: RGB-D data of several common 3D objects rendered in Blender without textures and under varying lighting. Corresponding depth maps and surface normal maps are provided.


The state-of-the-art approaches for monocular 3D reconstruction mainly focus on datasets with highly textured images. Most of these methods are trained on datasets like ShapeNet, which provide well-textured objects. However, many objects in natural scenes are texture-less, making them difficult to reconstruct. Unlike textured surfaces, reconstruction of texture-less surfaces has not received as much attention, mainly because of a lack of large-scale annotated datasets. Some recent works have also focused on texture-less surfaces; many of these are trained on a small real-world dataset containing 26k images of 5 different texture-less clothing items. To facilitate further research in this direction, we present an extensible dataset generation framework for texture-less RGB-D data. We also make available a large dataset containing 364k images with corresponding ground-truth depth maps and surface normal maps. In addition to clothing items, our dataset includes images of other everyday objects, including animals, furniture, statues, vehicles, and other miscellaneous items. There are 2635 unique 3D models and 48 different objects in total, including 13 main objects from the ShapeNet dataset. Our framework also allows automatic generation of more data from any 3D model, including those obtained directly from the ShapeNet repository. This dataset will aid future research in reconstructing texture-less objects across a wide range of object categories.
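Since every rendered image ships with both a depth map and a surface normal map, the two ground-truth modalities are geometrically linked. As a rough illustration (not part of the dataset tooling), normals can be approximated from a depth map with image-space finite differences; this sketch assumes an orthographic camera and unit pixel spacing, whereas the dataset's own normal maps are rendered directly in Blender:

```python
import numpy as np

def normals_from_depth(depth):
    """Approximate unit surface normals from a depth image (H x W array)."""
    dz_dy, dz_dx = np.gradient(depth)  # finite-difference depth gradients
    # A surface z(x, y) has (unnormalized) normal (-dz/dx, -dz/dy, 1).
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # normalize to unit length
    return n

# A flat depth plane: every normal should point straight at the camera, (0, 0, 1).
flat = np.full((4, 4), 2.0)
n = normals_from_depth(flat)
print(n.shape)  # (4, 4, 3)
```

This is only a consistency check between the two modalities; for training, the rendered normal maps should be used directly.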


| Category  | Objects                                                                                          | # of Objects |
|-----------|--------------------------------------------------------------------------------------------------|--------------|
| animals   | asian_dargon, bunny, cats, dragon, duck, pig                                                     | 6            |
| clothing  | cape, dress, hoodie, jacket, shirt, suit, tracksuit, tshirt                                      | 8            |
| furniture | armchair, bed, chair, rocking_chair, sofa, table                                                 | 6            |
| misc      | diego, kettle, plants, skeleton, teapot                                                          | 5            |
| statues   | armadillo, buddha, lucy, roman, thai_statue                                                      | 5            |
| vehicles  | bicycle, car, jeep, ship, spaceship                                                              | 5            |
| shapenet  | plane, bench, cabinet, car, chair, display, lamp, speaker, rifle, sofa, table, phone, watercraft | 13 × 200     |

Data Sources

These models were obtained from several sources in the public domain, as listed in the following subsections.

The Stanford 3D Scanning Repository

Models obtained from this repository include 5 Stanford models and 2 XYZ RGB models: bunny, dragon, buddha, armadillo, lucy, asian_dargon, and thai_statue.

Keenan’s 3D Model Repository

This repository was published by Keenan Crane of Carnegie Mellon University under the CC0 1.0 Universal (CC0 1.0) Public Domain License. duck, pig, skeleton and diego were obtained from here.

Other Sources

The teapot is Martin Newell’s Utah Teapot, and the remaining 24 models were all obtained for free from CGTrader with a Royalty Free License. A complete list of sources for each individual model can be found here.

Data Viewer

[Interactive viewer: each sample is shown as its rendered image alongside the corresponding depth map and normal map.]



The trained models used to produce the benchmark results reported in the paper are available for download here (480 MB). Download this archive and use the bash script in the source code repository to reproduce the reported results yourself.

Alternatively, if you would like to train the networks from scratch, you can use the script in the same repository.

Source Code

All code used for data generation and benchmarking as well as PyTorch data loaders for reading the dataset are available on GitHub.
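For a quick sanity check of a download before wiring up the full PyTorch loaders, a minimal index over the files can be built with the standard library alone. The directory layout and file names below (`<object>/rgb_NNNN.png` with matching `depth_` and `normal_` files) are assumptions for illustration only; the authoritative layout is defined by the data loaders in the GitHub repository:

```python
from pathlib import Path
import tempfile

def build_index(root):
    """Pair each RGB frame with its depth and normal maps by file name."""
    samples = []
    for rgb in sorted(Path(root).glob("*/rgb_*.png")):
        idx = rgb.stem.split("_")[1]  # e.g. "rgb_0000" -> "0000"
        samples.append({
            "rgb": rgb,
            "depth": rgb.with_name(f"depth_{idx}.png"),
            "normal": rgb.with_name(f"normal_{idx}.png"),
        })
    return samples

# Demonstrate on a throwaway directory with two dummy frames of one object.
root = Path(tempfile.mkdtemp())
(root / "bunny").mkdir()
for i in range(2):
    for kind in ("rgb", "depth", "normal"):
        (root / "bunny" / f"{kind}_{i:04d}.png").touch()

index = build_index(root)
print(len(index))  # 2
```

A real loader would additionally decode the PNGs (e.g. with Pillow or OpenCV) and convert depth values back to metric units; that logic lives in the repository's data loaders.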


Our datasets are available under the CC BY 4.0 license. The source code is provided under the MIT License.


This dataset was collected as part of a research project at the German Research Center for Artificial Intelligence (DFKI).