FreiHAND Dataset

In our recent publication we presented the challenging FreiHAND dataset, a dataset for hand pose and shape estimation from a single color image, which can serve both as a training and a benchmarking dataset for deep learning algorithms. It contains 4 × 32560 = 130240 training samples and 3960 evaluation samples. Each training sample provides:
- RGB image (224x224 pixels)
- Hand segmentation mask (224x224 pixels)
- Intrinsic camera matrix K
- Hand scale (metric length of a reference bone)
- 3D keypoint annotations for 21 hand keypoints
- 3D shape annotation
The training set contains 32560 unique samples, each post-processed in 4 different ways to remove the green-screen background. Each evaluation sample provides an RGB image, the hand scale, and the intrinsic camera matrix. The keypoint and shape annotations are withheld; scoring of algorithms is handled through our Codalab evaluation server. For additional information, please visit our project page.
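Since each sample ships with an intrinsic camera matrix K, 3D keypoints given in camera coordinates can be projected into the 224×224 image with a standard pinhole-camera projection. A minimal sketch of that step (the camera parameters and keypoint values below are hypothetical illustrations, not taken from the dataset):

```python
import numpy as np

def project_points(xyz, K):
    """Project 3D points (N, 3) in camera coordinates into pixel
    coordinates (N, 2) using the intrinsic camera matrix K (3, 3)."""
    uvw = xyz @ K.T                  # apply the intrinsics
    return uvw[:, :2] / uvw[:, 2:]   # perspective divide by depth

# Hypothetical pinhole camera: focal length 500 px, principal point
# at the center of a 224x224 image.
K = np.array([[500.0,   0.0, 112.0],
              [  0.0, 500.0, 112.0],
              [  0.0,   0.0,   1.0]])

# One hypothetical keypoint 0.5 m in front of the camera.
xyz = np.array([[0.1, -0.05, 0.5]])

uv = project_points(xyz, K)  # -> [[212.  62.]]
```

The same function applied to a (21, 3) keypoint array yields the pixel locations of all 21 hand keypoints at once.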

Examples

[Example images: four rows, each showing two samples rendered as RGB, RGB + keypoints, and RGB + shape.]



Terms of use

This dataset is provided for research purposes only and without any warranty. Any commercial use is prohibited. If you use the dataset or parts of it in your research, you must cite the respective paper.

@InProceedings{Freihand2019,
  author    = {Christian Zimmermann and Duygu Ceylan and Jimei Yang and Bryan Russell and Max Argus and Thomas Brox},
  title     = {FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape from Single RGB Images},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  year      = {2019},
  url       = {https://lmb.informatik.uni-freiburg.de/projects/freihand/}
}



Dataset

For examples of how to work with the dataset, visit the accompanying GitHub repository.

Download FreiHAND Dataset v2 (3.7GB)



Contact

For questions about the dataset, please contact Christian Zimmermann.