Authors: Jalal, Mona; Spjut, Josef; Boudaoud, Ben; Betke, Margrit
Date available: 2020-05-18
Date issued: 2019-06-16
Citation: Mona Jalal, Josef Spjut, Ben Boudaoud, M. Betke. 2019. "SIDOD: A Synthetic Image Dataset for 3D Object Pose Recognition with Distractors." IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, 2019-06-16 to 2019-06-20. https://doi.org/10.1109/CVPRW.2019.00063
Handle: https://hdl.handle.net/2144/40959
Abstract: We present a new, publicly available image dataset generated by the NVIDIA Deep Learning Data Synthesizer, intended for use in object detection, pose estimation, and tracking applications. The dataset contains 144k stereo image pairs that synthetically combine 18 camera viewpoints of three photorealistic virtual environments with up to 10 objects (chosen randomly from the 21 object models of the YCB dataset) and flying distractors. Object and camera pose, scene lighting, and the numbers of objects and distractors were randomized. Each provided view includes RGB, depth, segmentation, and surface-normal images, all at the pixel level. We describe our approach for domain randomization and provide insight into the decisions that produced the dataset.
Rights: Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.
Subjects: Three-dimensional displays; Pose estimation; Cameras; Image segmentation; Computer vision; Training; Lighting; NVIDIA Deep Learning Data Synthesizer; SIDOD; Virtual reality; YCB dataset
Title: SIDOD: a synthetic image dataset for 3D object pose recognition with distractors
Type: Conference materials
DOI: 10.1109/CVPRW.2019.00063
ORCID: 0000-0002-4491-6868 (Betke, M)
Record ID: 547475
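To make the randomization described in the abstract concrete, the following is a minimal Python sketch of how the scene parameters listed there (environment, camera viewpoint, object selection, lighting, distractor count) might be sampled. The counts (3 environments, 18 viewpoints, 21 YCB models, up to 10 objects) come from the abstract; the function name sample_scene, the lighting range, and the distractor-count range are illustrative assumptions, not the authors' actual generation code, which uses the NVIDIA Deep Learning Data Synthesizer.

import random

NUM_ENVIRONMENTS = 3   # photorealistic virtual environments (from abstract)
NUM_VIEWPOINTS = 18    # camera viewpoints (from abstract)
NUM_YCB_MODELS = 21    # YCB object models (from abstract)
MAX_OBJECTS = 10       # up to 10 objects per scene (from abstract)

def sample_scene(rng: random.Random) -> dict:
    """Draw one randomized scene configuration (illustrative sketch only)."""
    num_objects = rng.randint(1, MAX_OBJECTS)
    return {
        "environment": rng.randrange(NUM_ENVIRONMENTS),
        "viewpoint": rng.randrange(NUM_VIEWPOINTS),
        "objects": rng.sample(range(NUM_YCB_MODELS), num_objects),
        # Continuous and count factors the abstract says were randomized;
        # the ranges below are assumptions, not the dataset's actual values.
        "light_intensity": rng.uniform(0.2, 1.0),
        "num_distractors": rng.randint(0, 5),
    }

if __name__ == "__main__":
    rng = random.Random(0)
    print(sample_scene(rng))

Each sampled configuration would correspond to one rendered stereo view with its pixel-level RGB, depth, segmentation, and surface-normal outputs; the actual dataset fixes its 18 viewpoints per environment rather than drawing them independently per image, so this sketch only illustrates the spirit of the domain randomization.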