Aim of RGBD 2017
The advent of low-cost RGB-D sensors such as Microsoft's Kinect or Asus's Xtion Pro is transforming the computer vision world, as these sensors are being successfully used in numerous applications and research areas.
Many of these applications, such as gaming or human-computer interaction systems, rely on efficiently learning
a scene background model for detecting and tracking moving objects, which are then further processed and analyzed. Depth data
are particularly attractive and suitable for applications based on moving object detection, since they are not affected
by several problems typical of color-based imagery. However, depth data suffer from other types of problems, such as
depth camouflage or noisy depth sensor measurements, which limit the effectiveness of depth-only background modeling approaches.
The complementary nature of color and depth synchronized information acquired with RGB-D sensors poses new challenges
and design opportunities. New strategies are required that explore the effectiveness of combining depth- and
color-based features, or of jointly incorporating them into well-known moving object detection and tracking frameworks.
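As an illustration of the kind of color-depth fusion the paragraph above refers to, the following is a minimal sketch (not any specific method from the workshop) of a per-pixel background model that combines both cues. All function names, thresholds, and the running-average update rule are illustrative assumptions; the fallback for invalid depth readings reflects the sensor-noise issue mentioned earlier.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # One possible background update: exponential running average.
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(color_bg, depth_bg, color, depth,
                    color_thresh=30.0, depth_thresh=0.1):
    # Fuse per-pixel color and depth cues: a pixel is foreground if
    # either modality deviates from its background model. Pixels with
    # invalid depth (reported as 0 by Kinect-style sensors) fall back
    # to the color cue alone.
    color_diff = np.abs(color.astype(np.float64) - color_bg).max(axis=-1)
    depth_diff = np.abs(depth - depth_bg)
    depth_valid = depth > 0
    return (color_diff > color_thresh) | (depth_valid & (depth_diff > depth_thresh))

# Toy example: a 4x4 scene where an object appears in the lower-right corner.
color_bg = np.full((4, 4, 3), 100.0)
depth_bg = np.full((4, 4), 2.0)   # static background at 2 m
color = color_bg.copy()
depth = depth_bg.copy()
color[2:, 2:] = 200.0             # object changes color...
depth[2:, 2:] = 1.0               # ...and sits 1 m closer to the camera
mask = foreground_mask(color_bg, depth_bg, color, depth)
```

Combining the two cues with a logical OR is only one design choice; weighted fusion or a joint statistical model are equally valid alternatives, which is precisely the design space the workshop targets.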
The aim of the Workshop is to bring together researchers interested in background learning for detection and tracking from RGBD videos, in order to:
- disseminate their most recent research results,
- advocate and promote the research into scene background modeling and initialization,
- discuss rigorously and systematically potential solutions and challenges,
- promote new collaborations among researchers working in different application areas,
- share innovative ideas and solutions for exploiting the potential synergies emerging from the integration of different application domains.
Relevant topics concerning background learning for detection and tracking from RGBD videos include but are not limited to:
- New or revisited approaches, models, methods and algorithms
- Benchmark datasets
- Performance evaluation
- Applications
Prospective authors may conduct and report results of quantitative evaluation of their methods on the
SBM-RGBD dataset
(updated on July 13th, 2017) and participate in the SBM-RGBD challenge.