Author: Victor Hugo Ayma Quirita
Original title: Collaborative Face Tracking: A Framework for Long-Term Face Tracking
Visual tracking is fundamental to several computer vision applications. Face tracking in particular is challenging because of variations in facial appearance due to age, ethnicity, gender, facial hair, and cosmetics, as well as appearance variations in long video sequences caused by facial deformations, lighting conditions, abrupt movements, and occlusions. Trackers are generally robust to some of these factors but fail to achieve satisfactory results when several of them occur at the same time. An alternative is to combine the results of different trackers to obtain more robust outcomes. This work fits into that context and proposes a new method for scalable, robust, and accurate tracker fusion that can combine trackers regardless of their underlying models. The method also supports integrating face detectors into the fusion model to increase tracking accuracy. The proposed method was implemented for validation purposes and tested in different configurations combining up to five trackers and one face detector. In tests on four video sequences with different imaging conditions, the method outperformed the trackers used individually or in combination.
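To make the fusion idea concrete, the following is a minimal illustrative sketch of one simple way to combine the outputs of several trackers: a confidence-weighted average of their bounding boxes. The abstract does not specify the actual fusion model, so the function name, the `(box, confidence)` interface, and the weighting scheme here are assumptions for illustration only, not the author's method.

```python
# Hypothetical sketch: fuse per-tracker bounding boxes by confidence-weighted
# averaging. This is NOT the fusion model of the thesis, only an illustration
# of the general idea of combining tracker outputs.

def fuse_boxes(predictions):
    """Fuse a list of ((x, y, w, h), confidence) pairs into one box.

    Each pair is the output of one tracker for the current frame.
    Returns the confidence-weighted average box as a 4-tuple.
    """
    total = sum(conf for _, conf in predictions)
    if total == 0:
        raise ValueError("all trackers reported zero confidence")
    # Weighted average of each box coordinate (x, y, w, h)
    return tuple(
        sum(box[i] * conf for box, conf in predictions) / total
        for i in range(4)
    )

# Example: three trackers; the outlier with low confidence barely moves
# the fused result.
preds = [((10, 10, 50, 50), 0.9),
         ((12, 11, 52, 48), 0.8),
         ((40, 40, 50, 50), 0.1)]
fused = fuse_boxes(preds)
print(fused)
```

A face detector could be folded into the same scheme by treating its detections as an additional, typically higher-confidence input, which matches the abstract's point that detector integration can increase tracking accuracy.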