
dc.contributor.author: Ablavsky, Vitaly [en_US]
dc.date.accessioned: 2015-06-24T19:48:31Z
dc.date.available: 2015-06-24T19:48:31Z
dc.date.issued: 2011-03-16
dc.identifier.citation: Ablavsky, Vitaly. "Layered Graphical Models for Tracking Partially-Occluded Moving Objects in Video (PhD Thesis)", Technical Report BUCS-TR-2011-010, Computer Science Department, Boston University, March 16, 2011. [Available from: http://hdl.handle.net/2144/11367] [en_US]
dc.identifier.uri: https://hdl.handle.net/2144/11367
dc.description.abstract: Tracking multiple targets using fixed cameras with non-overlapping views is a challenging problem. One of the challenges is predicting and tracking through occlusions caused by other targets or by fixed objects in the scene. Considerable effort has been devoted toward developing appearance models that are robust to partial occlusions, tracking algorithms that cope with short-term loss of observations, and algorithms that learn static occlusion maps. In this thesis we consider scenarios where it is impossible to learn a static occlusion map. This is often the case when the scene consists of both people and large objects whose position is not permanently fixed. These objects may enter, leave or relocate within the scene during a short time span. We call such objects "relocatable objects" or "relocatable occluders." We develop a representation for scenes containing relocatable objects that can cause partial occlusions of people in a camera's field of view. In many practical applications, relocatable objects tend to appear often; therefore, models for them can be learned off-line and stored in a database. We formulate an occluder-centric representation, called a graphical model layer, where a person's motion in the ground plane is defined as a first-order Markov process on activity zones, while image evidence is aggregated in 2D observation regions that are depth-ordered with respect to the occlusion mask of the relocatable object. We represent real-world scenes as a composition of depth-ordered, interacting graphical model layers, and account for image evidence in a way that handles mutual overlap of the observation regions and their occlusions by the relocatable objects. These layers interact: proximate ground plane zones of different model instances are linked to allow a person to move between the layers, and image evidence is shared between the observation regions of these models. We demonstrate our formulation in tracking low-resolution, partially-occluded pedestrians in the vicinity of parked vehicles. In these scenarios some tracking formulations that rely on part-based person detectors may fail completely. Our pedestrian tracker fares well and compares favorably with state-of-the-art pedestrian detectors (lowering false positives by twenty-nine percent and false negatives by forty-two percent) and with a deformable-contour-based tracker. [en_US]
dc.language.iso: en_US
dc.publisher: Computer Science Department, Boston University [en_US]
dc.relation.ispartofseries: BUCS Technical Reports; BUCS-TR-2011-010
dc.title: Layered graphical models for tracking partially-occluded moving objects in video (PhD thesis) [en_US]
dc.type: Technical Report [en_US]
dc.type: Thesis/Dissertation [en_US]
etd.degree.name: Doctor of Philosophy [en_US]
etd.degree.level: doctoral [en_US]
etd.degree.discipline: Computer Science [en_US]
etd.degree.grantor: Boston University [en_US]
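
The abstract describes a person's ground-plane motion as a first-order Markov process on activity zones, with image evidence aggregated in 2D observation regions that are depth-ordered against the relocatable occluder's mask. The following is a minimal, hypothetical sketch of that zone-transition-plus-occlusion-weighting idea, not the thesis's implementation; the zone names, transition matrix, visibility fractions, and likelihood values are invented solely for illustration.

    # Illustrative sketch (hypothetical values throughout), assuming a single
    # occluder layer: a first-order Markov model over ground-plane activity
    # zones, with image evidence attenuated where the occluder hides the zone.
    import numpy as np

    # Hypothetical activity zones around a parked vehicle (the relocatable occluder).
    zones = ["near_front", "near_side", "behind_occluder", "open_ground"]

    # First-order Markov transition matrix: P[i, j] = P(zone j at t+1 | zone i at t).
    P = np.array([
        [0.6, 0.2, 0.1, 0.1],
        [0.2, 0.5, 0.2, 0.1],
        [0.1, 0.3, 0.5, 0.1],
        [0.2, 0.1, 0.1, 0.6],
    ])

    # Fraction of each zone's observation region left visible after applying the
    # occluder's depth-ordered occlusion mask (hypothetical numbers).
    visible_fraction = np.array([0.9, 0.7, 0.2, 1.0])

    def step(belief, image_likelihood):
        """One predict/update step: propagate the zone belief through the Markov
        model, then weight by image evidence attenuated by the occlusion mask."""
        predicted = belief @ P                           # prediction (first-order Markov)
        weighted = predicted * image_likelihood * visible_fraction
        return weighted / weighted.sum()                 # normalize to a distribution

    # Example: start confident in "open_ground"; evidence is weak behind the occluder.
    belief = np.array([0.0, 0.0, 0.0, 1.0])
    likelihood = np.array([0.3, 0.3, 0.05, 0.35])
    belief = step(belief, likelihood)
    print(dict(zip(zones, np.round(belief, 3))))

In the full formulation described in the abstract, multiple such layers interact: proximate zones of different occluder instances are linked so a person can move between layers, and observation regions share image evidence where they overlap.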

