Show simple item record

dc.contributor.author    Jodoin, Pierre-Marc    en_US
dc.contributor.author    Saligrama, Venkatesh    en_US
dc.contributor.author    Konrad, Janusz    en_US
dc.date.accessioned    2019-11-13T18:03:37Z
dc.date.available    2019-11-13T18:03:37Z
dc.date.issued    2012-05-15
dc.identifier.citation    Pierre-Marc Jodoin, Venkatesh Saligrama, Janusz Konrad. 2012. "Behavior subtraction." IEEE Transactions on Image Processing, Volume 21, Issue 9, pp. 4244 - 4255. https://doi.org/10.1109/TIP.2012.2199326
dc.identifier.issn    1057-7149
dc.identifier.uri    https://hdl.handle.net/2144/38500
dc.description.abstract    Background subtraction has been a driving engine for many computer vision and video analytics tasks. Although many variants of it exist, they all share the underlying assumption that photometric scene properties are either static or exhibit temporal stationarity. While this works in many applications, the model fails when one is interested in discovering changes in scene dynamics instead of changes in a scene's photometric properties; unusual pedestrian or motor traffic patterns are but two examples. We propose a new model and computational framework that assume the dynamics of a scene, not its photometry, to be stationary, i.e., a dynamic background serves as the reference for the dynamics of an observed scene. Central to our approach is the concept of an event, which we define as short-term scene dynamics captured over a time window at a specific spatial location in the camera field of view. Unlike in our earlier work, we compute events by time-aggregating vector object descriptors that can combine multiple features, such as object size, direction of movement, and speed. We characterize events probabilistically, but use low-memory, low-complexity surrogates in a practical implementation. Using these surrogates amounts to behavior subtraction, a new algorithm for effective and efficient temporal anomaly detection and localization. Behavior subtraction is resilient to spurious background motion, such as that due to camera jitter, and is content-blind, i.e., it works equally well on humans, cars, animals, and other objects in both uncluttered and highly cluttered scenes. Clearly, treating video as a collection of events rather than colored pixels opens new possibilities for video analytics.    en_US
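The abstract's core idea — aggregate per-pixel activity into "events" over a time window, learn a low-memory background-behavior surrogate, then flag locations whose observed dynamics exceed it — can be sketched as follows. This is a minimal illustrative reading of that description, not the paper's implementation: the function names, the use of binary motion masks as the sole descriptor, and the per-pixel maximum as the surrogate are simplifying assumptions.

```python
import numpy as np

def accumulate_events(motion_labels, window):
    """Time-aggregate activity into events.

    motion_labels: (T, H, W) array of binary motion masks (a stand-in
    for the richer vector object descriptors the abstract mentions).
    Each event is the per-pixel activity summed over a sliding window.
    """
    T = motion_labels.shape[0]
    return np.stack([motion_labels[t:t + window].sum(axis=0)
                     for t in range(T - window + 1)])

def train_background_behavior(training_events):
    # Low-memory, low-complexity surrogate (assumed here): the per-pixel
    # maximum activity ever observed during training. A single (H, W)
    # array replaces a full probabilistic model of scene dynamics.
    return training_events.max(axis=0)

def behavior_subtraction(observed_event, background_behavior, margin=0):
    # "Subtract" the background behavior: flag pixels whose observed
    # activity exceeds what the dynamic background can explain.
    return observed_event > background_behavior + margin
```

Because the reference is learned from the scene's own (possibly busy) dynamics, persistent background motion such as camera jitter or waving foliage raises the surrogate at those pixels and is not flagged, while genuinely unusual activity elsewhere is.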
dc.format.extent    4244 - 4255    en_US
dc.publisher    Institute of Electrical and Electronics Engineers    en_US
dc.relation.ispartof    IEEE Transactions on Image Processing
dc.subject    Computer science, artificial intelligence    en_US
dc.subject    Engineering, electrical & electronic    en_US
dc.subject    Computer science    en_US
dc.subject    Engineering    en_US
dc.subject    Activity analysis    en_US
dc.subject    Behavior modeling    en_US
dc.subject    Unusual behavior detection    en_US
dc.subject    Video analysis    en_US
dc.subject    Video surveillance    en_US
dc.subject    Artificial intelligence & image processing    en_US
dc.subject    Artificial intelligence and image processing    en_US
dc.subject    Electrical and electronic engineering    en_US
dc.subject    Cognitive sciences    en_US
dc.title    Behavior subtraction    en_US
dc.type    Article    en_US
dc.description.version    Accepted manuscript    en_US
dc.identifier.doi    10.1109/TIP.2012.2199326
pubs.elements-source    manual-entry    en_US
pubs.notes    Embargo: Not known    en_US
pubs.organisational-group    Boston University    en_US
pubs.organisational-group    Boston University, College of Engineering    en_US
pubs.organisational-group    Boston University, College of Engineering, Department of Electrical & Computer Engineering    en_US
pubs.publication-status    Published    en_US
dc.identifier.orcid    0000-0002-0675-2268 (Saligrama, Venkatesh)
dc.identifier.mycv    30401
