
dc.contributor.author    Tezcan, M Ozan    en_US
dc.contributor.author    Ishwar, Prakash    en_US
dc.contributor.author    Konrad, Janusz    en_US
dc.coverage.spatial    Snowmass, CO    en_US
dc.date    2019-12-10
dc.date.accessioned    2020-05-14T14:56:13Z
dc.date.available    2020-05-14T14:56:13Z
dc.date.issued    2020-03-01
dc.identifier.citation    M Ozan Tezcan, Prakash Ishwar, Janusz Konrad. 2020. "BSUV-Net: A fully-convolutional neural network for background subtraction of unseen videos." IEEE Winter Conference on Applications of Computer Vision. Snowmass, CO. 1 March 2020.
dc.identifier.uri    https://hdl.handle.net/2144/40842
dc.description.abstract    Background subtraction is a basic task in computer vision and video processing, often applied as a pre-processing step for object tracking, people recognition, etc. Recently, a number of successful background-subtraction algorithms have been proposed; however, nearly all of the top-performing ones are supervised. Crucially, their success relies upon the availability of some annotated frames of the test video during training. Consequently, their performance on completely “unseen” videos is undocumented in the literature. In this work, we propose a new, supervised, background-subtraction algorithm for unseen videos (BSUV-Net) based on a fully-convolutional neural network. The input to our network consists of the current frame and two background frames captured at different time scales, along with their semantic segmentation maps. In order to reduce the chance of overfitting, we also introduce a new data-augmentation technique which mitigates the impact of illumination difference between the background frames and the current frame. On the CDNet-2014 dataset, BSUV-Net outperforms state-of-the-art algorithms evaluated on unseen videos in terms of several metrics, including F-measure, recall and precision.    en_US
dc.title    BSUV-Net: a fully-convolutional neural network for background subtraction of unseen videos    en_US
dc.type    Conference materials    en_US
dc.description.version    Accepted manuscript    en_US
pubs.elements-source    manual-entry    en_US
pubs.notes    Embargo: No embargo    en_US
pubs.organisational-group    Boston University    en_US
pubs.organisational-group    Boston University, College of Engineering    en_US
pubs.organisational-group    Boston University, College of Engineering, Department of Electrical & Computer Engineering    en_US
pubs.publication-status    Accepted    en_US
dc.identifier.mycv    542316

