dc.contributor.author: Fortenberry, Bret [en_US]
dc.date.accessioned: 2018-12-04T15:14:04Z
dc.date.issued: 2012
dc.date.submitted: 2012
dc.identifier.other: b38096936
dc.identifier.uri: https://hdl.handle.net/2144/32882
dc.description: Thesis (Ph.D.)--Boston University, 2012 [en_US]
dc.description: PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you. [en_US]
dc.description.abstract: Effective navigation depends upon reliable estimates of head direction (HD). Visual, vestibular, and outflow motor signals combine for this purpose in a brain system that includes the dorsal tegmental nucleus, the lateral mammillary nuclei (LMN), the anterior dorsal thalamic nucleus (ADN), and the postsubiculum (PoS). Learning is needed to combine such different cues and to provide reliable estimates of HD. A neural model is developed to explain how these three types of signals combine adaptively within the above brain regions to generate a consistent and reliable HD estimate, in both light and darkness. The model begins by establishing HD cells, each tuned to a preferred head direction: the firing rate is maximal at the preferred direction and decreases as the head turns away from it. In the brain, HD cells fire in anticipation of a head rotation. This anticipation is measured by the anticipated time interval (ATI), which is greater in early processing stages of the HD system than at later stages. The ATI is greatest in the LMN, at -70 ms; it is reduced in the ADN to -25 ms; and it is absent in the last HD stage, the PoS. In the model, these HD estimates are controlled at the corresponding processing stages by combinations of vestibular and motor signals as they become adaptively calibrated to produce a correct HD estimate. The model also simulates how visual cues anchor HD estimates through adaptive learning when the cue is in the animal's field of view. Such learning gains control over cell firing within minutes. As in the data, distal visual cues are more effective than proximal cues for anchoring the preferred direction. Novel cues introduced in either a novel or a familiar environment are learned and gain control over a cell's preferred direction within minutes. Turning out the lights or removing all familiar cues does not change the cells' firing activity, but drift may accumulate in each cell's preferred direction. [en_US]
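The HD-cell tuning described in the abstract (firing maximal at a preferred direction, falling off as the head turns away) can be illustrated with a minimal sketch. The von Mises curve shape and the `r_max` / `kappa` parameters below are illustrative assumptions for exposition, not the model specified in the thesis.

```python
import math

def hd_tuning_rate(theta, theta_pref, r_max=40.0, kappa=4.0):
    """Firing rate of an idealized HD cell: maximal at the preferred
    head direction theta_pref (radians) and decreasing as the head
    turns away from it. The von Mises shape and the parameters
    r_max (peak rate, Hz) and kappa (tuning sharpness) are
    illustrative assumptions, not taken from the thesis."""
    return r_max * math.exp(kappa * (math.cos(theta - theta_pref) - 1.0))

# Firing peaks at the preferred direction and is much lower 90 degrees away.
peak = hd_tuning_rate(0.0, 0.0)          # equals r_max
off = hd_tuning_rate(math.pi / 2, 0.0)   # well below peak
```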
dc.language.iso: en_US
dc.publisher: Boston University [en_US]
dc.subject: Cognitive and neural systems [en_US]
dc.title: A neural model of head direction calibration during spatial navigation: learned integration of visual, vestibular, and motor cues [en_US]
dc.type: Thesis/Dissertation [en_US]
dc.description.embargo: 2031-01-01
etd.degree.name: Doctor of Philosophy [en_US]
etd.degree.level: doctoral [en_US]
etd.degree.discipline: Neurology [en_US]
etd.degree.grantor: Boston University [en_US]
dc.identifier.barcode: 11719032084651
dc.identifier.mmsid: 99174972290001161

