Show simple item record

dc.contributor.author	Hardie, Charles H	en_US
dc.date.accessioned	2016-04-07T15:25:01Z
dc.date.available	2016-04-07T15:25:01Z
dc.date.issued	1961
dc.date.submitted	1961
dc.identifier.other	b1456242x
dc.identifier.uri	https://hdl.handle.net/2144/15538
dc.description	Thesis (M.A.)--Boston University. N.B.: Page 3 of Abstract is incorrectly labeled as Page 2. No content is missing from thesis.	en_US
dc.description.abstract	Many processes or systems can be fully or partially described using that part of probability theory called "stochastic processes." The term stochastic process is usually used when a change of state of a process may occur with time. A Markov process is a stochastic process in which the probability of the process being in some state in the future depends only on the state the process is presently in. A Markov chain is a Markov process which may occupy only a finite number or a denumerably infinite number of states. [TRUNCATED]	en_US
dc.language.iso	en_US
dc.publisher	Boston University	en_US
dc.rights	Based on investigation of the BU Libraries' staff, this work is free of known copyright restrictions.	en_US
dc.title	Finite Markov chains and recurrent events.	en_US
dc.type	Thesis/Dissertation	en_US
etd.degree.name	Master of Arts	en_US
etd.degree.level	masters	en_US
etd.degree.discipline	Mathematics	en_US
etd.degree.grantor	Boston University	en_US
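The abstract's definition of a finite Markov chain can be illustrated with a short sketch. This is not code from the thesis; the two-state chain and its transition matrix are invented for illustration only. It shows the Markov property in action: each step depends only on the current distribution over states, and repeated steps approach a stationary distribution.

```python
def step_distribution(dist, P):
    """Advance a distribution over states by one step of the chain: dist @ P.

    The next distribution depends only on the current one (the Markov
    property); no earlier history enters the computation.
    """
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Illustrative transition matrix for a two-state chain (rows sum to 1):
# P[i][j] is the probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Starting from an arbitrary distribution, repeated steps converge to the
# stationary distribution pi satisfying pi = pi @ P (here pi = [5/6, 1/6]).
dist = [1.0, 0.0]
for _ in range(100):
    dist = step_distribution(dist, P)
```

Because the chain occupies only finitely many states, iterating the transition matrix is a finite computation at every step, which is what makes finite Markov chains tractable objects of study.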

