dc.contributor.author    Stevens, Roger T.
dc.date.accessioned      2017-11-01T13:13:40Z
dc.date.available        2017-11-01T13:13:40Z
dc.date.issued           1959
dc.date.submitted        1959
dc.identifier.other      b14816830
dc.identifier.uri        https://hdl.handle.net/2144/24603
dc.description           Thesis (M.A.)--Boston University

dc.description.abstract  Probability problems in which a time parameter is involved are known as stochastic processes. The simplest time-dependent stochastic processes are those in which the probabilities of the system changing to its various states depend solely on the present state of the system. These processes are known as Markov processes or, when only discrete time intervals are considered, as Markov chains.

A Markov chain may be completely defined by the matrix of its transition probabilities. This matrix, called a stochastic matrix, is characterized by three facts: it is square, the elements of each column sum to one, and all of its elements are non-negative.

An important consideration in most Markov chain problems is the effect of a number of transitions as defined by the stochastic matrix, which requires determining the higher powers of that matrix. Two modal matrices are defined: k, the matrix of the column characteristic vectors of the stochastic matrix, and K, the matrix of its row characteristic vectors. It is shown that, with proper normalization of these vectors, the stochastic matrix P is equal to kAK, where A is the matrix with the characteristic roots along the diagonal and zeroes elsewhere. The higher powers of the stochastic matrix, P^m, are then found to be equal to kA^mK. The stochastic matrix is found always to have a characteristic root one, and all the other roots are shown to be less than one in absolute value.
The limiting transition matrix P^∞ is found to have identical columns, each consisting of the characteristic column vector associated with the characteristic root one. The limiting distribution is this same vector and is independent of the initial conditions. [TRUNCATED]

dc.language.iso          en_US
dc.publisher             Boston University
dc.rights                Based on investigation of the BU Libraries' staff, this work is free of known copyright restrictions.
dc.subject               Markov chains
dc.title                 An application of Markov chains
dc.type                  Thesis/Dissertation
etd.degree.name          Master of Arts
etd.degree.level         masters
etd.degree.discipline    Mathematics
etd.degree.grantor       Boston University
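The limiting behaviour described in the abstract — P^m approaching a matrix of identical columns, each equal to the limiting distribution — can be sketched numerically. The transition matrix below is a hypothetical 3-state example (not from the thesis), written column-stochastic to match the thesis's convention that each column sums to one:

```python
def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, m):
    """Compute the m-th power of p by repeated multiplication,
    starting from the identity matrix."""
    n = len(p)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(m):
        result = mat_mul(result, p)
    return result

# Hypothetical transition matrix: entry P[i][j] is the probability of
# moving TO state i FROM state j, so each COLUMN sums to one.
P = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.1],
     [0.2, 0.2, 0.6]]

# After many transitions the columns become numerically identical:
# each column is the characteristic vector belonging to the root one,
# i.e. the limiting distribution, independent of the starting state.
P_inf = mat_pow(P, 50)
for row in P_inf:
    print([round(x, 6) for x in row])
```

Since (per the abstract) every characteristic root other than one is less than one in absolute value, the terms of A^m other than the root-one entry vanish as m grows, which is why the powers converge regardless of the initial distribution.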