An application of Markov chains
Stevens, Roger T.
Probability problems in which a time parameter is involved are known as stochastic processes. The simplest time-dependent stochastic processes are those in which the probabilities of the system changing to various states depend solely upon the present state of the system. These processes are known as Markov processes, or, for the case where only discrete time intervals are considered, as Markov chains. A Markov chain may be completely defined by the matrix of its transition probabilities. This matrix is called a stochastic matrix and is characterized by three facts: it is a square matrix, the elements of each column sum to one, and all the elements are non-negative. An important consideration in most Markov chain problems is the effect of a number of transitions as defined by the stochastic matrix, and this requires determining the higher powers of the stochastic matrix. Two modal matrices are defined, where k is the matrix of the column characteristic vectors of the stochastic matrix and K is the matrix of the row characteristic vectors. It is shown that, with proper normalization of these vectors, the stochastic matrix P is equal to kAK, where A is the matrix with the characteristic roots along the diagonal and zeroes elsewhere. The higher powers of the stochastic matrix, P^m, are then found to be equal to kA^mK. The stochastic matrix is found always to have the characteristic root one, and all the other roots are shown to be less than one in absolute value. The limiting transition matrix P^∞ is found to have identical columns, each consisting of the characteristic column vector associated with the characteristic root one. The limiting distribution is the same vector and is independent of the initial conditions. [TRUNCATED]
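The diagonalization argument in the abstract can be sketched numerically. The two-state transition matrix below is a hypothetical example chosen for illustration (it does not come from the thesis); the variable names k, K, and A follow the abstract's notation, with columns of the matrix summing to one.

```python
import numpy as np

# Hypothetical column-stochastic matrix: columns sum to one,
# all entries non-negative. Entry P[i, j] is the probability
# of moving to state i given the system is in state j.
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# Diagonalize: P = k A K, where k holds the column characteristic
# vectors, A is the diagonal matrix of characteristic roots, and
# K (the matrix of row characteristic vectors) is the inverse of k.
roots, k = np.linalg.eig(P)
A = np.diag(roots)
K = np.linalg.inv(k)

# Higher powers via the decomposition: P^m = k A^m K.
m = 50
Pm = k @ np.linalg.matrix_power(A, m) @ K

# One characteristic root is 1; its column vector, scaled so its
# entries sum to one, is the limiting distribution. Because every
# other root is smaller than one in absolute value, A^m collapses
# toward a matrix with a single 1 on the diagonal as m grows.
i = int(np.argmin(np.abs(roots - 1.0)))
pi = np.real(k[:, i] / k[:, i].sum())

print(pi)   # limiting distribution, independent of the start state
print(Pm)   # both columns approach pi, matching P^infinity
```

For this particular matrix the characteristic roots are 1 and 0.4, so by m = 50 the second root has vanished numerically and both columns of P^m agree with the limiting vector, illustrating the abstract's claim about P^∞.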
Thesis (M.A.)--Boston University