Monday 6th January, 13:00-14:00. Administrative
matters. Introductory example: random walk in a maze. Distribution
of the room number at time t is tricky, but the distribution of the room
at time t+1 conditioned on the room at time t is simple. Definition
of stochastic process, discrete-time and continuous-time. Aims of the
module. First half of the module is concerned with discrete-time
processes. Definition of Markov chain. The Markov property.
(Time-)homogeneous Markov chains (these are the ones we study in the
module); transition probabilities.
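A time-homogeneous chain is specified by a single set of one-step transition probabilities, used at every time step. A minimal sketch (the two-state "dry"/"rain" chain and its probabilities are invented for illustration, not from the lectures):

```python
import random

# p[i][j] = P(X_{t+1} = j | X_t = i); homogeneity means this table
# does not depend on t. State 0 = "dry", state 1 = "rain" (illustrative).
p = [[0.8, 0.2],
     [0.4, 0.6]]

def step(state, rng):
    """Sample the next state from the current one alone -- the Markov
    property: no earlier history is needed."""
    return 0 if rng.random() < p[state][0] else 1

def simulate(x0, n, seed=0):
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

print(simulate(0, 10))
```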
Monday 6th January, 17:00-18:00.
Lemma 1.1: the transition
probabilities determine the probability of observing any given
sequence of states. Transition graphs, a way of visualising Markov
chains. Various examples of Markov chains with transition graphs,
including: maze, random walk on Z, gambler's ruin, rainfall model and
number of heads in a sequence of coin tosses. A non-example: random
walk in a maze with immediate returns prohibited.
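Lemma 1.1 can be checked mechanically: the probability of observing a given sequence of states is the initial probability of the first state times the product of the one-step transition probabilities along the sequence. A hypothetical sketch (the two-state chain below is made up for illustration):

```python
# Illustrative transition probabilities p(a, b) = P(X_{t+1}=b | X_t=a).
p = {("dry", "dry"): 0.8, ("dry", "rain"): 0.2,
     ("rain", "dry"): 0.4, ("rain", "rain"): 0.6}

def path_probability(initial, path):
    """Lemma 1.1: P(X_0=x_0, ..., X_n=x_n) = P(X_0=x_0) * prod_t p(x_t, x_{t+1}).

    initial: dict mapping each state to P(X_0 = state);
    path:    the sequence of states x_0, ..., x_n."""
    prob = initial[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= p[(a, b)]
    return prob

init = {"dry": 1.0, "rain": 0.0}
print(path_probability(init, ["dry", "dry", "rain"]))  # 1.0 * 0.8 * 0.2
```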
Tuesday 7th January, 10:00-11:00. Extra lecture.
Transition matrices,
examples, as before. The r-step transition probabilities, Theorem 1.2
(Chapman-Kolmogorov relations).
Corollary 1.3: The r-step transition
probabilities are given by the r'th power of the transition matrix.
Proof of Corollary 1.3 by induction on r. Example: gambler's ruin.
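Corollary 1.3 is easy to check numerically: the r-step transition probabilities are the entries of the r'th matrix power. A sketch using gambler's ruin on the states {0, 1, 2, 3} with a fair coin (the exact stake and target from the lecture's example may differ):

```python
import numpy as np

# Gambler's ruin on {0, 1, 2, 3}: 0 and 3 are absorbing, and from 1 or 2
# the gambler moves up or down one with probability 1/2 each.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

# Corollary 1.3: the r-step transition probabilities are the entries of P^r.
r = 4
Pr = np.linalg.matrix_power(P, r)

# Chapman-Kolmogorov (Theorem 1.2), here in the form P^4 = P^2 P^2.
assert np.allclose(Pr, np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 2))

print(Pr[1, 0])  # probability of ruin within 4 steps, starting from 1 -> 0.625
```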
Thursday 9th January, 15:00-16:00. [Exercise Sheet 1 distributed.]
If P is diagonalisable, then the r'th power (and hence the r-step transition
probabilities) is easier to compute. Absorbing states: a way of
modelling termination. First-step analysis. The idea is to condition
on the first step and apply the Law of Total Probability. Typical questions
answered: what is the probability of absorption in a particular state?
What is the expected time to absorption? An example of the former
kind, with one non-absorbing state. A not-quite-so-easy example
with two non-absorbing states.
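With two non-absorbing states, first-step analysis yields a pair of simultaneous linear equations. A sketch for gambler's ruin on {0, 1, 2, 3} with a fair coin (the concrete numbers are illustrative, not necessarily the example worked in the lecture):

```python
import numpy as np

# Let h_i = P(absorbed at 3 | start at i). Conditioning on the first step
# and using the Law of Total Probability:
#   h_0 = 0,  h_3 = 1,
#   h_1 = 0.5*h_0 + 0.5*h_2,
#   h_2 = 0.5*h_1 + 0.5*h_3.
# Substituting the boundary values leaves a 2x2 system A h = b for (h_1, h_2):
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])
b = np.array([0.0, 0.5])

h1, h2 = np.linalg.solve(A, b)
print(h1, h2)  # 1/3 and 2/3
```

The same conditioning argument, with the equations h_i = 1 + 0.5*h_{i-1} + 0.5*h_{i+1} at the non-absorbing states, answers the expected-time-to-absorption question.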