Chapter 14: Probabilistic Reasoning — Exercises
Russell & Norvig, Artificial Intelligence: A Modern Approach (4th ed.), end-of-chapter Exercise prompts — full text is included on this site (same material as the exercises/ch14/ sources in the aima monorepo).
Exercise 14.1
We have a bag of three biased coins $a$, $b$, and $c$ with probabilities of coming up heads of 20%, 60%, and 80%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes $X_1$, $X_2$, and $X_3$.
Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.
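The second part can be checked numerically with Bayes' rule. A minimal sketch (the dictionary layout and function name are just illustrative):

```python
# Posterior over which coin was drawn, given 2 heads and 1 tail.
priors = {"a": 1/3, "b": 1/3, "c": 1/3}      # equal likelihood of each coin
p_heads = {"a": 0.20, "b": 0.60, "c": 0.80}  # per-coin heads probabilities

def coin_posterior(heads, tails):
    # Bayes' rule: P(coin | flips) is proportional to P(coin) * P(flips | coin).
    # The shared binomial coefficient cancels in the normalization.
    unnorm = {c: priors[c] * p_heads[c] ** heads * (1 - p_heads[c]) ** tails
              for c in priors}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

post = coin_posterior(heads=2, tails=1)
most_likely = max(post, key=post.get)
```

With two heads and one tail, coin $b$ has the largest unnormalized score ($\tfrac13 \cdot 0.6^2 \cdot 0.4$, versus $\tfrac13 \cdot 0.2^2 \cdot 0.8$ for $a$ and $\tfrac13 \cdot 0.8^2 \cdot 0.2$ for $c$).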
Exercise 14.2
We have a bag of three biased coins $a$, $b$, and $c$ with probabilities of coming up heads of 30%, 60%, and 75%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes $X_1$, $X_2$, and $X_3$.
Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.
Exercise 14.3
Equation (parameter-joint-repn-equation) defines the joint distribution represented by a Bayesian network in terms of the parameters $\theta(X_i \mid Parents(X_i))$. This exercise asks you to derive the equivalence between the parameters and the conditional probabilities $P(X_i \mid Parents(X_i))$ from this definition.
Consider a simple network $X \rightarrow Y \rightarrow Z$ with three Boolean variables. Use Equations (conditional-probability-equation) and (marginalization-equation) (pages conditional-probability-equation and marginalization-equation) to express the conditional probability $P(z \mid y)$ as the ratio of two sums, each over entries in the joint distribution $P(x, y, z)$.
Now use Equation (parameter-joint-repn-equation) to write this expression in terms of the network parameters $\theta(x)$, $\theta(y \mid x)$, and $\theta(z \mid y)$.
Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters satisfy the constraint $\sum_{x_i} \theta(x_i \mid parents(X_i)) = 1$, show that the resulting expression reduces to $\theta(z \mid y)$.
Generalize this derivation to show that $\theta(X_i \mid Parents(X_i)) = P(X_i \mid Parents(X_i))$ for any Bayesian network.
Exercise 14.4
The operation of arc reversal in a Bayesian network allows us to change the direction of an arc $X \rightarrow Y$ while preserving the joint probability distribution that the network represents (Shachter, 1986). Arc reversal may require introducing new arcs: all the parents of $X$ also become parents of $Y$, and all parents of $Y$ also become parents of $X$.
Assume that $X$ and $Y$ start with $m$ and $n$ parents, respectively, and that all variables have $k$ values. By calculating the change in size for the CPTs of $X$ and $Y$, show that the total number of parameters in the network cannot decrease during arc reversal. (Hint: the parents of $X$ and $Y$ need not be disjoint.)
Under what circumstances can the total number remain constant?
Let the parents of $X$ be $\mathbf{U} \cup \mathbf{V}$ and the parents of $Y$ be $\mathbf{V} \cup \mathbf{W}$, where $\mathbf{U}$ and $\mathbf{W}$ are disjoint. The formulas for the new CPTs after arc reversal are as follows:
$$P(Y \mid \mathbf{U}, \mathbf{V}, \mathbf{W}) = \sum_x P(Y \mid \mathbf{V}, \mathbf{W}, x)\, P(x \mid \mathbf{U}, \mathbf{V})$$
$$P(X \mid \mathbf{U}, \mathbf{V}, \mathbf{W}, Y) = P(Y \mid X, \mathbf{V}, \mathbf{W})\, P(X \mid \mathbf{U}, \mathbf{V}) \,/\, P(Y \mid \mathbf{U}, \mathbf{V}, \mathbf{W})$$
Prove that the new network expresses the same joint distribution over all variables as the original network.
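The new-CPT construction can be sanity-checked numerically in the simplest case, where the extra parent sets are all empty and the arc $X \rightarrow Y$ is reversed directly; the CPT numbers below are arbitrary:

```python
# Arc reversal with no extra parents: the reversed network uses
#   P'(y)      = sum_x P(y | x) P(x)
#   P'(x | y)  = P(y | x) P(x) / P'(y)
# and must leave the joint P(x, y) unchanged.  All numbers are arbitrary.
p_x = {True: 0.3, False: 0.7}
p_y_given_x = {True: {True: 0.9, False: 0.1},     # p_y_given_x[x][y]
               False: {True: 0.2, False: 0.8}}

# New CPTs after reversing the arc to Y -> X.
p_y = {y: sum(p_y_given_x[x][y] * p_x[x] for x in (True, False))
       for y in (True, False)}
p_x_given_y = {y: {x: p_y_given_x[x][y] * p_x[x] / p_y[y]
                   for x in (True, False)}
               for y in (True, False)}

def joint_orig(x, y):
    return p_x[x] * p_y_given_x[x][y]

def joint_rev(x, y):
    return p_y[y] * p_x_given_y[y][x]

# Both factorizations give the same joint distribution.
ok = all(abs(joint_orig(x, y) - joint_rev(x, y)) < 1e-12
         for x in (True, False) for y in (True, False))
```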
Exercise 14.5
Consider the Bayesian network in Figure burglary-figure.
If no evidence is observed, are Burglary and Earthquake independent? Prove this from the numerical semantics and from the topological semantics.
If we observe Alarm = true, are Burglary and Earthquake independent? Justify your answer by calculating whether the probabilities involved satisfy the definition of conditional independence.
Exercise 14.6
Suppose that in a Bayesian network containing an unobserved variable $X$, all the variables in the Markov blanket $MB(X)$ have been observed.
Prove that removing the node for $X$ from the network will not affect the posterior distribution for any other unobserved variable in the network.
Discuss whether we can remove $X$ if we are planning to use (i) rejection sampling and (ii) likelihood weighting.
Three possible structures for a Bayesian network describing genetic inheritance of handedness.
Exercise 14.7
Let $H_x$ be a random variable denoting the handedness of an individual $x$, with possible values $l$ or $r$. A common hypothesis is that left- or right-handedness is inherited by a simple mechanism; that is, perhaps there is a gene $G_x$, also with values $l$ or $r$, and perhaps actual handedness turns out mostly the same (with some probability $s$) as the gene an individual possesses. Furthermore, perhaps the gene itself is equally likely to be inherited from either of an individual’s parents, with a small nonzero probability $m$ of a random mutation flipping the handedness.
Which of the three networks in Figure handedness-figure claim that $P(G_{father}, G_{mother}, G_{child}) = P(G_{father})\,P(G_{mother})\,P(G_{child})$?
Which of the three networks make independence claims that are consistent with the hypothesis about the inheritance of handedness?
Which of the three networks is the best description of the hypothesis?
Write down the CPT for the $G_{child}$ node in network (a), in terms of $s$ and $m$.
Suppose that $P(G_{father} = l) = P(G_{mother} = l) = q$. In network (a), derive an expression for $P(G_{child} = l)$ in terms of $m$ and $q$ only, by conditioning on its parent nodes.
Under conditions of genetic equilibrium, we expect the distribution of genes to be the same across generations. Use this to calculate the value of $q$, and, given what you know about handedness in humans, explain why the hypothesis described at the beginning of this question must be wrong.
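As a numerical aside on the genetic-equilibrium question, one can iterate the generation-to-generation update. This sketch assumes the conditioning result $P(G_{child}=l) = q(1-m) + (1-q)m$ (which is what the preceding part derives) and uses arbitrary values for $q$ and $m$:

```python
# Iterate q_next = q * (1 - m) + (1 - q) * m across generations.
# The values of q and m below are arbitrary illustrative choices.
def next_q(q, m):
    return q * (1 - m) + (1 - q) * m

q, m = 0.9, 0.02
for _ in range(2000):
    q = next_q(q, m)
# The update contracts toward the unique fixed point q = 1/2, since
# q_next - 1/2 = (1 - 2m)(q - 1/2); that is the genetic-equilibrium value.
```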
Exercise 14.8
The Markov blanket of a variable is defined on page markov-blanket-page. Prove that a variable is independent of all other variables in the network, given its Markov blanket, and derive Equation (markov-blanket-equation).
A Bayesian network describing some features of a car's electrical system and engine. Each variable is Boolean, and the true value indicates that the corresponding aspect of the vehicle is in working order.
Exercise 14.9
Consider the network for car diagnosis shown in the preceding figure.
Extend the network with the Boolean variables IcyWeather and StarterMotor.
Give reasonable conditional probability tables for all the nodes.
How many independent values are contained in the joint probability distribution for eight Boolean nodes, assuming that no conditional independence relations are known to hold among them?
How many independent probability values do your network tables contain?
The conditional distribution for Starts could be described as a noisy-AND distribution. Define this family in general and relate it to the noisy-OR distribution.
Exercise 14.10
Consider a simple Bayesian network with root variables Cold, Flu, and Malaria and child variable Fever, with a noisy-OR conditional distribution for Fever as described in Section canonical-distribution-section. By adding appropriate auxiliary variables for inhibition events and fever-inducing events, construct an equivalent Bayesian network whose CPTs (except for root variables) are deterministic. Define the CPTs and prove this equivalence.
Exercise 14.11
Consider the family of linear Gaussian networks, as defined earlier in the chapter.
In a two-variable network, let $X_1$ be the parent of $X_2$, let $X_1$ have a Gaussian prior, and let $P(X_2 \mid X_1)$ be a linear Gaussian distribution. Show that the joint distribution $P(X_1, X_2)$ is a multivariate Gaussian, and calculate its covariance matrix.
Prove by induction that the joint distribution for a general linear Gaussian network on $X_1, \ldots, X_n$ is also a multivariate Gaussian.
Exercise 14.12
The probit distribution defined on
page probit-page describes the probability distribution for a Boolean
child, given a single continuous parent.
How might the definition be extended to cover multiple continuous parents?
How might it be extended to handle a multivalued child variable? Consider both cases where the child’s values are ordered (as in selecting a gear while driving, depending on speed, slope, desired acceleration, etc.) and cases where they are unordered (as in selecting bus, train, or car to get to work). (Hint: Consider ways to divide the possible values into two sets, to mimic a Boolean variable.)
Exercise 14.13
In your local nuclear power station, there is an alarm that senses when a temperature gauge exceeds a given threshold. The gauge measures the temperature of the core. Consider the Boolean variables $A$ (alarm sounds), $F_A$ (alarm is faulty), and $F_G$ (gauge is faulty) and the multivalued nodes $G$ (gauge reading) and $T$ (actual core temperature).
Draw a Bayesian network for this domain, given that the gauge is more likely to fail when the core temperature gets too high.
Is your network a polytree? Why or why not?
Suppose there are just two possible actual and measured temperatures, normal and high; the probability that the gauge gives the correct temperature is $x$ when it is working, but $y$ when it is faulty. Give the conditional probability table associated with $G$.
Suppose the alarm works correctly unless it is faulty, in which case it never sounds. Give the conditional probability table associated with $A$.
Suppose the alarm and gauge are working and the alarm sounds. Calculate an expression for the probability that the temperature of the core is too high, in terms of the various conditional probabilities in the network.
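A hedged numerical sketch of the final part, with made-up values for the prior, gauge accuracy, and fault probabilities (the exercise itself only asks for the symbolic expression). With the alarm working, it sounds exactly when the gauge reads high, so conditioning on the evidence reduces to a sum over the two core temperatures:

```python
# All numbers below are illustrative assumptions, not values from the book.
p_t_high = 0.01                    # assumed prior P(T = high)
x = 0.95                           # assumed P(gauge reads correctly | working)
p_fg = {True: 0.10, False: 0.01}   # assumed P(gauge faulty | T); higher when hot

def p_evidence_and_t(t_high):
    # P(T) * P(gauge working | T) * P(G = high | T, gauge working);
    # the working alarm sounds iff G = high.
    p_t = p_t_high if t_high else 1 - p_t_high
    p_working = 1 - p_fg[t_high]
    p_g_high = x if t_high else 1 - x
    return p_t * p_working * p_g_high

p_hot = p_evidence_and_t(True) / (p_evidence_and_t(True)
                                  + p_evidence_and_t(False))
```

The same computation, done symbolically, yields the expression the exercise asks for.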
Exercise 14.14
Two astronomers in different parts of the world make measurements $M_1$ and $M_2$ of the number of stars $N$ in some small region of the sky, using their telescopes. Normally, there is a small possibility $e$ of error by up to one star in each direction. Each telescope can also (with a much smaller probability $f$) be badly out of focus (events $F_1$ and $F_2$), in which case the scientist will undercount by three or more stars (or, if $N$ is less than 3, fail to detect any stars at all). Consider the three networks shown in Figure telescope-nets-figure.
Which of these Bayesian networks are correct (but not necessarily efficient) representations of the preceding information?
Which is the best network? Explain.
Write out a conditional distribution for $P(M_1 \mid N)$, for the case where $N \in \{1, 2, 3\}$ and $M_1 \in \{0, 1, 2, 3, 4\}$. Each entry in the conditional distribution should be expressed as a function of the parameters $e$ and/or $f$.
Suppose $M_1 = 1$ and $M_2 = 3$. What are the possible numbers of stars if you assume no prior constraint on the values of $N$?
What is the most likely number of stars, given these observations? Explain how to compute this, or if it is not possible to compute, explain what additional information is needed and how it would affect the result.
Exercise 14.15
Consider the network shown in Figure telescope-nets-figure(ii), and assume that the two telescopes work identically. $N \in \{1, 2, 3\}$ and $M_1, M_2 \in \{0, 1, 2, 3, 4\}$, with the symbolic CPTs as described in Exercise telescope-exercise. Using the enumeration algorithm (Figure enumeration-algorithm), calculate the probability distribution $P(N \mid M_1 = 2, M_2 = 2)$.
Three possible networks for the telescope problem.
Exercise 14.16
Consider the Bayes net shown in Figure politics-figure.
Which of the following are asserted by the network structure?
$P(B, I, M) = P(B)\,P(I)\,P(M)$.
$P(J \mid G) = P(J \mid G, I)$.
$P(M \mid G, B, I) = P(M \mid G, B, I, J)$.
Calculate the value of $P(b, i, \lnot m, g, j)$.
Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?
Suppose we want to add the variable $P = PresidentialPardon$ to the network; draw the new network and briefly explain any links you add.
A simple Bayes net with Boolean variables B = BrokeElectionLaw, I = Indicted, M = PoliticallyMotivatedProsecutor, G = FoundGuilty, J = Jailed.
Exercise 14.17
Consider the Bayes net shown in Figure politics-figure.
Which of the following are asserted by the network structure?
$P(B, I, M) = P(B)\,P(I)\,P(M)$.
$P(J \mid G) = P(J \mid G, I)$.
$P(M \mid G, B, I) = P(M \mid G, B, I, J)$.
Calculate the value of $P(b, i, \lnot m, g, j)$.
Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
A context-specific independence (see page CSI-page) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Figure politics-figure?
Suppose we want to add the variable $P = PresidentialPardon$ to the network; draw the new network and briefly explain any links you add.
Exercise 14.18
Consider the variable elimination algorithm in Figure elimination-algorithm.
Section exact-inference-section applies variable elimination to the query $P(Burglary \mid JohnCalls = true, MaryCalls = true)$. Perform the calculations indicated and check that the answer is correct.
Count the number of arithmetic operations performed, and compare it with the number performed by the enumeration algorithm.
Suppose a network has the form of a chain: a sequence of Boolean variables $X_1, \ldots, X_n$ where $Parents(X_i) = \{X_{i-1}\}$ for $i = 2, \ldots, n$. What is the complexity of computing $P(X_1 \mid X_n = true)$ using enumeration? Using variable elimination?
Prove that the complexity of running variable elimination on a polytree network is linear in the size of the tree for any variable ordering consistent with the network structure.
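The chain case can be made concrete: variable elimination sweeps one two-entry factor per variable, which is $O(n)$, while enumeration sums over all $2^n$ assignments. A sketch with arbitrary CPT values, checking the two against each other on a small chain:

```python
# Chain X1 -> X2 -> ... -> Xn of Boolean variables; CPT values are random
# illustrative numbers.  cpts[i][v] = P(X_{i+2} = true | X_{i+1} = v).
import itertools, random

random.seed(0)
n = 10
prior_x1 = 0.3
cpts = [{True: random.random(), False: random.random()} for _ in range(n - 1)]

def joint(assign):
    p = prior_x1 if assign[0] else 1 - prior_x1
    for i, cpt in enumerate(cpts):
        q = cpt[assign[i]]
        p *= q if assign[i + 1] else 1 - q
    return p

# Linear-time elimination: sweep backward, one two-entry factor per step.
b = {True: 1.0, False: 0.0}          # b[v] = P(Xn = true | X_i = v), at i = n
for cpt in reversed(cpts):
    b = {v: cpt[v] * b[True] + (1 - cpt[v]) * b[False] for v in (True, False)}
p_ve = prior_x1 * b[True] / (prior_x1 * b[True] + (1 - prior_x1) * b[False])

# Exponential-time check by full enumeration over all 2^n assignments.
num = sum(joint(a) for a in itertools.product([True, False], repeat=n)
          if a[0] and a[-1])
den = sum(joint(a) for a in itertools.product([True, False], repeat=n)
          if a[-1])
p_enum = num / den
```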
Exercise 14.19
Investigate the complexity of exact inference
in general Bayesian networks:
Prove that any 3-SAT problem can be reduced to exact inference in a Bayesian network constructed to represent the particular problem and hence that exact inference is NP-hard. (Hint: Consider a network with one variable for each proposition symbol, one for each clause, and one for the conjunction of clauses.)
The problem of counting the number of satisfying assignments for a 3-SAT problem is #P-complete. Show that exact inference is at least as hard as this.
Exercise 14.20
Consider the problem of generating a
random sample from a specified distribution on a single variable. Assume
you have a random number generator that returns a random number
uniformly distributed between 0 and 1.
Let $X$ be a discrete variable with $P(X = x_i) = p_i$ for $i \in \{1, \ldots, k\}$. The cumulative distribution of $X$ gives the probability that $X \in \{x_1, \ldots, x_j\}$ for each possible $j$. (See also Appendix [math-appendix].) Explain how to calculate the cumulative distribution in $O(k)$ time and how to generate a single sample of $X$ from it. Can the latter be done in less than $O(k)$ time?
Now suppose we want to generate $N$ samples of $X$, where $N \gg k$. Explain how to do this with an expected run time per sample that is constant (i.e., independent of $k$).
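A sketch of the cumulative-distribution approach for the first two parts (the values and probabilities are arbitrary; constant expected time per sample would need a further trick such as the alias method, not shown here):

```python
# O(k) cumulative table, then O(log k) binary-search sampling per draw.
import bisect, random

random.seed(42)
values = ["x1", "x2", "x3", "x4"]
probs = [0.1, 0.2, 0.3, 0.4]          # P(X = x_i) = p_i

# One O(k) pass of prefix sums: cdf[j] = P(X in {x_1, ..., x_{j+1}}).
cdf, total = [], 0.0
for p in probs:
    total += p
    cdf.append(total)

def sample():
    # Invert the CDF: find the first index whose cumulative mass exceeds r.
    r = random.random()
    return values[bisect.bisect_right(cdf, r)]

n_draws = 100_000
counts = {v: 0 for v in values}
for _ in range(n_draws):
    counts[sample()] += 1
```

The empirical frequencies approach the target probabilities as the number of draws grows.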
Now consider a continuous-valued variable with a parameterized distribution (e.g., Gaussian). How can samples be generated from such a distribution?
Suppose you want to query a continuous-valued variable and you are using a sampling algorithm such as LIKELIHOOD-WEIGHTING to do the inference. How would you have to modify the query-answering process?
Exercise 14.21
Consider the query $P(Rain \mid Sprinkler = true, WetGrass = true)$ in Figure rain-clustering-figure and how Gibbs sampling can answer it.
How many states does the Markov chain have?
Calculate the transition matrix $\mathbf{Q}$ containing $q(\mathbf{y} \rightarrow \mathbf{y}')$ for all $\mathbf{y}$, $\mathbf{y}'$.
What does $\mathbf{Q}^2$, the square of the transition matrix, represent?
What about $\mathbf{Q}^n$ as $n \rightarrow \infty$?
Explain how to do probabilistic inference in Bayesian networks, assuming that $\mathbf{Q}^n$ is available. Is this a practical way to do inference?
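The behavior of $\mathbf{Q}^2$ and $\mathbf{Q}^n$ can be illustrated on any small ergodic chain; the two-state transition matrix below is made up and is not the chain from the rain network:

```python
# Powers of a transition matrix: Q^2 holds two-step transition probabilities,
# and for an ergodic chain Q^n converges to a matrix whose identical rows are
# the stationary distribution.  The 2x2 matrix here is arbitrary.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Q = [[0.7, 0.3],
     [0.4, 0.6]]

Qn = Q
for _ in range(50):       # repeated multiplication: Qn holds Q^51 at the end
    Qn = mat_mul(Qn, Q)
# For this chain the stationary distribution is (4/7, 3/7), and both rows of
# Qn have converged to it (the second eigenvalue is 0.3, so convergence is fast).
```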
Exercise 14.22
This exercise explores the stationary distribution for Gibbs sampling methods.
The convex composition $[\alpha, q_1; 1 - \alpha, q_2]$ of $q_1$ and $q_2$ is a transition probability distribution that first chooses one of $q_1$ and $q_2$ with probabilities $\alpha$ and $1 - \alpha$, respectively, and then applies whichever is chosen. Prove that if $q_1$ and $q_2$ are in detailed balance with $\pi$, then their convex composition is also in detailed balance with $\pi$. (Note: this result justifies a variant of GIBBS-ASK in which variables are chosen at random rather than sampled in a fixed sequence.)
Prove that if each of $q_1$ and $q_2$ has $\pi$ as its stationary distribution, then the sequential composition $q = q_1 \circ q_2$ also has $\pi$ as its stationary distribution.
Exercise 14.23
The Metropolis–Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples $\mathbf{x}$ (eventually) according to target probabilities $\pi(\mathbf{x})$. (Typically we are interested in sampling from $\pi(\mathbf{x}) = P(\mathbf{x} \mid \mathbf{e})$.) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new state $\mathbf{x}'$ from a proposal distribution $q(\mathbf{x}' \mid \mathbf{x})$, given the current state $\mathbf{x}$. Then, it probabilistically accepts or rejects $\mathbf{x}'$ according to the acceptance probability
$$\alpha(\mathbf{x}' \mid \mathbf{x}) = \min\!\left(1, \frac{\pi(\mathbf{x}')\,q(\mathbf{x} \mid \mathbf{x}')}{\pi(\mathbf{x})\,q(\mathbf{x}' \mid \mathbf{x})}\right).$$
If the proposal is rejected, the state remains at $\mathbf{x}$.
Consider an ordinary Gibbs sampling step for a specific variable $X_i$. Show that this step, considered as a proposal, is guaranteed to be accepted by Metropolis–Hastings. (Hence, Gibbs sampling is a special case of Metropolis–Hastings.)
Show that the two-step process above, viewed as a transition probability distribution, is in detailed balance with $\pi$.
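A minimal Metropolis–Hastings loop over a small discrete state space, illustrating the propose/accept cycle described above; the target distribution and symmetric random-walk proposal are made up for illustration:

```python
import random

random.seed(0)
target = [0.1, 0.2, 0.3, 0.4]        # pi(x) for states 0..3

def propose(x):
    # Symmetric random walk on {0,1,2,3} with wraparound, so q(x'|x) = q(x|x').
    return (x + random.choice([-1, 1])) % 4

n_steps = 200_000
counts = [0, 0, 0, 0]
x = 0
for _ in range(n_steps):
    x_new = propose(x)
    # Acceptance probability min(1, pi(x') q(x|x') / (pi(x) q(x'|x)));
    # the proposal terms cancel here because the walk is symmetric.
    alpha = min(1.0, target[x_new] / target[x])
    if random.random() < alpha:
        x = x_new
    counts[x] += 1           # a rejected proposal re-counts the current state

freqs = [c / n_steps for c in counts]
```

After enough steps, the visit frequencies approximate the target probabilities, which is exactly what the detailed-balance property in the last part guarantees.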
Exercise 14.24
Three soccer teams $A$, $B$, and $C$ play each other once. Each match is between two teams, and can be won, drawn, or lost. Each team has a fixed, unknown degree of quality—an integer ranging from 0 to 3—and the outcome of a match depends probabilistically on the difference in quality between the two teams.
Construct a relational probability model to describe this domain, and suggest numerical values for all the necessary probability distributions.
Construct the equivalent Bayesian network for the three matches.
Suppose that in the first two matches $A$ beats $B$ and draws with $C$. Using an exact inference algorithm of your choice, compute the posterior distribution for the outcome of the third match.
Suppose there are $n$ teams in the league and we have the results for all but the last match. How does the complexity of predicting the last game vary with $n$?
Investigate the application of MCMC to this problem. How quickly does it converge in practice and how well does it scale?