Applied statistical theory is a new series that will cover the basic methodology and framework behind various statistical procedures. As analysts, we need to know enough about what we’re doing to be dangerous and to explain approaches to others. It’s not enough to say “I used X because the misclassification rate was low.” At the same time, we don’t need a doctoral-level understanding of approach X. I’m hoping that these posts will provide a simple, succinct middle ground for understanding various statistical techniques.

Probabilistic graphical models represent the conditional dependencies between random variables through a graph structure. Nodes correspond to random variables and edges represent statistical dependencies between the variables. Two variables are said to be conditionally dependent if they have a direct impact on each other’s values. Therefore, a graph with a directed edge from parent `$latex A_p $` to child `$latex B_c $` denotes a causal relationship. Two variables are conditionally independent if the link between them is mediated by another variable. For a graph with directed edges from `$latex A $` to `$latex B $` and from `$latex B $` to `$latex C $`, this structure implies that `$latex A $` and `$latex C $` are conditionally independent given variable `$latex B $`. Each node has a probability distribution that depends only on the value(s) of the variables with edges leading into it (its parents). For example, the probability distribution for variable `$latex C $` in the following graphic depends only on the value of variable `$latex B $`.
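The chain structure above can be checked numerically. This is a minimal sketch, not from the post itself: the probability tables below are made-up illustrative numbers for binary variables in the chain A → B → C, and the check confirms that P(C | A, B) = P(C | B) for every combination of values.

```python
# Chain A -> B -> C with binary variables; tables are illustrative, not from the post.
p_a = {0: 0.6, 1: 0.4}                       # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},          # P(B | A=0)
               1: {0: 0.2, 1: 0.8}}          # P(B | A=1)
p_c_given_b = {0: {0: 0.9, 1: 0.1},          # P(C | B=0)
               1: {0: 0.5, 1: 0.5}}          # P(C | B=1)

def joint(a, b, c):
    # Chain factorization: P(A, B, C) = P(A) * P(B | A) * P(C | B)
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def p_c_given_ab(c, a, b):
    # P(C | A, B), computed directly from the joint distribution
    num = joint(a, b, c)
    den = sum(joint(a, b, cc) for cc in (0, 1))
    return num / den

# Conditional independence: P(C | A, B) equals P(C | B) for every a and b.
for a in (0, 1):
    for b in (0, 1):
        assert abs(p_c_given_ab(1, a, b) - p_c_given_b[b][1]) < 1e-12
```

The equality holds exactly because the factor P(A) P(B | A) cancels in the numerator and denominator, leaving only P(C | B).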

Let’s consider a graphical model with `$latex K = (k_1, k_2, ... , k_n) $` variables and a set of dependencies between the variables, `$latex A = (a_1, a_2, ... , a_n) $`. For each `$latex K $` and `$latex A $`, we specify a set of conditional probability distributions for each variable given its parent variable(s). In the following directed acyclic graph, we see that `$latex P(A|B,C) = P(A|B) $`. This means that the probability of `$latex A $` is conditionally dependent only on `$latex B $`; once `$latex B $` is known, the value of `$latex C $` provides no additional information about `$latex A $`. For belief networks, inference involves computing the probability of each value of a node in the network.

There you go; the absolute basics. And below is a presentation on belief networks that I made last year.
