Generalized linear models (GLMs) extend linear regression to models with a non-Gaussian, or even discrete, response. GLM theory is predicated on the exponential family of distributions, a class rich enough to include the commonly used logit, probit, and Poisson models. Although one can fit these models in Stata by using specialized commands (for example, logit for logit models), fitting them as GLMs with Stata's glm command offers some advantages. For example, model diagnostics may be calculated and interpreted similarly regardless of the assumed distribution.
This text thoroughly covers GLMs, both theoretically and computationally, with an emphasis on Stata. The theory consists of showing how the various GLMs are special cases of the exponential family, showing general properties of this family of distributions, and showing the derivation of maximum likelihood (ML) estimators and standard errors. Hardin and Hilbe show how iteratively reweighted least squares, another method of parameter estimation, is a consequence of ML estimation using Fisher scoring.
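To make the Fisher-scoring connection concrete: for a binomial GLM with logit link, each Fisher-scoring step of ML estimation reduces to a weighted least-squares solve, which is exactly the iteratively reweighted least squares (IRLS) algorithm the book describes. The sketch below is an illustration in NumPy rather than Stata (which the book uses); the function name `irls_logit` and all details are this reviewer's own, not the authors'.

```python
import numpy as np

def irls_logit(X, y, n_iter=25, tol=1e-8):
    """Fit a logistic regression by iteratively reweighted least squares.

    Each iteration solves a weighted least-squares problem whose weights
    and working response come from the current linearization; this is the
    Fisher-scoring update for the binomial GLM with logit link.
    (Illustrative sketch only; no safeguards for separation or zero weights.)
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                    # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))   # inverse logit link: fitted probabilities
        w = mu * (1.0 - mu)               # iteration weights (binomial variance function)
        z = eta + (y - mu) / w            # working (adjusted) response
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

The same skeleton fits any GLM: only the inverse link, the variance function, and hence the weights change with the assumed family, which is why diagnostics based on these quantities carry over across models.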
Why are stats specialists usually incapable of explaining their knowledge to human beings? If you're going to explain something statistically rather simple, it doesn't help to start off with a solid page of Greek characters. The fact that I knew how to explain the material I did know in simpler (and more concise) terms made me much less willing to slog through the material I didn't know.