Using R to make sense of the generalized linear model

Statistical models

Modern science rests on several foundational pillars. Among these is the ability to construct theoretical abstractions solid enough to explain concrete, observable aspects of the world. Once the range of applicability of a theoretical model has been defined, it is usually compared to another model with a similar goal (i.e., another model that aims to explain similar aspects of the world). However, we rarely compare theoretical models directly. Instead, we end up comparing statistical models that aim to represent these theories.

According to Rouder, Morey, & Wagenmakers (2016), “Models are devices that connect theories to data. A model is an instantiation of a theory as a set of probabilistic statements”. One common and convenient example of such an instantiation is the linear model, which, in its general form, allows us to predict one or more parameters of a distribution that is assumed to reflect the process from which the observed data arise (the data-generating process).

But what does it mean for a model to predict something? In this post, I focus on four R functions (predict, fitted, residuals, and simulate) and explore their similarities and differences to illustrate the mechanisms and assumptions of the generalized linear model.

The usual linear model is of the following form.

\[ \begin{aligned} y_{i} &\sim \mathrm{Normal}(\mu_{i}, \sigma) \\ \mu_{i} &= \alpha + \beta \cdot \text{x}_{i} \\ \end{aligned} \]

Where, in Bayesian terms, the first line of the model corresponds to the likelihood, that is, the assumption made about the data-generating process. We assume that the outcomes \(y_{i}\) are normally distributed around a mean \(\mu_{i}\) with some error \(\sigma\). This is equivalent to saying that the errors are normally distributed around \(0\).

Of course, the distributional assumption is not restricted to the Gaussian case, and can be adapted to whatever distribution makes sense given the data at hand. The linear aspect of the linear model actually refers to the second line of the description above, in which we predict parameters of the distribution (e.g., \(\mu_{i}\) or \(\sigma\)) by a linear combination of one or more predictor variables. Generalising to other distributions, the generalized linear model can be written as:

\[ \begin{aligned} y_{i} &\sim \mathrm{D}(f(\eta_{i}), \theta) \\ \eta &= \mathbf{X} \beta \\ \end{aligned} \]

Where the response \(y_{i}\) is predicted through the linear combination \(\eta\) of predictors transformed by the inverse link function \(f\), assuming a certain distribution \(D\) for \(y\) (also called the family), and family-specific parameters \(\theta\) (Bürkner, 2017).
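To get a more concrete sense of what the inverse link function \(f\) does, here is a minimal sketch (not tied to the dataset used below), using the logit link of the binomial family: the linear predictor \(\eta\) lives on an unbounded scale, and the inverse link maps it back onto the scale of the distributional parameter (here, a probability).

fam <- binomial(link = "logit") # family object, as used by glm()

eta <- c(-2, 0, 2) # three values of the linear predictor (unbounded scale)
fam$linkinv(eta)   # mapped to probabilities, approximately 0.12, 0.50, 0.88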

Below, we illustrate a simple Gaussian linear model using the Howell1 dataset from the rethinking package (McElreath, 2016), which contains data about 544 individuals, including height (centimeters), weight (kilograms), age (years) and gender (0 indicating female and 1 indicating male).

library(rethinking)
library(tidyverse)
library(ggExtra)

data(Howell1)

d <-
    Howell1 %>%
    filter(age >= 18)

p <- 
    d %>%
    ggplot(aes(x = weight, y = height) ) +
    geom_point(size = 2, alpha = 0.3) +
    geom_smooth(method = "lm", colour = "black") +
    theme_bw(base_size = 12)

ggMarginal(p, type = "histogram")

A quick visual exploration of the dataset reveals a positive relationship between height and weight. The regression line plotted above corresponds to the following model, where we assume a normal likelihood:

\[ \begin{aligned} \text{height}_{i} &\sim \mathrm{Normal}(\mu_{i}, \sigma) \\ \mu_{i} &= \alpha + \beta \cdot \text{weight}_{i} \\ \end{aligned} \]

This model can be fitted easily in R with the following syntax.

(mod1 <- lm(height ~ weight, data = d) )
## 
## Call:
## lm(formula = height ~ weight, data = d)
## 
## Coefficients:
## (Intercept)       weight  
##     113.879        0.905

The intercept (113.879) represents the predicted height when weight is 0 (which does not make much sense in our case), while the slope (0.905) represents the change in height when weight increases by one unit (i.e., one kilogram).
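As a quick sketch (the exact bounds depend on the fitted model), the estimated coefficients and their uncertainty can be extracted with the coef and confint functions:

coef(mod1)    # point estimates of the intercept and the slope
confint(mod1) # 95% confidence intervals for these two estimates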

Prediction is key: predict and fitted

The main advantage of the previous model is that it allows us to make predictions for any value of \(\text{weight}\). In R, this is done using the aptly named predict function. For instance, we can ask our model what height is expected for an individual weighing 43 kg, which is equal to \(\alpha + \beta \cdot 43\).
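This prediction can be obtained directly from the fitted model (a quick check; the exact value depends on the unrounded coefficients):

predict(mod1, newdata = data.frame(weight = 43) )
# equivalent to coef(mod1)[1] + coef(mod1)[2] * 43, roughly 113.879 + 0.905 * 43 ≈ 152.8

The plot below represents this prediction graphically.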

wght <- 43

d %>%
    ggplot(aes(x = weight, y = height) ) +
    geom_line(aes(y = predict(mod1) ), size = 1) +
    geom_point(size = 2, alpha = 0.3) +
    geom_segment(
        x = wght, xend = wght,
        y = 0, yend = predict(mod1, newdata = data.frame(weight = wght) ),
        linetype = 2) +
    geom_segment(
        x = 0, xend = wght,
        y = predict(mod1, newdata = data.frame(weight = wght) ),
        yend = predict(mod1, newdata = data.frame(weight = wght) ),
        linetype = 2) +
    theme_bw(base_size = 12)

Implementing the predict function by hand is quite easy and will help us understand how it works. This function simply retrieves the parameters of the fitted model (in our case, the intercept and the slope) and uses them to make predictions about the outcome variable, given some values of the predictor(s). In other words, it corresponds to the second line of our model.

d <- 
    d %>%
    mutate(
        pred_mod1 = predict(mod1),
        pred_mod1_2 = coef(mod1)[1] + coef(mod1)[2] * weight
        )

head(d)
##    height   weight age male pred_mod1 pred_mod1_2
## 1 151.765 47.82561  63    1  157.1630    157.1630
## 2 139.700 36.48581  63    0  146.9001    146.9001
## 3 136.525 31.86484  65    0  142.7180    142.7180
## 4 156.845 53.04191  41    1  161.8839    161.8839
## 5 145.415 41.27687  51    0  151.2362    151.2362
## 6 163.830 62.99259  35    1  170.8895    170.8895

We could also be interested in predicting the height of individuals with weights other than the ones we observed (e.g., weights between 80 and 100 kg). Below, we compute predictions for this new set of weights (i.e., the \(\mu_{i}\)) and simulate new data from our model (i.e., we simulate heights).

# generating weights from 80 to 100 kg
data.frame(weight = 80:100) %>%
    # retrieving mod1 predictions
    mutate(pred = predict(mod1, newdata = .) ) %>%
    # simulating data from our model, taking into account sigma
    mutate(sim =  pred + rnorm(length(weight), 0, sd(residuals(mod1) ) ) ) %>%
    # plotting these predictions
    ggplot(aes(x = weight, y = pred) ) +
    geom_line(size = 1) +
    geom_point(
        aes(x = weight, y = sim),
        size = 2, alpha = 0.9) +
    geom_segment(
        aes(
            xend = weight,
            yend = sim
            ),
        size = 0.5, alpha = 0.5, lineend = "round") +
    theme_bw(base_size = 12)

Here, the vertical lines represent deviations from the predicted values. OK, so we’ve seen that the predict function simply uses the linear model to make predictions about the \(\mu_{i}\).

You might know that there exists a similar function, the fitted function, which allows us to extract the fitted values of a model.

d <- 
    d %>%
    mutate(fitted_mod1 = fitted(mod1) )

head(d)
##    height   weight age male pred_mod1 pred_mod1_2 fitted_mod1
## 1 151.765 47.82561  63    1  157.1630    157.1630    157.1630
## 2 139.700 36.48581  63    0  146.9001    146.9001    146.9001
## 3 136.525 31.86484  65    0  142.7180    142.7180    142.7180
## 4 156.845 53.04191  41    1  161.8839    161.8839    161.8839
## 5 145.415 41.27687  51    0  151.2362    151.2362    151.2362
## 6 163.830 62.99259  35    1  170.8895    170.8895    170.8895

Surprisingly, the predict and fitted functions seem to do the exact same thing (at least their results are identical here)… but do they? To answer this question, let’s ask another one.

Errors and residuals: the residuals function

A very common fallacy about the assumptions of the linear (Gaussian) model is that the outcome variable should be normally distributed. Instead, this assumption concerns the distribution of the outcome variable around its predicted value (i.e., the distribution of the errors).
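To make this concrete, here is a minimal sketch with made-up data (not the Howell1 data): the outcome below is clearly bimodal, yet the residuals of the linear model are approximately normal, so the normality assumption is not violated.

set.seed(1234)
x <- c(rnorm(100, mean = 40, sd = 2), rnorm(100, mean = 70, sd = 2) ) # bimodal predictor
y <- 100 + 1 * x + rnorm(200, mean = 0, sd = 2) # normally distributed errors around the line

hist(y) # the outcome itself is clearly bimodal...
hist(residuals(lm(y ~ x) ) ) # ...but the residuals are approximately normal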

What we actually said above is that the errors \(\epsilon_{i}\) should be normally distributed around \(0\), or equivalently, that the outcomes should be normally distributed around their predicted values. But the errors are the non-observed (and non-observable) differences between the theoretical predicted values \(\mu_{i}\) and the observed outcomes. Consequently, we do not have access to them. Instead, what we can work with are the residuals \(e_{i}\), which can be seen as an estimate (from the sample) of the errors \(\epsilon_{i}\), in the same way that \(b\) is an estimate of \(\beta\). To sum up, the residuals are \(e_{i} = y_{i} - \hat{y}_{i}\) (where \(\hat{y}_{i}\) is the prediction of the fitted model), while the errors are \(\epsilon_{i} = y_{i} - \mu_{i}\).

In other words, errors pertain to the data-generating process, whereas residuals are the differences between the model’s estimates and the observed outcomes. Basically, a residual is the difference between the observed value and the predicted value. We can obtain them easily using the residuals function (which is especially useful for more complex models), or by subtracting the predicted value from each observed outcome \(y_{i}\).

d <- 
    d %>%
    mutate(
        res1 = residuals(mod1),
        res2 = height - pred_mod1
        )

d %>%
    select(height, weight, male, pred_mod1, res1, res2) %>%
    head
##    height   weight male pred_mod1      res1      res2
## 1 151.765 47.82561    1  157.1630 -5.397960 -5.397960
## 2 139.700 36.48581    0  146.9001 -7.200111 -7.200111
## 3 136.525 31.86484    0  142.7180 -6.193000 -6.193000
## 4 156.845 53.04191    1  161.8839 -5.038870 -5.038870
## 5 145.415 41.27687    0  151.2362 -5.821164 -5.821164
## 6 163.830 62.99259    1  170.8895 -7.059520 -7.059520

Below, we plot these residuals, making the alpha (i.e., the transparency) and the size of the points depend on the distance to the predicted value (so that larger residuals appear bigger and less transparent). This distance is also represented by the length of the vertical lines.

d %>%
    sample_frac(.5) %>% # selecting a subsample of the data
    ggplot(aes(x = weight, y = height) ) +
    geom_line(aes(y = pred_mod1), size = 1) +
    geom_point(aes(alpha = abs(res1), size = abs(res1) ) ) +
    guides(alpha = FALSE, size = FALSE) +
    geom_segment(aes(xend = weight, yend = pred_mod1, alpha = abs(res1) ) ) +
    theme_bw(base_size = 12)

If we take all the vertical lines above (i.e., the residuals) and plot their distribution, we can compare it to a normal distribution with mean 0 and standard deviation equal to the standard deviation of the residuals, to check our assumption about the distribution of the residuals (here, a normality assumption).

d %>%
    ggplot(aes(x = res1) ) +
    geom_histogram(aes(y = ..density..), bins = 20, alpha = 0.6) +
    geom_line(aes(y = dnorm(res1, mean = 0, sd = sd(res1) ) ), size = 1) +
    guides(fill = FALSE) +
    theme_bw(base_size = 12)

To sum up, the distributional assumption of the linear model (Gaussian or otherwise) concerns the distribution of the errors, which we do not have access to. Instead, we can examine the distribution of the residuals, which are the observed differences between the outcome variable and the predictions of the model.
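Besides the histogram above, a quantile-quantile plot (and, with caution, a formal test) offers another quick way to examine this assumption. A minimal sketch using base R functions:

qqnorm(residuals(mod1) ) # residual quantiles against theoretical normal quantiles
qqline(residuals(mod1) ) # points falling close to the line suggest approximate normality
shapiro.test(residuals(mod1) ) # formal normality test (sensitive to sample size)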

Prediction uncertainty: the simulate function

Recall that our model is:

\[ \begin{aligned} \text{height}_{i} &\sim \mathrm{Normal}(\mu_{i}, \sigma) \\ \mu_{i} &= \alpha + \beta \cdot \text{weight}_{i} \\ \end{aligned} \]

So far, we have used the linear part of the linear model (i.e., the second line) to make predictions about the \(\mu_{i}\), from \(\alpha\) and \(\beta\). In other words, we made predictions about the mean of the Gaussian distribution for a specific value of weight.

Now, if we want to generate actual data (i.e., the \(y_{i}\)) from our model, we still have to take \(\sigma\) into account. This can be done “by hand”, by adding some error (i.e., random variation) to each prediction.

d <- 
    d %>%
    mutate(
        # prediction plus one random error per observation
        sim1 = coef(mod1)[1] + coef(mod1)[2] * weight + rnorm(n(), 0, sd(residuals(mod1) ) )
        )

d %>%
    select(height, weight, male, pred_mod1, sim1) %>%
    head
##    height   weight male pred_mod1     sim1
## 1 151.765 47.82561    1  157.1630 151.4238
## 2 139.700 36.48581    0  146.9001 141.1610
## 3 136.525 31.86484    0  142.7180 136.9789
## 4 156.845 53.04191    1  161.8839 156.1448
## 5 145.415 41.27687    0  151.2362 145.4970
## 6 163.830 62.99259    1  170.8895 165.1504

The simulate function allows us to automate the previous step and to run it nsims times.

nsims <- 1e4
sims <- simulate(mod1, nsim = nsims) %>% data.frame

sims[1:6, 1:6]
##      sim_1    sim_2    sim_3    sim_4    sim_5    sim_6
## 1 152.3493 153.5202 162.2734 149.3439 156.0287 151.2812
## 2 152.6136 146.6698 154.2646 156.4498 147.2208 149.9570
## 3 142.7372 141.3283 139.8621 146.1231 147.2462 140.1070
## 4 172.3288 166.0919 161.4925 161.4784 162.9176 160.0321
## 5 151.4443 153.6922 147.7561 151.4084 146.7019 159.2652
## 6 167.7441 168.0395 168.0080 174.1996 173.9836 172.7726
lower_ci_sim <- apply(sims, 1, function(x) quantile(x, probs = 0.025) )
upper_ci_sim <- apply(sims, 1, function(x) quantile(x, probs = 0.975) )

sims_summary <-
    data.frame(
        lower = lower_ci_sim,
        upper = upper_ci_sim
        )

d %>%
    ggplot(aes(x = weight, y = height) ) +
    geom_smooth(method = "lm", color = "black") +
    geom_point(size = 2, alpha = 0.3) +
    geom_ribbon(
        data = sims_summary, inherit.aes = FALSE,
        aes(x = d$weight, ymin = lower, ymax = upper), alpha = 0.1) +
    theme_bw(base_size = 12)

The first shaded region around the regression line represents the 95% confidence interval around the regression line (i.e., the uncertainty about \(\mu_{i}\)), while the second, broader shaded area represents the central 95% quantiles of the Gaussian distribution at each value of weight. The latter is computed by simulating nsims height values at each observed weight and taking the 2.5% and 97.5% quantiles of these simulated values.
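Note that for a simple Gaussian model like this one, predict can also return both kinds of intervals analytically, without simulation (a quick sketch; these should closely match the simulation-based intervals above):

# analytic 95% confidence and prediction intervals for a few new weights
predict(mod1, newdata = data.frame(weight = c(40, 50, 60) ), interval = "confidence")
predict(mod1, newdata = data.frame(weight = c(40, 50, 60) ), interval = "prediction")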

Conclusions

In this post, we tried to shed light on some concepts of the generalized linear model by examining the similarities and differences between four common R functions. It should be acknowledged that these functions (predict, fitted, residuals, and simulate) work with many different types of models, including models fitted with lme4 or brms. For instance, here is an overview of the available methods in my current session.

methods(predict)
##  [1] predict.ar*                predict.Arima*            
##  [3] predict.arima0*            predict.glm               
##  [5] predict.glmmPQL*           predict.gls*              
##  [7] predict.gnls*              predict.HoltWinters*      
##  [9] predict.lda*               predict.lm                
## [11] predict.lme*               predict.lmList*           
## [13] predict.loess*             predict.lqs*              
## [15] predict.mca*               predict.mlm*              
## [17] predict.nlme*              predict.nls*              
## [19] predict.polr*              predict.poly*             
## [21] predict.ppr*               predict.prcomp*           
## [23] predict.princomp*          predict.psych*            
## [25] predict.qda*               predict.rlm*              
## [27] predict.smooth.spline*     predict.smooth.spline.fit*
## [29] predict.StructTS*         
## see '?methods' for accessing help and source code

References


Bürkner, P.-C. (2017). brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80(1), 1–28. doi:10.18637/jss.v080.i01

McElreath, R. (2016). Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Chapman & Hall/CRC.

Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2016). The Interplay between Subjectivity, Statistical Practice, and Psychological Science. Collabra, 2(1), 1–12. doi:10.1525/collabra.28

Notes


  1. For more details on the logit model, please check this previous post.
