Difference between LMEs and GLMMs?

In summary, the conversation discusses linear mixed effects models (LMEs) and generalized linear mixed effects models (GLMMs). In an LME the coefficients have both fixed and random parts, and the random slopes and intercepts model between-subject heterogeneity. The original poster had the impression that GLMMs have only random coefficients and asked how an LME could then be a special case of a GLMM. The reply argues that an LME is indeed a special case of a GLMM, the main difference being the potentially nonlinear link function in a GLMM, and notes that, strictly speaking, the second model in the conversation would not be a GLMM because it has no fixed effects. Cross Validated is suggested as a good resource for further discussion and input from experts.
  • #1
FallenApple
So I know that a linear mixed model has coefficients that are fixed and random. From what I understand, the fixed coefficients are still good since the random slopes/intercepts capture between-subject heterogeneity, which I suppose helps estimate the cross-sectional effects (i.e. the fixed betas) better.

Here is an example of an LME:

##Y_{i,j} = (\beta_{0}+b_{0,i})+(\beta_{1}+b_{1,i})X_{i,j}+\epsilon_{i,j}##
Where ##(b_{0,i},b_{1,i})\underset{i.i.d.}{\sim} N((0,0),G)##
##\epsilon_{i,j}\underset{i.i.d.}{\sim} N(0,\sigma^{2}_{\epsilon})##

So I see here that the "coefficients" consist of a random part and a fixed part.
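
For concreteness, a model of this form can be fitted in R with lmer from the lme4 package; here is a small sketch using the sleepstudy data that ships with lme4, where Reaction plays the role of ##Y##, Days the role of ##X##, and Subject indexes ##i##:

Code:
library(lme4)

# Random intercept and random slope for each Subject, plus fixed intercept and slope
fit_lme <- lmer(Reaction ~ 1 + Days + (1 + Days | Subject), data = sleepstudy)
summary(fit_lme)  # fixed effects correspond to beta0, beta1;
                  # the random-effect covariance matrix corresponds to G;
                  # the residual variance corresponds to sigma_epsilon^2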

But I heard that LMEs are just special cases of generalized linear mixed effects models (GLMMs). Isn't that a contradiction? GLMM estimates are only within-subject, whereas for LMEs I have population-based estimates as well as within-subject ones.

Here is an example of a GLMM:

##\text{link}(\mu_{i,j}) = \beta_{0} +\beta_{1}X_{i,j}##

Where here, apparently, the betas are random.

So how can the LME be a subcase of the GLMM if there is no fixed component about which the individual can vary?
 
  • #2
My understanding was that GLMMs are like GLMs except that they contain random as well as fixed effects. If so then, strictly speaking, that last one would not be a GLMM because it contains no fixed effects. But I suspect that in practice GLMMs are understood to include all models that, via a link transformation, are linear combinations of random and/or fixed effects, in which case we would include the second example as a somewhat-degenerate GLMM.

I would have thought that, in the above, the first model can be expressed as an instance of the second by mapping ##(\beta_0+b_{0,i})+\epsilon_{i,j}## and ##(\beta_1+b_{1,i})## in the first model to ##\beta_0## and ##\beta_1## in the second.

Thus the remaining difference in structure between the two is that the second has a potentially non-linear link function, making the first a special case of the second, with identity link function.
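
To spell that out in the notation above, the general subject-specific form would be ##\text{link}(\mu_{i,j}) = (\beta_{0}+b_{0,i})+(\beta_{1}+b_{1,i})X_{i,j}## with ##\mu_{i,j}=E(Y_{i,j}\mid b_{0,i},b_{1,i})##; taking the link to be the identity and the conditional distribution of ##Y_{i,j}## to be normal with variance ##\sigma^{2}_{\epsilon}## recovers the first model.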

Although I am very far from an expert on this, I did a bit of modelling last year using the function glmer from the R package lme4, which implements GLMMs. The function is general enough that the linear predictor can contain all random effects, all fixed effects, or a mixture of both, and it supports a range of link functions including the identity (giving a linear model).
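
As a sketch, here is the standard binomial example from the lme4 documentation, using the cbpp data set that ships with the package; it has a fixed effect (period) and a random effect (herd), with a logit link:

Code:
library(lme4)

# Binomial GLMM: fixed effect of period, random intercept per herd, logit link
fit_glmm <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                  data = cbpp, family = binomial(link = "logit"))
summary(fit_glmm)
# The identity-link Gaussian case of this model family is the linear mixed model,
# which lme4 fits with lmer rather than glmer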

If you want input from the real experts, a good place to get answers to statistical practice questions like this is Cross Validated (stats.stackexchange.com). It is distinct from Stack Overflow, where one asks questions about how to make specific programs like SAS, Stata or R do certain things.
 

What is the difference between LMEs and GLMMs?

LMEs (linear mixed effects models) and GLMMs (generalized linear mixed models) are both statistical models for data that has both fixed and random effects. The main difference lies in the type of response variable they can handle: LMEs are used for continuous, (conditionally) normally distributed responses, while GLMMs can also handle non-normal responses such as binary, categorical, or count outcomes by way of a link function.

When should LMEs be used over GLMMs?

LMEs should be used when the response variable is continuous and the residuals are approximately normally distributed. In addition, LMEs are appropriate when the relationship between the response variable and the predictor variables is linear.
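
A rough way to check these assumptions in R, using a fitted lmer model such as the fit_lme sketch above, is to inspect the residuals:

Code:
r <- resid(fit_lme)       # conditional residuals
qqnorm(r); qqline(r)      # points near a straight line: normality looks plausible
plot(fitted(fit_lme), r)  # no systematic pattern: the linear form looks plausible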

What are the advantages of using GLMMs over LMEs?

GLMMs have the advantage of being able to handle both continuous and categorical response variables, which makes them more versatile and suitable for a wider range of data. In addition, GLMMs can handle non-normally distributed responses, and they only require the relationship between the response and the predictor variables to be linear on the scale of the link function rather than on the scale of the response itself.
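
For example, a count response can be modelled directly with a Poisson GLMM (log link); here is a small sketch with simulated data, where all variable names are made up:

Code:
library(lme4)

set.seed(1)
n_subj <- 20; n_obs <- 5
df <- data.frame(subject = factor(rep(1:n_subj, each = n_obs)),
                 x = rnorm(n_subj * n_obs))
b <- rnorm(n_subj, sd = 0.5)  # simulated subject-level random intercepts
df$count <- rpois(nrow(df), lambda = exp(0.5 + 0.3 * df$x + b[as.integer(df$subject)]))

# Poisson GLMM: fixed slope for x, random intercept per subject
fit_pois <- glmer(count ~ x + (1 | subject), data = df, family = poisson)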

Can LMEs and GLMMs be used for longitudinal data?

Yes, both LMEs and GLMMs can be used for longitudinal data, which is data that is collected over time from the same subjects. In fact, these models are commonly used for longitudinal data as they can account for the correlation between repeated measurements from the same subject.
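
For reference, both kinds of models expect longitudinal data in long format, with one row per subject per measurement occasion; a small made-up example:

Code:
# Long format: repeated rows per subject, indexed by time (values are illustrative)
long <- data.frame(subject = rep(c("s1", "s2"), each = 3),
                   time    = rep(0:2, times = 2),
                   y       = c(1.2, 1.5, 1.9, 0.8, 1.1, 1.6))
# A random-effects term such as (1 + time | subject) then groups these repeated
# rows by subject and models the within-subject correlation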

Are there any software packages available for fitting LMEs and GLMMs?

Yes, there are several software packages available for fitting LMEs and GLMMs, such as R (for example the lme4 package mentioned above), SAS, and SPSS. These packages have built-in functions for fitting these models and can also handle the complex data structures commonly encountered in research studies.
