Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others

r - How do I compare 2 lme4 mixed models without using AIC?

I am trying to select models manually using null-hypothesis testing (for various reasons I don't want to use AIC in this case). I have used the lme4 package to construct my models, and the global model looks like this (variable names changed):

global <- lmer(Shannon ~ AN:Var1 + AN:Var2 + AN:Var3 + AN:Var4 +
                 Var1 + Var2 + Var3 + Var4 + Var5 + Var6 + Var7 +
                 (1 | Random),
               data = data, REML = FALSE)

I want to drop each variable in turn and compare the reduced model to the global one with an anova() test, but it throws various errors. What am I doing wrong?
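A reduced model can be built with update() and compared with a likelihood-ratio test; a minimal sketch of the intended comparison, using Var5 as a hypothetical example of the dropped term:

```r
# Sketch: drop Var5 from the global model and compare via a likelihood-ratio
# test (appropriate here because both models are fitted with REML = FALSE).
model_noVar5 <- update(global, . ~ . - Var5)
anova(global, model_noVar5)
```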

I've already found the top models using AIC, but some recent criticism of AIC (which I won't go into here) means that in this case I just want to strip things back. I tried a simple anova() like this:

anova(globalsessilebase, model1) 

(the models are structured like the one above; model1 has Var1 dropped), which results in this:

                   npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)
model1              14 437.55 488.83 -204.78   409.55                     
globalsessilebase   15 438.94 493.89 -204.47   408.94 0.6101  1     0.4348

which is fine as far as I know, but for some of the models (there are 11, with each variable dropped sequentially) the Chisq is 0, which I don't really understand.

I also tried drop1(), but that just gives me AIC values.
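For what it's worth, drop1() on a merMod fit only reports AIC by default; a sketch of asking it for likelihood-ratio tests instead (assuming the global model above):

```r
# drop1() with test = "Chisq" performs a likelihood-ratio test
# for each term that can be dropped while respecting marginality
drop1(global, test = "Chisq")
```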



1 Answer


I believe this doesn't have anything to do with mixed models, but is a consequence of the way that interactions with categorical variables work. If you have a categorical variable f and a numeric variable x, and a model that contains both the interaction f:x and the main effect x, then dropping x from the model doesn't actually do anything (it's not impossible to have an interaction without the corresponding main effect, but it violates the "principle of marginality" and R makes it difficult to do). Here's an example with lm() ...

dd <- data.frame(z = rnorm(20),
                 f = factor(sample(1:3, size = 20, replace = TRUE)),
                 x = rnorm(20))
m1 <- lm(z ~ f:x + x, data = dd)
m0 <- update(m1, . ~ . - x)  # "drop" the main effect of x
anova(m0, m1)

Results:

Analysis of Variance Table

Model 1: z ~ f:x
Model 2: z ~ f:x + x
  Res.Df    RSS Df   Sum of Sq F Pr(>F)
1     16 11.591                        
2     16 11.591  0 -1.7764e-15         

You can see that the residual degrees of freedom (Res.Df) and residual sums of squares (RSS) are identical, the difference in degrees of freedom is 0, the sum-of-squares difference is essentially 0 (not exactly zero, because of numerical inaccuracy), and the p-value is missing. The format of anova() output for lme4 models is slightly different, but the concept is the same.
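The same thing can be reproduced directly with lmer(); a sketch using lme4's built-in sleepstudy data, where the grouping factor f is invented purely for illustration:

```r
library(lme4)

# Invented categorical variable, just to create an interaction with Days
sleepstudy$f <- factor(rep(1:3, length.out = nrow(sleepstudy)))

fm1 <- lmer(Reaction ~ f:Days + Days + (1 | Subject),
            data = sleepstudy, REML = FALSE)
fm0 <- update(fm1, . ~ . - Days)  # f:Days alone spans the same fixed effects

# As in the lm() example, the Df difference is 0 and Chisq is essentially 0
anova(fm0, fm1)
```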

