
3. Logistic regression: or what is the probability of success?

Previous topics: 1. Introduction to statistics; 2. Linear regression. Why do we need logistic regression? For predictions, and for studying how things influence other things. Since logistic regression can easily handle both numerical and categorical predictors, it lets you check virtually anything for its ability to increase or decrease the probability (or the odds) of success. Providing probabilities makes logistic regression one of the most useful statistical tools for understanding the world.
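As a minimal hedged sketch of what this looks like in R, using the built-in mtcars data purely for illustration (the post's own example may differ): glm() with family = binomial fits a logistic regression and returns predicted probabilities of success.

```r
# Illustrative only: model the probability of a manual transmission (am = 1)
# from horsepower and weight.
model <- glm(am ~ hp + wt, data = mtcars, family = binomial)
summary(model)

# Coefficients are on the log-odds scale; exponentiate to get odds ratios.
exp(coef(model))

# Predicted probability of "success" for a hypothetical car.
predict(model, newdata = data.frame(hp = 120, wt = 2.8), type = "response")
```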

2. Linear regression vs. Statistical Tests ⚔ who wins?

Previous topics: Introduction to statistics. Why do we need linear regression? For predictions, and for studying how things influence other things. Regression is a line that tries to be as close as possible to all data points simultaneously, and in this way it describes your data using only two numbers: the intercept and the slope. If there is a relationship between two variables, you can predict one of them by knowing only the value of the other.
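As a hedged illustration (the dataset and variables are my choice, not necessarily the post's), lm() in R returns exactly those two numbers and lets you predict one variable from the other:

```r
# Illustrative only: describe mpg as a linear function of weight.
model <- lm(mpg ~ wt, data = mtcars)
coef(model)  # the two numbers that describe the line: intercept and slope

# Predict mpg for a car weighing 3000 lbs (wt is in units of 1000 lbs).
predict(model, newdata = data.frame(wt = 3.0))
```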

1. Introduction to statistics: The (small) Big Picture or how to solve 95% of statistical problems

Why do we need statistics? To learn about the world, to do science, and to develop artificial intelligence. The bad news is that statistics is unintuitive, boring, and hard to understand; otherwise, you'd already know everything. But the good news is that you don't need to understand it. You just need to know how to use statistics to get the most out of your data. Think about driving a car for a moment.

Mixed Effects Models 4: logistic regression and more

Previous topics, or when do we need it? Why do we need it? What are the benefits? Effects soup: fixed, random, nested, crossed; how to compute Generalized Linear Mixed Effects Models in R; how to conduct mixed-effects logistic regression; visualize the data; how do we describe random effects in a model? Testing significance of random and fixed effects; post-hoc analysis with the emmeans package; methods to fit (estimate) GLMMs; when NOT to use mixed-effects logistic regression; which R packages (functions) fit GLMMs?
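As a hedged sketch of the kind of call involved (the simulated data and variable names below are hypothetical, not taken from the post), lme4::glmer() with family = binomial fits a mixed-effects logistic regression, and emmeans can run post-hoc comparisons on the probability scale:

```r
library(lme4)
library(emmeans)

# Hypothetical data: a binary outcome, a two-level condition, and
# crossed subjects and items.
set.seed(1)
d <- expand.grid(subject = factor(1:20), item = factor(1:10),
                 condition = c("A", "B"))
d$correct <- rbinom(nrow(d), 1, ifelse(d$condition == "B", 0.7, 0.5))

# Mixed-effects logistic regression with random intercepts for
# subjects and items.
model <- glmer(correct ~ condition + (1 | subject) + (1 | item),
               data = d, family = binomial)
summary(model)

# Post-hoc comparison of conditions on the probability scale.
emmeans(model, pairwise ~ condition, type = "response")
```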

Mixed Effects Models 2: Crossed vs. Nested Random Effects

Previous topics. Why do we need it? What are the benefits? What's the difference? Nested random effects; crossed random effects; crossed and nested at the same time; summary; conclusions; what's next; further readings and references. Previous topics: Repeated Measures ANOVA; Mixed Effects Models 1: Random Intercept. Why do we need it? What are the benefits? If we don't account for repeated measurements, we end up with an artificially inflated sample size due to pseudoreplication and will most likely get significant results that don't make any sense.
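To make the distinction concrete, here is a hedged sketch in lme4 syntax with hypothetical grouping variables (schools/classes for nesting, subjects/items for crossing; none of this is the post's own data):

```r
library(lme4)

# Nested: classes are only meaningful within their school, so
# (1 | school/class) expands to (1 | school) + (1 | school:class).
set.seed(2)
nested <- expand.grid(school = factor(1:10), class = factor(1:4), pupil = 1:5)
nested$score <- rnorm(10)[nested$school] + rnorm(nrow(nested))
m_nested <- lmer(score ~ 1 + (1 | school/class), data = nested)

# Crossed: every subject responds to every item, so each factor gets
# its own independent random intercept.
crossed <- expand.grid(subject = factor(1:20), item = factor(1:10))
crossed$score <- rnorm(20)[crossed$subject] + rnorm(nrow(crossed))
m_crossed <- lmer(score ~ 1 + (1 | subject) + (1 | item), data = crossed)
```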

Mixed Effects Models 1: Random Intercept

Previous topics. Why do we need it? What are the benefits? How do Mixed Effects Models work? Fixed and random effects; fixed or random effects? The golden rule is > 5; MEM in R; why and how do we compare models? Why not just use everything as a fixed effect? Visualize model results; how to report the results; multiple MEM: adding another predictor; explanatory vs. predictive power of the model; interaction; post-hoc / contrast analysis; pick the final best model; a final interpretation of our best model; when NOT to use Mixed Effects Models; assumptions; what's next; further readings and references. Previous topics: Repeated Measures ANOVA; multiple linear regression. Why do we need it?
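As a hedged minimal sketch of a random-intercept model and a model comparison (the sleepstudy data shipped with lme4 is used here purely for illustration, not as the post's dataset):

```r
library(lme4)

# Random intercept: every subject gets their own baseline reaction time.
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy, REML = FALSE)

# Compare against a model without the fixed effect of Days using a
# likelihood-ratio test (both models fitted with ML, not REML).
m0 <- lmer(Reaction ~ 1 + (1 | Subject), data = sleepstudy, REML = FALSE)
anova(m0, m1)
```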

Mixed Effects Models 3: Random Slopes

Previous topics, or when do we need it? Why do we need it? What are the benefits? How to compute Random Slopes Mixed Effects Models in R; simple random-slope model; multiple random-slope model; on how to select and compare models; the golden rule; how to report results; how to visualize a random-slope model; visualize predictions of MEM; check all the predictors; post-hoc analysis; choose the final (best) model; when NOT to use Mixed Effects Models; assumptions; what's next; further readings and references. To keep this post short, I'll skip many of the explanations made in the previous posts.
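A hedged sketch of the random-slope syntax, again borrowing lme4's sleepstudy data only as an illustration:

```r
library(lme4)

# Random intercept AND random slope: each subject gets their own
# baseline and their own rate of change over days.
m_slope <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# Does allowing slopes to vary improve the model? Compare against the
# random-intercept-only fit (refit = FALSE keeps the REML fits, which is
# appropriate when only the random effects differ).
m_int <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
anova(m_int, m_slope, refit = FALSE)
```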

Statistical tests vs. linear regression

Previous topics. Head-to-head comparisons: one-sample t-test vs. linear regression; paired-sample t-test vs. linear regression; unpaired-sample t-test vs. linear regression; one-way ANOVA vs. linear regression; two-way ANOVA vs. linear regression; one-way ANCOVA vs. linear regression; any-way MANOVA vs. linear regression; any-way MANCOVA vs. linear regression. Do we need both? What are the pros? Conclusion: which one is better? What's next; further readings and references. This post was inspired by this article.
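As a hedged taste of the equivalence the post explores (an illustrative example with mtcars, not the post's own data): an unpaired t-test with pooled variance and a linear regression on a binary predictor produce the identical t statistic and p-value:

```r
# Two groups: automatic (am = 0) vs. manual (am = 1) transmission.
t.test(mpg ~ am, data = mtcars, var.equal = TRUE)

# The same comparison as a regression: the slope of 'am' is the group
# difference, with the same t statistic and p-value.
summary(lm(mpg ~ am, data = mtcars))
```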

Model diagnostics

The interpretation and understanding of the model is both important and difficult. In this post we demystify the summary and plot of the simple linear model. Think of them as concentrated, useful information that tells the story of your data. Summary:

```r
model <- lm(mpg ~ hp, mtcars)
summary(model)
##
## Call:
## lm(formula = mpg ~ hp, data = mtcars)
##
## Residuals:
##    Min     1Q Median     3Q    Max
## -5.
```
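The plot side of the diagnostics uses base R's plot method for lm objects; a brief hedged sketch (standard calls, not necessarily the post's exact code):

```r
# The four standard diagnostic plots: residuals vs. fitted, normal Q-Q,
# scale-location, and residuals vs. leverage.
model <- lm(mpg ~ hp, mtcars)
par(mfrow = c(2, 2))
plot(model)
```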

Multiple linear regression

Nature and life are complex phenomena, and explaining them with a single variable is simply impossible; that is why simple linear regression needs to be complemented with several predictors. But since we often have a lot of predictors, we often need to conduct variable selection. Backward selection cannot be used if p > n, while forward selection can always be used. Forward selection is a greedy approach and might include variables early that later become redundant.
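As a hedged sketch of both directions in base R (AIC-based step() is one common way to do this; the predictors chosen here are illustrative, not the post's):

```r
# Illustrative scope: a handful of mtcars predictors.
full  <- lm(mpg ~ hp + wt + disp + drat, data = mtcars)
empty <- lm(mpg ~ 1, data = mtcars)

# Backward selection: start from the full model and drop predictors
# while AIC improves. Requires p < n to fit the full model at all.
backward <- step(full, direction = "backward", trace = FALSE)

# Forward selection: start from the empty model and greedily add
# predictors from the scope.
forward <- step(empty, direction = "forward",
                scope = formula(full), trace = FALSE)

formula(backward)
formula(forward)
```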