A logistic regression model can be run to determine if one or more predictors explain variation in a categorical outcome. The most common logistic regression method (covered here) is binary logistic regression, which is run on a dichotomous outcome variable. The category of interest (sometimes referred to as a "success") is coded as 1 and the other category ("failure") is coded as 0. For example, if you are interested in whether or not a drug is effective at reducing nausea, patients who experienced a reduction in nausea would be coded as 1 and those who did not experience a reduction would be coded as 0.

Logistic regression is a type of generalized linear model, meaning that a link function (the logit) is applied to the outcome variable to estimate the effect each predictor variable has on the probability of "success" in the outcome variable.

Logistic Model Equation (for k predictors):

ln(p / (1 − p)) = b0 + b1x1 + b2x2 + … + bkxk

where p is the probability of "success."

Each coefficient estimate from a logistic regression is the natural log of the odds of a "success." Typically, the estimates for each predictor are exponentiated and reported as odds ratios for ease of interpretation. For example, suppose your outcome variable is whether or not a dog gets adopted from an animal shelter and your predictor variable is whether or not the dog is a puppy. The model results might look something like this:

Predictor   Estimate   z      p
Puppy       0.85       3.15   < 0.01

To interpret the estimate for Puppy, you would calculate e^0.85 = 2.34 and say that if a dog is a puppy, the odds of being adopted are 2.34 times higher than for a dog that is not a puppy (z = 3.15, p < 0.01).

Hypotheses:

Each predictor will have its own set of hypotheses:

H0: Controlling for all other predictors in the model, this predictor variable does not explain variation in the outcome.
HA: Controlling for all other predictors in the model, this predictor variable does explain variation in the outcome.

Using Excel and its built-in optimization tool, the Solver, logistic regression is no more difficult to perform than any other regression.
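The Solver's job in this setup is to find the coefficients that maximize the model's log-likelihood. As a minimal sketch of that same idea in Python, the following fits the one-predictor puppy/adoption model by numerical optimization; the dataset is invented for illustration, and scipy's `minimize` stands in for Excel's Solver:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical shelter data (invented for illustration):
# x = 1 if the dog is a puppy, y = 1 if it was adopted.
x = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0])
y = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

def neg_log_likelihood(beta):
    """Negative binomial log-likelihood of the logistic model."""
    b0, b1 = beta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))  # inverse logit
    p = np.clip(p, 1e-9, 1 - 1e-9)            # guard log(0) during search
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# The Solver's role: choose b0 and b1 to maximize the likelihood
# (equivalently, minimize its negative).
result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
b0, b1 = result.x

# The coefficient is the natural log of the odds ratio;
# exponentiate it to report an odds ratio.
odds_ratio = np.exp(b1)
print(f"b1 = {b1:.2f}, odds ratio = {odds_ratio:.2f}")
```

The final step mirrors the interpretation above: with the article's estimate of 0.85, the same exponentiation, `np.exp(0.85)`, gives approximately 2.34.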