Discovering Statistics Using IBM SPSS Statistics: North American Edition
In a scientific context, parsimony refers to the idea that simpler explanations of a phenomenon are preferable to complex ones.
Linearity: In the linear model we assume that the outcome has linear relationships with the predictors.
Independence of errors: In logistic regression, violating this assumption produces overdispersion.
Usually this is revealed by implausibly large standard errors. Two situations can cause this problem, both of which are related to the ratio of cases to variables: incomplete information and complete separation.
Conscientious researchers produce and check multi-way crosstabulations of all categorical independent variables.
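A quick way to run that check outside SPSS is a pandas crosstabulation; the variable names below (intervention, duration) are hypothetical stand-ins for whatever categorical predictors are in your model, and empty cells flag category combinations with incomplete information:

    import pandas as pd

    # Hypothetical categorical predictors
    df = pd.DataFrame({
        "intervention": ["yes", "yes", "no", "no", "no"],
        "duration":     ["short", "long", "short", "short", "long"],
    })

    # Multi-way crosstabulation: cells containing 0 mark combinations
    # of categories for which no data were collected
    print(pd.crosstab(df["intervention"], df["duration"]))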
The second situation in which logistic regression collapses might surprise you: it’s when the outcome variable can be perfectly predicted by one variable or a combination of variables. This situation is known as complete separation.
Complete separation often arises when too many variables are fitted to too few cases.
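A minimal sketch of what separation does to the estimates, assuming statsmodels is available: a single predictor perfectly splits the outcome, and depending on the statsmodels version the fit either raises a perfect-separation error or returns coefficients with implausibly large standard errors.

    import numpy as np
    import statsmodels.api as sm

    # The outcome is 0 whenever x < 5 and 1 whenever x > 5: complete separation
    x = np.array([1, 2, 3, 4, 6, 7, 8, 9], dtype=float)
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    try:
        fit = sm.Logit(y, sm.add_constant(x)).fit()
        print(fit.bse)  # standard errors blow up under separation
    except Exception as err:  # some versions refuse to fit at all
        print("Estimation failed:", err)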
Logistic regression is not used only to predict a two-category outcome (coded 0 and 1); it can also be used, for example, to predict proportions or outcomes with several categories.
The Wald statistic indicates that having the intervention (or not) is a significant predictor of whether the patient was cured, because the p-value is 0.002, which is less than the conventional threshold of 0.05.
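For reference, the Wald statistic is just the estimate divided by its standard error; SPSS reports the squared version, which follows a chi-square distribution with 1 degree of freedom. A sketch of the arithmetic with made-up values for b and SE(b), not the book's actual output:

    from scipy import stats

    b, se_b = 1.23, 0.40           # hypothetical coefficient and standard error
    wald = (b / se_b) ** 2         # squared z-statistic, as SPSS reports it
    p = stats.chi2.sf(wald, df=1)  # upper-tail chi-square probability
    print(f"Wald = {wald:.2f}, p = {p:.3f}")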
If the model perfectly fits the data, then this histogram should show all the cases for which the event has occurred on the right-hand side, and all the cases for which the event hasn’t occurred on the left-hand side.
If the predictor is a continuous variable, the cases will be spread across many columns.
The more the cases cluster at each end of the graph, the better; such a plot would show that when the outcome did occur (i.e., the patient was cured) the predicted probability of the event occurring is also high (i.e., close to 1).
This situation represents a model that correctly predicts the observed outcome data.
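The classification plot is easy to reproduce by hand: histogram the predicted probabilities separately for the two observed outcomes. A sketch assuming y holds the observed 0/1 outcomes and p_hat the model's predicted probabilities (both hypothetical here):

    import numpy as np
    import matplotlib.pyplot as plt

    y = np.array([0, 0, 0, 1, 1, 1])                  # observed outcomes
    p_hat = np.array([0.1, 0.2, 0.6, 0.4, 0.8, 0.9])  # predicted probabilities

    bins = np.linspace(0, 1, 11)
    plt.hist(p_hat[y == 0], bins=bins, alpha=0.5, label="event did not occur")
    plt.hist(p_hat[y == 1], bins=bins, alpha=0.5, label="event occurred")
    plt.xlabel("Predicted probability of event")
    plt.ylabel("Frequency")
    plt.legend()
    plt.show()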
A good model will ensure that few cases are misclassified.
Fitting a model without checking how well it fits the data is like buying a new pair of trousers without trying them on: a model does its job regardless of the data, but the real-life value of the model may be limited. So, our conclusions so far are fine in themselves, but to be sure that the model is a good one, it is important to examine the residuals.
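One way to examine the residuals outside SPSS is to fit the model as a binomial GLM in statsmodels, which exposes Pearson and deviance residuals directly; the data below are simulated purely for illustration:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    x = rng.normal(size=100)
    y = rng.binomial(1, 1 / (1 + np.exp(-x)))  # simulated 0/1 outcome

    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
    # Cases with |standardized residual| > 2 deserve a closer look
    print(np.sum(np.abs(fit.resid_pearson) > 2))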
As a bare minimum, report the b-values (and their standard errors and significance value), the odds ratio (and its confidence interval) and some general statistics about the model (such as the R² and goodness-of-fit statistics).
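If the model was fitted with statsmodels, everything in that minimum reporting list can be pulled from the results object; fit here is assumed to be a fitted logistic model such as the one in the sketch above:

    import numpy as np

    print(fit.params)              # b-values
    print(fit.bse)                 # standard errors
    print(fit.pvalues)             # significance values
    print(np.exp(fit.params))      # odds ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals for the odds ratios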
SPSS does not produce collinearity diagnostics in logistic regression (which creates the illusion that multicollinearity doesn’t matter).
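Because SPSS will not produce them for you, collinearity diagnostics have to be computed separately, for example on the predictor matrix itself. A sketch using variance inflation factors from statsmodels (values above about 10 are a common cause for concern); the predictor matrix here is simulated:

    import numpy as np
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(50), rng.normal(size=(50, 3))])  # intercept + 3 predictors

    for j in range(1, X.shape[1]):  # skip the intercept column
        print(f"VIF for predictor {j}: {variance_inflation_factor(X, j):.2f}")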
If you want to predict membership of more than two categories, the logistic regression model extends to multinomial logistic regression.
The model breaks the outcome variable into a series of comparisons between two categories.
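A minimal multinomial fit, assuming statsmodels and a simulated three-category outcome, makes that structure visible: the coefficient table contains one set of estimates for each comparison against the reference category.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    x = rng.normal(size=200)
    y = rng.integers(0, 3, size=200)  # outcome with three categories

    fit = sm.MNLogit(y, sm.add_constant(x)).fit()
    print(fit.summary())  # one block of coefficients per comparison with category 0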
Pseudo R-square: This option produces the Cox and Snell and Nagelkerke R² statistics, which can be used as effect sizes.
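Both statistics can be computed from the log-likelihoods of the fitted model and of the intercept-only baseline. A sketch of the formulas; with a statsmodels fit these two quantities are the llf and llnull attributes:

    import numpy as np

    def pseudo_r2(ll_model, ll_null, n):
        """Cox and Snell and Nagelkerke R-squared from two log-likelihoods."""
        cox_snell = 1 - np.exp(2 * (ll_null - ll_model) / n)
        nagelkerke = cox_snell / (1 - np.exp(2 * ll_null / n))
        return cox_snell, nagelkerke

    # e.g. pseudo_r2(fit.llf, fit.llnull, fit.nobs) for a statsmodels fit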
Step summary: This option produces a table that summarizes the predictors entered or removed at each step.
Model fitting information: This option produces a table that compares the model (or models in a stepwise analysis) to the baseline (the model with only the intercept in it). This table can be useful to compare whether the model has improved (fro...
Information Criteria: This option produces Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC), whic...
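Both criteria are simple penalized transformations of the log-likelihood, where k is the number of estimated parameters and n the sample size; smaller values indicate a better trade-off between fit and complexity:

    import numpy as np

    def aic(ll, k):
        return -2 * ll + 2 * k          # Akaike's information criterion

    def bic(ll, k, n):
        return -2 * ll + k * np.log(n)  # Schwarz's Bayesian information criterion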
Cell probabilities: This option produces a table of the observed and expected frequencies, which is basically the same as the classification table produced in binary logistic regression and is probably worth inspecting.
Classification table: This option produces a contingency table of observed versus predicted responses for all combinations of predictor variables.
Goodness-of-fit: This option is important because it produces Pearson and likelihood ratio chi-square statistics for the model.
Monotonicity measures: This option is worth selecting only if your outcome variable has two outcomes.
Estimates: This option produces the b-values, test statistics and confidence intervals for predictors in the model and is essential.
Likelihood ratio tests: The model overall is tested using likelihood ratio statistics, but this option will compute the same test for individual effects in the model.
Asymptotic correlations and Asymptotic covariances: This option produces a table of correlations (or covariances)...
Logistic regression works through an iterative process.
Remember that the log-likelihood is a measure of how much unexplained variability there is in the outcome and the change in the log-likelihood indicates how much new variance has been explained by a model relative to an earlier model.
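That change is the basis of the likelihood ratio test: twice the improvement in log-likelihood follows a chi-square distribution, with degrees of freedom equal to the number of parameters added. A sketch with hypothetical log-likelihoods:

    from scipy import stats

    ll_old, ll_new = -154.08, -144.16  # hypothetical log-likelihoods
    df = 1                             # number of parameters added
    chi_sq = 2 * (ll_new - ll_old)
    p = stats.chi2.sf(chi_sq, df)
    print(f"chi-square({df}) = {chi_sq:.2f}, p = {p:.4f}")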