Discovering Statistics Using IBM SPSS Statistics: North American Edition
Estimates: This option is selected by default because it gives us the estimated b-values for the model as well as the associated t-test and p-value (see Section 9.2.5). Confidence intervals: This option produces confidence intervals for each b-value in the model. Remember that if the model assumptions are not met these confidence intervals will be inaccurate and bootstrap confidence intervals should be used instead. Covariance matrix: This option produces a matrix of the covariances, correlation coefficients and variances between the b-values for each variable in the model. A ...more
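SPSS computes those bootstrap confidence intervals for you via the Bootstrap dialog, but the underlying idea can be sketched outside SPSS in a few lines of Python. The data, variable names and 2000 resamples below are invented purely for illustration; the sketch shows a simple percentile bootstrap for a regression slope, not the exact algorithm SPSS uses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: one predictor and an outcome (illustrative only)
n = 100
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

def fit_line(x, y):
    """Ordinary least-squares fit of y = b0 + b1*x; returns [b0, b1]."""
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# Percentile bootstrap: resample cases with replacement, refit, collect b1
boot_slopes = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)        # resample rows with replacement
    boot_slopes.append(fit_line(x[idx], y[idx])[1])

lo, hi = np.percentile(boot_slopes, [2.5, 97.5])
print(f"95% percentile bootstrap CI for the slope: [{lo:.3f}, {hi:.3f}]")
```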
Part and partial correlations: This option produces the zero-order correlation (the Pearson correlation) between each predictor and the outcome variable. It also produces the semi-partial (part) and partial correlation between each predictor and the outcome (see Sections 8.5 and 9.9.1). Collinearity diagnostics: This option produces collinearity statistics such as the VIF, tolerance, eigenvalues of the scaled, uncentered cross-products matrix, condition indexes and variance proportions (see Section 9.9.3). Durbin–Watson: This option produces the Durbin–Watson test statistic, which tests the ...more
The left-hand side lists several variables: DEPENDNT: the outcome variable. *ZPRED: the standardized predicted values of the outcome based on the model. These values are standardized forms of the values predicted by the model. *ZRESID: the standardized residuals or errors. These values are the standardized differences between the observed values of the outcome and those predicted by the model. *DRESID: the deleted residuals described in Section 9.3.2. *ADJPRED: the adjusted predicted values described in Section 9.3.2. *SRESID: the studentized residual described in Section 9.3.1. *SDRESID: the ...more
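As a rough illustration of what *ZPRED and *ZRESID contain, the Python sketch below (invented data) fits a model and standardizes the predicted values and residuals. The studentized, adjusted and deleted residuals that SPSS also offers involve leverage adjustments that are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.0 + 0.7 * x + rng.normal(scale=0.5, size=50)

# Fit the model y = b0 + b1*x by least squares
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

predicted = X @ b            # values predicted by the model
residuals = y - predicted    # observed minus predicted (the errors)

# *ZPRED-style values: predicted values expressed as z-scores
zpred = (predicted - predicted.mean()) / predicted.std(ddof=1)

# *ZRESID-style values: residuals divided by an estimate of their
# standard deviation (here the root mean squared error of the model)
rmse = np.sqrt(np.sum(residuals ** 2) / (len(y) - X.shape[1]))
zresid = residuals / rmse

print("Any |standardized residual| > 3?", bool(np.any(np.abs(zresid) > 3)))
```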
Produce all partial plots: This option produces scatterplots of the residuals of the outcome variable and of each predictor when both variables are regressed separately on the remaining predictors.
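The same plot can be built by hand using exactly that logic: residualize the outcome on the other predictors, residualize the predictor of interest on those same predictors, and plot one set of residuals against the other. A minimal Python sketch with invented variable names; as the comment notes, the slope in such a plot equals that predictor's b in the full model.

```python
import numpy as np

def residuals_on(others, target):
    """Residuals of `target` after regressing it on `others` (plus intercept)."""
    A = np.column_stack([np.ones(len(target)), others])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ coef

rng = np.random.default_rng(7)
x1 = rng.normal(size=150)
x2 = rng.normal(size=150)
y = 1 + 0.5 * x1 + 0.3 * x2 + rng.normal(scale=0.8, size=150)

# Partial plot for x1: residualize both y and x1 on the remaining predictor(s)
ry = residuals_on(x2.reshape(-1, 1), y)
rx = residuals_on(x2.reshape(-1, 1), x1)

# The slope of ry on rx equals the b for x1 in the full model
slope = np.polyfit(rx, ry, 1)[0]
print("Slope in the partial plot for x1:", round(slope, 3))
# A scatterplot of rx against ry would be the partial plot itself
```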
We can use the correlation matrix to get a sense of the relationships between predictors and the outcome, and to take a preliminary look for multicollinearity. If there is no multicollinearity in the data then there should be no substantial correlations (r > 0.9) between predictors.
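That preliminary scan is easy to automate outside SPSS: compute the correlation matrix of the predictors and flag any pair whose correlation exceeds 0.9 in absolute value. A small Python sketch (the 0.9 cut-off comes from the passage above; the data are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
x2 = 0.95 * x1 + rng.normal(scale=0.1, size=100)   # nearly collinear with x1
x3 = rng.normal(size=100)

predictors = np.column_stack([x1, x2, x3])
names = ["x1", "x2", "x3"]

# Columns are variables, so rowvar=False gives the predictor correlation matrix
corr = np.corrcoef(predictors, rowvar=False)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.9:
            print(f"Possible multicollinearity: r({names[i]}, {names[j]}) = {corr[i, j]:.2f}")
```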
The standardized versions of the b-values are sometimes easier to interpret (because they are not dependent on the units of measurement of the variables). The standardized beta values (in the column labeled Beta, βi) tell us the number of standard deviations that the outcome changes when the predictor changes by one standard deviation.
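One way to see where those standardized betas come from: each beta is the unstandardized b multiplied by the ratio of the predictor's standard deviation to the outcome's standard deviation, which is the same as refitting the model after z-scoring every variable. A Python sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(loc=50, scale=10, size=200)          # predictor in raw units
y = 3 + 0.4 * x + rng.normal(scale=8, size=200)     # outcome in raw units

# Unstandardized b from an ordinary regression
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

# Standardized beta: b scaled by sd(predictor) / sd(outcome) ...
beta_from_b = b1 * x.std(ddof=1) / y.std(ddof=1)

# ... which matches the slope obtained after z-scoring both variables
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
Z = np.column_stack([np.ones_like(zx), zx])
beta_from_z = np.linalg.lstsq(Z, zy, rcond=None)[0][1]

print(round(beta_from_b, 4), round(beta_from_z, 4))   # the two agree
```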
The Bayesian estimate of b can be found in the columns labeled Posterior Mode and Posterior Mean.
If your model has several predictors then you can't really beat a summary table as a concise way to report your model. As a bare minimum, report the betas along with their standard errors and confidence intervals.
Experimental research, for example, takes advantage of the fact that if we systematically manipulate what happens to people we can make causal inferences about the effects of those manipulations.
The simplest form of experiment is one in which we split the sample into an experimental group and a control group that is identical to the experimental group in all respects except the one expected to have an impact on the outcome.
Systematic manipulation of the independent (predictor) variable is a powerful tool because it goes one step beyond merely observing variables.
When we enter nominal variables into SPSS it doesn’t matter what numbers we choose, because SPSS converts them into sensible values behind the scenes. But the numbers we choose to represent our categories in a mathematical model are important: they change the meaning of the resulting b-values.
There are different ‘standard’ ways to code variables (which we won’t get into here), one of which is to use dummy variables.
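As a concrete (non-SPSS) illustration of dummy coding, the Python sketch below turns an invented three-group variable into k − 1 = 2 columns of 0s and 1s, with the dropped group acting as the baseline that the resulting b-values are compared against:

```python
import pandas as pd

# An invented three-group categorical predictor
df = pd.DataFrame({"group": ["control", "treatment_a", "treatment_b",
                             "control", "treatment_a", "treatment_b"]})

# k = 3 groups -> k - 1 = 2 dummy variables; 'control' is dropped and
# therefore becomes the baseline against which the b-values are interpreted
dummies = pd.get_dummies(df["group"], prefix="group", drop_first=True)

print(pd.concat([df, dummies.astype(int)], axis=1))
```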
Historically, people have thought about comparing two means as a separate test, and SPSS preserves this convention in its menu structure.
Independent t-test: This test is used when you want to compare two means that come from conditions consisting of different entities (this is sometimes called the independent-measures or independent-means t-test). Paired-samples t-test: This test, also known as the dependent t-test, is used when you want to compare two means that come from conditions consisting of the same or related entities.
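Although the book runs these tests through SPSS's menus, the same pair of tests can be illustrated in Python with SciPy (the data here are invented): ttest_ind for different entities in each condition, ttest_rel for the same entities measured in both conditions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Independent design: different entities in each condition
group_a = rng.normal(loc=30, scale=5, size=25)
group_b = rng.normal(loc=34, scale=5, size=25)
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Repeated-measures design: the same entities in both conditions
before = rng.normal(loc=50, scale=8, size=25)
after = before + rng.normal(loc=3, scale=4, size=25)
t_rel, p_rel = stats.ttest_rel(before, after)

print(f"Independent t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"Paired t      = {t_rel:.2f}, p = {p_rel:.3f}")
```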
Two samples of data are collected and the sample means calculated. These means might differ by either a little or a lot.
If the samples come from the same population, then we expect their means to be roughly equal.
We expect means from two random samples to be very similar. We compare the difference between the sample means that we collected to the difference between the sample means that we would expect to obtain (in the long run) if there were no effect (i.e., if the null hypothesis were true).
Most test statistics are a signal-to-noise ratio: the ‘variance explained by the model’ divided by the ‘variance that the model can’t explain’
effect divided by error.
When comparing two means the ‘model’ that we fit (the effect) is the difference bet...
Means vary from sample to sample (sampling variation), and we can use the standard error as a measur...
Therefore, we can use the standard error of the differences between the two means as an estimate of the error in our model.
This equation compares the mean difference between our samples (D̄) to the difference that we would expect to find between population means (µD), relative to the standard error of the differences, SE(D̄): t = (D̄ − µD) / SE(D̄).
If we plotted the frequency distribution of differences between means of pairs of samples we’d get the sampling distribution of differences between means.
The standard deviation of this sampling distribution is called the standard error of differences.
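This sampling distribution is easy to simulate: repeatedly draw two random samples from the same population, record the difference between their means, and look at the spread of those differences. A Python sketch (the population values and sample size are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_difference(n=30, mu=100, sigma=15):
    """Difference between the means of two samples drawn from the same population."""
    a = rng.normal(mu, sigma, size=n)
    b = rng.normal(mu, sigma, size=n)
    return a.mean() - b.mean()

diffs = np.array([mean_difference() for _ in range(10000)])

# The differences centre on zero; their standard deviation is the
# standard error of differences
print("Mean of differences:", round(diffs.mean(), 3))
print("SD of differences (standard error):", round(diffs.std(ddof=1), 3))
print("Theoretical value sqrt(2) * sigma / sqrt(n):",
      round(np.sqrt(2) * 15 / np.sqrt(30), 3))
```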
A small standard error suggests that the difference between means of most pairs of samples will be very close to the population mean difference (in this case 0 if the null is true) and that substantial differences are very rare.
A large standard error tells us that the difference between means of most pairs of samples can be quite variable: although the difference between means of most pairs of samples will still be centered around zero, substantial differences from zero are more common.
the standard error is a good indicator of the size of the difference between sample means that we can e...
conditions under which scores are collected are not stable,
The standard error helps us to gauge this by giving us a scale of likely variability between samples.
the standard error of differences provides a scale of measurement for how plausible it is that an observed difference between sample means could be the product of taking two random samples from the same population.
D̄ is the mean of these difference scores.
The mean difference, D̄, represents the effect size,
we place this effect size within the context of what’s plausible for random samples by dividing by the standard error of differences.
The standard error of differences, SE(D̄), is likewise estimated from the standard deviation of differences within the sample (sD) divided by the square root of the sample size (N): SE(D̄) = sD / √N.
t is a signal-to-noise ratio or the systematic variance compared to the unsystematic variance.
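Putting the pieces together, the paired t-statistic can be computed directly from the difference scores: the mean difference divided by sD/√N. The Python sketch below (invented data) does this by hand and checks it against SciPy's ttest_rel.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
before = rng.normal(loc=40, scale=6, size=20)
after = before + rng.normal(loc=2, scale=5, size=20)

d = after - before                         # difference scores
d_bar = d.mean()                           # mean difference (the effect/signal)
se_d = d.std(ddof=1) / np.sqrt(len(d))     # standard error of differences (the noise)

t_manual = d_bar / se_d                    # signal divided by noise
t_scipy, p = stats.ttest_rel(after, before)

print(round(t_manual, 4), round(t_scipy, 4))   # the two values are identical
```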
If the experimental manipulation creates a difference between conditions, then we would expect the effect (the signal) to be greater than the unsystematic variation (the noise) and, at the very least, t will be greater than 1.
If the exact p-value for t is below the predetermined alpha value (usually 0.05), scientists take this to support the conclusion that the differences between scores are not due to sampling variation and that their manipulation (e.g., dressing up a robot as a human) has had a significant effect.
When we want to compare scores that are independent (e.g., different entities have been tested in the different conditions of your experiment) we are in the same logical territory as when scores are related.
When scores in two groups come from different participants, pairs of scores will differ not only because of the experimental manipulation reflected by those conditions, but also because of other sources of variance.
The difference between sample means is compared to the difference we would expect to get between the means of the two populations from which the samples come (µ1 − µ2): t = ((X̄1 − X̄2) − (µ1 − µ2)) / estimate of the standard error (10.8)
under the null hypothesis µ1 = µ2, which means that µ1 − µ2 = 0, and so µ1 − µ2 drops out of the equation, leaving us with: t = (X̄1 − X̄2) / estimate of the standard error (10.9)
The sampling distribution would tell us by how much we can expect the means of two (or more) samples to differ (if the null were true).
If the standard error is large then large differences between sample means can be expected; if it is small then only small differences between sample means are typical.
It is, therefore, straightforward to estimate the standard error of the sampling distribution for each population by using the standard deviation (s) and size (N) of each sample: SE ≈ s / √N (10.10)
Having converted to variances, we can take advantage of the variance sum law, which states that the variance of a difference between two independent variables is equal to the sum of their variances.
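That is the route to the independent t-statistic: estimate the variance of each mean's sampling distribution as s²/N, add the two variances (the variance sum law), take the square root to get the standard error of the difference, and divide the observed difference between means by it. A Python sketch with invented data; with equal group sizes this matches the standard pooled-variance independent t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
group1 = rng.normal(loc=30, scale=5, size=30)
group2 = rng.normal(loc=27, scale=5, size=30)

# Variance of each mean's sampling distribution: s^2 / N
var1 = group1.var(ddof=1) / len(group1)
var2 = group2.var(ddof=1) / len(group2)

# Variance sum law: variance of the difference = sum of the variances
se_diff = np.sqrt(var1 + var2)

t_manual = (group1.mean() - group2.mean()) / se_diff

# With equal group sizes this equals the pooled-variance independent t-test
t_scipy, p = stats.ttest_ind(group1, group2)
print(round(t_manual, 4), round(t_scipy, 4))
```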
Both the independent t-test and the paired-samples t-test are parametric tests and as such are prone to the sources of bias
The first table (Output 10.3) provides summary statistics for the two experimental conditions.
The value of the t-statistic is the same but has a positive sign rather than negative.