91 Cards in this Set
- Front
- Back
How do we determine causality (whether one event causes another)?
|
Association, Temporal Priority, Control of common-causal variables
|
|
Association
|
one way to determine whether there is a causal relationship
-is there a correlation between the independent and dependent variables? |
|
Temporal Priority
|
If event A occurs before event B, then A might be causing B.
If event A occurs after event B, then A cannot be causing B. |
|
causal statements
|
require ruling out the influence of common-causal variables that may produce the same relationship between the variables
|
|
experimental manipulations
|
the process through which the researcher rules out the possibility that the relationship between the independent and dependent variables is spurious
|
|
One-way experimental design
|
Has one IV that is manipulated
|
|
Levels
|
refers to the specific situations that are created within an IV manipulation
-in a one-way design they are called the experimental conditions |
|
equivalence
|
helps to eliminate the influence of common-causal variables
|
|
How do you create equivalence in your experimental design?
|
1. between-participant designs
2. repeated-measures designs |
|
Between-participants designs
|
-create equivalence
-have different but equivalent participants in each level of the experiment |
|
Repeated-measures designs
|
-create equivalence
-have the same people in each of the experimental conditions (also known as a "within-subjects" design) |
|
Random assignment to conditions
|
-most common way to create equivalence
-the level of the IV each participant will experience is determined randomly |
|
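To make the random-assignment idea concrete, here is a minimal sketch in plain Python with NumPy; the participant labels, condition names, and seed are all invented for illustration, not part of the original cards.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed only so the sketch is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]   # hypothetical participants
conditions = ["experimental", "control"]             # levels of the IV

# Shuffle the participants, then alternate through the conditions so the
# groups end up equal in size but membership is determined randomly.
shuffled = rng.permutation(participants)
assignment = {p: conditions[i % len(conditions)] for i, p in enumerate(shuffled)}

for person, condition in sorted(assignment.items()):
    print(person, "->", condition)
```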
Experimental Condition
|
Level of the IV in which the situation of interest was created
|
|
Control Condition
|
Level of the IV in which the situation was not created
|
|
limitations of designs with 2 levels
|
1. Difficulty telling which of the two levels is causing a change in the dependent measure
2. Difficulty drawing conclusions about the pattern of the relationship where the manipulation varies the strength of the IV |
|
Detecting Curvilinear Relationships
|
An experimental design with only two levels cannot detect curvilinear relationships
|
|
Curvilinear Relationships
|
increases in the IV cause increases in the DV at some points but cause decreases at other points
|
|
Analysis of Variance (ANOVA)
|
-compares the means of the dependent variable across the levels of an experimental research design
-analyzes the variability of the dependent variable
-if the condition means are equivalent, any differences among them are due only to chance
-if the manipulation has influenced the DV, there will be significantly more variability among the condition means than expected by chance |
|
One-way analysis of variance (ANOVA)
|
used to compare the means on a DV between two or more groups of participants who differ on a single IV
|
|
Between-groups variance
|
in ANOVA, a measure of the variability of the DV across the conditions
|
|
Within-group variance
|
in ANOVA, a measure of the variability of the DV within the conditions
|
|
F statistic Definition
|
in the ANOVA a statistic that assesses the extent to which the means of the experimental conditions differ more than would be expected by chance
|
|
Calculating the F-statistic
|
calculated as the ratio of the between-groups variance to the within-group variance
|
|
between-group and within-group variance estimates
|
the manipulation is judged to have had an effect when the between-groups variance is significantly greater than the within-group variance
|
|
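As a rough illustration of how the F statistic is formed from the two variance estimates, here is a minimal sketch with hypothetical scores for three levels of one IV; the manual ratio is checked against scipy.stats.f_oneway (SciPy assumed available).

```python
import numpy as np
from scipy import stats

# Hypothetical DV scores for three levels of one IV
groups = [np.array([3.0, 4, 5, 4]), np.array([6.0, 7, 6, 5]), np.array([8.0, 9, 7, 8])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k = len(groups)            # number of levels
n_total = all_scores.size

# Between-groups variance (mean square between), df = k - 1
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-group variance (mean square within), df = N - k
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n_total - k)

f_manual = ms_between / ms_within
f_scipy, p_value = stats.f_oneway(*groups)
print(f_manual, f_scipy, p_value)   # the two F values should match
```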
Factorial Designs
|
experimental designs that include more than one manipulated IV (most experimental research designs do)
|
|
Factor
|
the manipulated IV
|
|
Level
|
each condition within a particular IV
|
|
two-way factorial experimental design
|
-two manipulated factors
-each level of one IV occurs with each level of the other IV
-the research hypothesis makes a very specific prediction about the pattern of means expected to be observed on the DV |
|
Schematic Diagram of a Factorial Design
|
Greater than (>) and less than (<) signs show the expected relative values of the means
|
|
Marginal Means
|
When means are combined across the levels of another factor
|
|
Main Effect
|
Difference on the DV across the levels of any one factor, controlling for all other factors in the experiment
|
|
Interaction
|
When the influence of one IV on the DV is different at different levels of another IV
|
|
Simple Effect
|
the effect of one factor within one level of another factor
|
|
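To tie the last few cards together, here is a minimal sketch using hypothetical cell means for a 2 x 2 design (all values invented), showing how marginal means, main effects, and simple effects are computed and how unequal simple effects signal an interaction.

```python
import numpy as np

# Hypothetical cell means on the DV
#                 factor B: level 1, level 2
cell_means = np.array([[4.0, 8.0],   # factor A, level 1
                       [6.0, 2.0]])  # factor A, level 2

# Marginal means: means combined across the levels of the other factor
marginal_A = cell_means.mean(axis=1)   # one value per level of A
marginal_B = cell_means.mean(axis=0)   # one value per level of B

# Main effects expressed as differences between marginal means
main_effect_A = marginal_A[0] - marginal_A[1]
main_effect_B = marginal_B[0] - marginal_B[1]

# Simple effects of B within each level of A
simple_B_at_A1 = cell_means[0, 0] - cell_means[0, 1]
simple_B_at_A2 = cell_means[1, 0] - cell_means[1, 1]

# If the simple effects differ, an interaction is present;
# if they are opposite in sign, it is a crossover interaction.
print(marginal_A, marginal_B, main_effect_A, main_effect_B,
      simple_B_at_A1, simple_B_at_A2)
```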
Each main effect and each interaction has its own
|
-F-test
-degrees of freedom
-p-value |
|
Understanding Interactions
|
-use a line chart
-levels of one factor are indicated on the horizontal axis
-the DV is represented and labeled on the vertical axis
-points represent the observed mean on the DV in each of the experimental conditions
-lines connect the points for each level of the second IV |
|
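A minimal plotting sketch of the line chart just described, using matplotlib (assumed available) and the same hypothetical 2 x 2 cell means as the earlier sketch; factor names and values are invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical cell means on the DV for a 2 x 2 design
levels_A = ["A1", "A2"]        # levels of one factor on the horizontal axis
means_B1 = [4.0, 6.0]          # one line per level of the second IV
means_B2 = [8.0, 2.0]

plt.plot(levels_A, means_B1, marker="o", label="B1")
plt.plot(levels_A, means_B2, marker="o", label="B2")
plt.xlabel("Factor A")
plt.ylabel("Mean of the DV")   # DV labeled on the vertical axis
plt.legend(title="Factor B")
plt.show()   # parallel lines: main effects only; non-parallel lines: interaction
```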
Patterns with main effects only
|
lines are parallel
|
|
patterns with main effects and interactions
|
lines are not parallel
|
|
crossover interactions
|
An interaction in a 2x2 factorial design in which the two simple effects are opposite in direction
|
|
Interpretation of Main effects when interactions are present
|
when there is a statistically significant interaction between the two factors, the main effects of each factor must be interpreted with caution
-the presence of an interaction indicates that the influence of each of the two IVs cannot be understood alone
-instead, the main effects of each factor are said to be qualified by the presence of the other factor |
|
Three-way ANOVA
|
-involves 3 IVs
-each factor has 2+ levels
-the summary table includes:
*3 main effects
*3 two-way interactions
*1 three-way interaction |
|
The three-way interaction
|
tests whether all 3 variables simultaneously influence the dependent measure
-when a three-way interaction is found, the two-way interactions and the main effects must be interpreted with caution
-having identified that the 3 IVs together influence the DV, we need to be careful when considering any one of the variables on its own |
|
Cells of a factorial design diagram
|
show the conditions in the design
|
|
2 x 2 factorial design
|
-all participants are completing the same DV measure
-differences will be due to the influence of the IVs |
|
two-way interaction
|
interactions that involve the relationship between two variables, controlling for the third variable |
|
mixed factorial designs
|
some factors are between participants and some are repeated measures.
|
|
means comparisons
|
-conducted because a significant F value does not indicate which groups are significantly different from each other
-determine which condition means are significantly different from each other |
|
Pairwise Comparison
|
-any one condition mean is compared with any other condition mean
-it may not be appropriate to conduct a statistical test on every pair of condition means because there may be very many |
|
Experimentwise Alpha
|
-the probability of having made a Type I error in at least one of the comparisons
-as the number of comparisons increases, the experimentwise alpha also increases |
|
formula for experimentwise alpha
|
Ea = alpha x # of comparisons
|
|
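A quick arithmetic sketch of the experimentwise-alpha idea. The number of groups and the per-comparison alpha are made up; the "exact" formula assumes independent comparisons and is shown only alongside the card's approximation.

```python
# Hypothetical: 4 condition means, all pairwise comparisons tested at alpha = .05
k = 4
alpha = 0.05
n_comparisons = k * (k - 1) // 2          # 6 pairwise comparisons

# Approximation from the card: experimentwise alpha = alpha x number of comparisons
approx_ew_alpha = alpha * n_comparisons   # 0.30

# Probability of at least one Type I error if the comparisons were independent
exact_ew_alpha = 1 - (1 - alpha) ** n_comparisons   # about 0.26

print(n_comparisons, approx_ew_alpha, round(exact_ew_alpha, 3))
```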
Planned comparisons
(aka a priori comparisons) |
-only specific differences which were predicted by the research hypothesis are tested
-reduces experimentwise alpha |
|
Post hoc comparisons
|
-take into consideration that many comparisons are being made
-performed only if the F-test is significant
-reduces experimentwise alpha |
|
Complex Comparisons
|
-more than 2 means are compared at the same time
-usually conducted with "contrast tests"
-reduces experimentwise alpha |
|
contrast tests
|
statistical procedures used to make complex means comparisons
|
|
Correlational Research Designs
|
-used to search for and describe relationships among measured variables
|
|
Terminology for Correlational Designs
|
IV --> Predictor Variable
DV --> Outcome Variable |
|
Scatterplot
|
-uses a standard coordinate system
-horizontal axis = scores on the predictor (IV)
-vertical axis = scores on the outcome (DV)
-a point is plotted for each individual at the intersection of his or her scores on the two variables |
|
Regression Line
|
The straight line of "best fit" drawn through the points on a scatterplot
|
|
Regression Equation
|
Prediction of the DV from knowledge of one or more IVs
Y = mX + b |
|
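A minimal sketch of fitting a regression line and using the regression equation for prediction. The predictor/outcome names and values are invented, and scipy.stats.linregress is assumed to be available.

```python
import numpy as np
from scipy import stats

# Hypothetical predictor (IV) and outcome (DV) scores
study_hours = np.array([1, 2, 3, 4, 5, 6], dtype=float)
exam_score = np.array([55, 60, 64, 70, 73, 80], dtype=float)

# Line of "best fit": Y = mX + b
fit = stats.linregress(study_hours, exam_score)
m, b = fit.slope, fit.intercept

# Regression equation used to predict the outcome for a new predictor value
predicted = m * 4.5 + b
print(f"Y = {m:.2f}X + {b:.2f}; predicted score for X = 4.5: {predicted:.1f}")
```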
Linear Relationships
|
when the association between the variables on the scatterplot can be easily approximated with a straight line
|
|
Examples of Linear Relationships
|
1. height and weight
2. study time & memory errors |
|
Independent
|
When there is no relationship at all between the two variables shown on a scatterplot
|
|
Curvilinear Relationship
|
Relationships that are curved and change in direction shown on a scatterplot
|
|
Pearson Product-Moment Correlation Coefficient
|
-used to summarize and communicate the strength and direction of the association between two quantitative variables
-designated by "r"
-values range from -1.0 to +1.0
-direction is indicated by the sign: positive values = positive linear relationships, negative values = negative linear relationships
-strength is indexed by the absolute value of r (distance from 0) |
|
Interpretation of "r"
|
A significant r indicates there is a linear association between the variables
|
|
Coefficient of determination
|
-the proportion of variance measure for "r"
-r squared |
|
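A minimal sketch of computing r, its p-value, and the coefficient of determination for two hypothetical quantitative variables (all values invented; scipy.stats.pearsonr assumed available).

```python
import numpy as np
from scipy import stats

# Hypothetical scores on two quantitative variables
height = np.array([60, 62, 65, 68, 70, 72], dtype=float)
weight = np.array([115, 120, 140, 155, 160, 175], dtype=float)

r, p = stats.pearsonr(height, weight)   # r ranges from -1.0 to +1.0
r_squared = r ** 2                      # coefficient of determination

print(f"r = {r:.2f}, p = {p:.4f}, r^2 = {r_squared:.2f}")
```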
Restriction of Range
|
-occurs when most participants have similar scores on one of the variables being correlated
-the value of the coefficient is reduced and does not represent an accurate picture of the true relationship between the variables |
|
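A small simulation sketch showing how restriction of range shrinks the observed correlation; the data are simulated and the correlation strength and cutoff are chosen arbitrarily.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Simulate two variables with a built-in positive linear relationship
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(scale=0.7, size=500)

r_full, _ = stats.pearsonr(x, y)

# Keep only "participants" with similar (high) scores on x
keep = x > 1.0
r_restricted, _ = stats.pearsonr(x[keep], y[keep])

print(round(r_full, 2), round(r_restricted, 2))  # restricted-range r is typically much smaller
```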
Reporting Correlations and Chi-squared statistics
|
ex. r(N) = #, p < ##
N = sample size
# = correlation coefficient
## = p-value of the observed correlation |
|
Multiple Regression
|
Statistical analysis procedure using more than one predictor variable (IV) to predict a single outcome variable
|
|
Regression Analyses Provide...
|
-multiple correlation coefficient (R)
-regression coefficients or beta weights |
|
Multiple Correlation Coefficient "R"
|
The ability of all the predictor variables together to predict the outcome variable
|
|
Regression Coefficients
or Beta Weights |
Indicate the relationship between each of the predictor variables and the outcome variable
|
|
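A minimal sketch of multiple regression with two invented predictor variables, using ordinary least squares via numpy.linalg.lstsq. The coefficients shown are unstandardized (beta weights would come from standardizing the variables first), and R is recovered from the predicted values; all data and variable names are hypothetical.

```python
import numpy as np

# Hypothetical predictors and outcome
study_hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
sleep_hours = np.array([8, 7, 7, 6, 8, 5, 6, 7], dtype=float)
exam_score = np.array([52, 58, 61, 60, 72, 68, 75, 83], dtype=float)

# Design matrix: intercept column plus the two predictors
X = np.column_stack([np.ones_like(study_hours), study_hours, sleep_hours])
coefs, _, _, _ = np.linalg.lstsq(X, exam_score, rcond=None)

predicted = X @ coefs
ss_residual = np.sum((exam_score - predicted) ** 2)
ss_total = np.sum((exam_score - exam_score.mean()) ** 2)
R = np.sqrt(1 - ss_residual / ss_total)   # multiple correlation coefficient

print("intercept and regression coefficients:", np.round(coefs, 2))
print("multiple R:", round(float(R), 2))
```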
Correlational Research Cannot...
|
-be used to draw conclusions about the causal relationships among the measured variables
-correlation does not equal causation |
|
Reverse Causation
|
The causal direction is opposite what has been hypothesized
|
|
Reciprocal Causation
|
The two variables cause each other
|
|
Common-causal variables
|
variables not part of the research hypothesis cause both the predictor and the outcome variable
|
|
Spurious Relationship
|
The common-causal variable produces and "explains away" the relationship between the predictor and outcome variables
|
|
Extraneous Variables
|
Variables other than the predictor cause the outcome variable but do not cause the predictor variable
|
|
Mediating Variables
|
Variables caused by the predictor variable in turn cause the outcome variable
|
|
Longitudinal Research Design
|
-The same individuals are measured more than one time
-The time period between measurements is long enough that changes in the variables of interest could occur. |
|
Path Analysis
|
A form of multiple regression, often used with correlational data from longitudinal research designs, that assesses the relationships among a number of measured variables
|
|
Path Diagram
|
The results of a path analysis can be displayed visually in this form of diagram
|
|
Cross-Sectional Research Designs
|
-measure people from different age groups at the same time
-very limited in their ability to rule out reverse causation |
|
Structural Equation Analysis
(SEM) |
Tests whether the observed relationships among a set of variables conform to the theoretical prediction about how those variables should be causally related
|
|
Latent Variables
|
The conceptual variables in an SEM.
-the analysis is designed to assess both the relationships between the measured and the conceptual variables and the relationships among the conceptual variables
-include both the IV & DV |
|
Chi-squared Statistic
X-squared |
-must be used to assess the relationship between two NOMINAL variables
-technically known as the chi-squared test of independence
-calculated by using a contingency table, which displays the # of individuals in each of the combinations of the two nominal variables |
|
Contingency Table
|
table that displays the # of individuals in each of the combinations of the two nominal variables; used in the chi-squared test
|
|
Calculating chi-squared
|
-determine the expected frequency (fe) for each cell of the contingency table
|
|
Expected Frequencies (fe)
|
**Chi-squared testing**
-fe = (row marginal x column marginal) / N |
|
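A minimal sketch of the expected-frequency formula on a hypothetical 2 x 2 contingency table (counts invented), cross-checked against scipy.stats.chi2_contingency (SciPy assumed available).

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table for two nominal variables (rows x columns)
observed = np.array([[30, 10],
                     [20, 40]])

N = observed.sum()
row_marginals = observed.sum(axis=1, keepdims=True)
col_marginals = observed.sum(axis=0, keepdims=True)

# Expected frequency for each cell: (row marginal x column marginal) / N
fe = row_marginals * col_marginals / N

chi2, p, df, expected = stats.chi2_contingency(observed, correction=False)
print(fe)            # matches the "expected" table returned by SciPy
print(chi2, df, p)   # df = (rows - 1) x (columns - 1)
```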
Reporting of Chi-Squared statistics
|
x^2 (df, N = ) = #, p < ##
df = degrees of freedom
N = sample size
# = chi-squared value
## = associated p-value |
|
Calculating degrees of freedom for Chi-squared tests
|
for the chi-squared test of independence: df = (# of rows - 1) x (# of columns - 1)
(by contrast, one-way ANOVA uses between df = # of levels of the IV - 1 and within df = N - # of levels)