Saturday, August 4, 2012

PRETEST - POSTTEST CONTROL GROUP DESIGN

Here we have added a control group to the one-group pretest-posttest design. If we can assume that both groups experienced the same history between observations (that is, there is no selection-by-history interaction), then history is controlled in the sense that it should affect the O1-to-O2 difference identically in the two groups. Likewise, maturation, testing, instrumentation, and regression are controlled in the sense of having the same effects in both groups. Selection and the selection-by-maturation interaction are controlled by assigning subjects to the two groups in a way (such as random assignment) that makes us confident they were equivalent prior to the experimental treatment (and will mature at equivalent rates). Unless we are foolish enough to employ different measuring instruments for the two groups, the selection-by-instrumentation interaction should not be a problem. Of course, the testing-by-treatment interaction remains a threat to the external validity of this design.
Statistically, one can compare the two groups' pretest means (independent-samples t or a nonparametric equivalent) to reassure oneself, one hopes, that the assignment technique did produce equivalent groups; sometimes one gets an unpleasant surprise here. For example, when I took experimental psychology at Elmira College, our professor divided us (randomly, he thought) by the first letter of our last names, putting those with letters in the first half of the alphabet into one group and the rest into the other. Each subject was given a pretest of knowledge of ANOVA. Then all were given a lesson on ANOVA, one group taught with one method and the other group with a different method. Then we were tested again on ANOVA. The professor was showing us how to analyze these data with a factorial ANOVA when I, to his great dismay, demonstrated that the two groups differed significantly on the pretest scores. Why? We can only speculate, but during class discussion we discovered that most of those in the one group had taken statistics more recently than those in the other. Apparently, course registration requests at Elmira were processed in alphabetical order, so those with names in the first half of the alphabet got to take the statistics course earlier, while those who have suffered alphabetical discrimination all their lives were closed out of it and had to wait until the next semester. Having just finished that course before starting the experimental class (which was taught only once a year), those of us at the end of the alphabet had ANOVA fresh in our minds.
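The pretest check is easy to sketch in a few lines of Python. This is a minimal illustration with made-up scores (in practice one would reach for a packaged routine such as scipy.stats.ttest_ind); the function name and data here are hypothetical:

```python
import math
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Independent-samples t (Welch's version, which does not assume
    equal variances) comparing two groups' means."""
    ma, mb = mean(group_a), mean(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    na, nb = len(group_a), len(group_b)
    se = math.sqrt(va / na + vb / nb)  # standard error of the mean difference
    t = (ma - mb) / se
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

# Hypothetical pretest scores for two "randomly" assigned groups --
# an unpleasant surprise like the Elmira one: the means differ by 10 points
pretest_1 = [12, 15, 11, 14, 13, 16, 12]
pretest_2 = [22, 25, 21, 24, 23, 26, 22]

t, df = welch_t(pretest_1, pretest_2)  # a large |t| flags nonequivalent groups
```

A t this far from zero would tell you, before any treatment is delivered, that the assignment procedure failed to produce equivalent groups.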
One can analyze data from this design with a factorial ANOVA (time being a within-subjects factor, group a between-subjects factor), as my experimental psychology professor did, in which case the primary interest is in the statistical interaction: did the difference between groups change across time (after the treatment), or, from another perspective, was the change across time different in the two groups? The interaction analysis is exactly equivalent to the analysis one would obtain by simply computing a difference score for each subject (posttest score minus pretest score) and then using an independent-samples t to compare the two groups' means on those difference scores. An alternative analysis is a one-way analysis of covariance (ANCOVA), employing the pretest scores as the covariate and the posttest scores as the criterion variable; that is, do the groups differ on the posttest scores after we have removed from them any effect of the pretest scores? All three of these analyses (factorial ANOVA, t on difference scores, ANCOVA) should be more powerful than simply comparing the posttest means with t.
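These analyses are easy to demonstrate numerically. The sketch below, with entirely made-up scores, computes (1) the mean difference in gain scores, which carries the same information as the time-by-group interaction, and (2) ANCOVA-style adjusted posttest means using a pooled within-group slope; all names and data are hypothetical:

```python
from statistics import mean

# Hypothetical pretest/posttest scores for a treatment and a control group
pre_t  = [10, 12, 11, 13, 9, 12]
post_t = [16, 19, 17, 20, 15, 18]   # treated subjects gain 6-7 points
pre_c  = [11, 10, 12, 13, 10, 12]
post_c = [13, 12, 14, 15, 12, 14]   # control subjects gain 2 points

# (1) Difference-score analysis: one gain score per subject.  Comparing
# the two groups' mean gains tests the same effect as the time-by-group
# interaction in the mixed factorial ANOVA.
gain_t = [b - a for a, b in zip(pre_t, post_t)]
gain_c = [b - a for a, b in zip(pre_c, post_c)]
mean_gain_diff = mean(gain_t) - mean(gain_c)

# (2) ANCOVA idea: fit a single within-group regression slope of posttest
# on pretest, then compare the groups' posttest means after adjusting
# each toward the grand pretest mean.
def cross_products(x, y):
    """Return (Sxx, Sxy): within-group sums of squares and cross-products."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxx, sxy

sxx_t, sxy_t = cross_products(pre_t, post_t)
sxx_c, sxy_c = cross_products(pre_c, post_c)
slope = (sxy_t + sxy_c) / (sxx_t + sxx_c)   # pooled within-group slope

grand_pre = mean(pre_t + pre_c)
adj_t = mean(post_t) - slope * (mean(pre_t) - grand_pre)
adj_c = mean(post_c) - slope * (mean(pre_c) - grand_pre)
adjusted_diff = adj_t - adj_c   # group difference with pretest effect removed
```

With random assignment the two groups' pretest means are close, so the adjusted difference lands near the gain-score difference; ANCOVA's payoff in that case is the reduced error variance, hence the greater power noted above.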

Thanks

Written by: Unknown // 10:43 PM