Much controversy exists regarding proper methods for the selection of variables in confounder control. Many authors condemn any use of significance testing, some encourage such testing, and others propose a mixed approach. This paper presents the results of a Monte Carlo simulation of several confounder selection criteria, including change-in-estimate and collapsibility test criteria. The methods are compared with respect to their impact on inferences regarding the study factor's effect, as measured by test size and power, bias, mean-squared error, and confidence interval coverage rates. In situations in which the best decision (whether or not to adjust) is not obvious, the change-in-estimate criterion tends to be superior, though significance-testing methods can perform acceptably if their significance levels are set much higher than conventional levels (to values of 0.20 or more).
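To make the two families of criteria concrete, the following minimal Python sketch applies them to a single simulated dataset. It does not reproduce the paper's simulation design or its collapsibility test: the data-generating values, the 10% change-in-estimate threshold, the use of a covariate-significance test as a stand-in for the collapsibility test, and the reliance on statsmodels are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data-generating setup (not the paper's design):
# binary confounder Z, binary exposure X associated with Z, binary outcome Y.
n = 2000
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.3 + 0.3 * z)           # exposure prevalence depends on Z
logit_p = -1.0 + 0.4 * x + 0.8 * z           # outcome depends on X and Z
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Crude model: outcome on exposure only.
crude = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
# Adjusted model: outcome on exposure plus the candidate confounder.
adj = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)

beta_crude = crude.params[1]   # crude log odds ratio for X
beta_adj = adj.params[1]       # adjusted log odds ratio for X

# Change-in-estimate criterion: adjust for Z if the adjusted and crude
# log odds ratios differ by more than 10% (threshold is illustrative).
adjust_cie = abs(beta_adj - beta_crude) / abs(beta_crude) > 0.10

# Significance-test criterion (simplified stand-in for the paper's
# collapsibility test): retain Z if its coefficient is significant
# at alpha = 0.20, the liberal level discussed in the abstract.
adjust_sig = adj.pvalues[2] < 0.20

print(f"crude lnOR = {beta_crude:.3f}, adjusted lnOR = {beta_adj:.3f}")
print(f"change-in-estimate criterion says adjust: {adjust_cie}")
print(f"significance test (alpha = 0.20) says adjust: {adjust_sig}")
```

In a full simulation along the lines described above, this decision step would be repeated over many generated datasets, and the resulting estimators (adjusted, unadjusted, or selected) would be compared on test size and power, bias, mean-squared error, and confidence interval coverage.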