Choose an Evaluation Design
What is a Research Design?
A research design is simply a plan for conducting research: a blueprint for how you will conduct your program evaluation. Selecting an appropriate design and working through a well-thought-out plan provides a strong foundation for a successful and informative program evaluation. Building your evaluation on anything less intentional and structured may create unforeseen obstacles in your evaluation process.
Research Design vs. Data Collection Method
As you continue working through the Evaluation Guide, please remember not to confuse research design with data collection methods. No research design requires a specific data collection method; these are two distinct and separate concepts. You will select a data collection method after a design has been identified.
Selecting a Design
Before you decide on the most appropriate evaluation design, it is important that you are clear about the primary evaluation questions. Once you have defined the most important evaluation questions, there are several designs that may be able to adequately answer your evaluation question. You can select a specific design by considering the following:
- Which design will provide me with the information I want?
- How feasible is each option?
- How valid and reliable do my findings need to be?
- Are there any ethical concerns related to choosing a specific design?
- How much would each option cost?
Types of Research Designs
Below we describe four types of research designs that offer suitable options depending on your specific needs and research questions.
- Pre-experimental designs
- Experimental designs
- Quasi-experimental designs
- Ex post facto designs
Definitions
You can click on the tabs at the bottom of the page to learn more about each type of design, but below are some definitions that may be helpful before you move on.
- Dependent variable (DV) – A dependent variable is the primary variable of interest in a study. Researchers seek to determine how dependent variables are influenced by changes in independent variables.
- Independent variable (IV) – In an experiment, the independent variable is the variable being manipulated or changed. In non-experimental studies, independent variables are observed variables that may influence a variable of interest (the dependent variable).
- Treatment / intervention – In an experiment, the treatment or intervention is the main independent variable that is being manipulated. The treatment or intervention is something that only participants in the experimental group are given. Participants in the control or comparison groups do not receive the treatment or intervention.
- Treatment or experimental group – A group of study participants who have been exposed to a specific treatment or intervention.
- Control group – A group of study participants who have not been exposed to a particular treatment. The term is typically used in experimental designs with random assignment.
- Comparison group – A group of study participants who have similar attributes and characteristics as a treatment or experimental group. This term is typically used in quasi-experimental designs where random assignment has not been used.
- Pretest – A test administered prior to a specific treatment or intervention. This provides a baseline measure that can be compared to subsequent tests taken after an intervention or treatment.
- Posttest – A test administered after a specific treatment or intervention. A posttest can help determine how study participants have responded to a treatment or intervention.
- Randomization (random assignment) – The process of randomly placing study participants in a treatment or control/comparison group.
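To make the idea of random assignment concrete, here is a minimal sketch in Python. The participant IDs and the `randomly_assign` helper are invented for illustration; real evaluations often use stratified or blocked randomization rather than a simple shuffle.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle eligible participants and split them evenly into
    a treatment group and a control group."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical pool of six eligible participants
treatment, control = randomly_assign(
    ["P01", "P02", "P03", "P04", "P05", "P06"], seed=42
)
```

Because every participant has an equal chance of landing in either group, the two groups are expected to be similar on both measured and unmeasured characteristics.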
Pre-experimental designs are the simplest type of design because they do not include an adequate control group. The most common pre-experimental design is the pretest/posttest design. A pre- and post-intervention design involves collecting information only on program participants. This information is collected at least twice: once before participants receive the treatment (baseline information) and immediately after participants receive the treatment.
A pretest/posttest design can be effective for evaluating:
- Changes in participants’ knowledge (e.g. about college or financial aid)
- Changes in participants’ attitudes towards college
- Changes in participants’ grades and test scores
This type of design is the least rigorous in establishing a causal link between program activities and outcomes. However, depending on how rigorous the evidence needs to be, findings from this design may be enough to indicate your program is making a difference, particularly when outcomes improve soon after the program is implemented and alternative explanations can be systematically ruled out.
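A pretest/posttest analysis usually starts by computing each participant's change score and the average change across the group. The scores below are hypothetical, invented for illustration:

```python
# Hypothetical pretest and posttest scores for the same five participants,
# in the same order (e.g., a financial-aid knowledge quiz)
pretest = [55, 60, 48, 72, 65]
posttest = [63, 68, 55, 75, 70]

# Change score for each participant, then the group's mean change
changes = [post - pre for pre, post in zip(pretest, posttest)]
mean_change = sum(changes) / len(changes)
```

A positive mean change is consistent with program impact, but because there is no control group, it cannot rule out maturation, practice effects, or other events between the two measurements.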
Characteristics of Pre-Experimental Designs
- Not an authentic experimental design
- Design does not control for many extraneous factors
- Subject to many threats to validity
- Typically conducted for exploratory purposes
- Usually convenient and financially feasible
The three types of pre-experimental designs are:
- The one-shot case study
- A one group, pretest / posttest study
- The static group comparison study
The image below provides more specific insight on these designs.
Image taken from: http://allpsych.com/researchmethods/preexperimentaldesign.html
If you need more substantial evidence, the pretest/posttest design is not recommended. The best evidence can be achieved through an experimental design, the "gold standard" in program evaluation and research. A good experimental design can show a causal relationship between participation in your program and key student outcomes. The key to this design is that all eligible program participants are randomly assigned to the treatment or control group. When random assignment is used, it is assumed that the participants in both the control and treatment groups have similar attributes and characteristics.
The purpose of a true experimental design is to control bias. In a true experiment, differences in the dependent variable can be directly attributed to changes in the independent variable and not to other variables.
Characteristics of Experimental Design
- Researcher controls the manipulation of the intervention or treatment
- Participants are randomly assigned to groups
- Intervention or treatment occurs prior to observation of the dependent variable
Strengths
- High internal validity
- Causal relationships between variables can be found
Limitations
- Limited external validity (generalizability) due to the controlled experimental environment
- Ethical concerns
The image below provides a model of several experimental designs. The important distinction to note is the “R” for random assignment.
Image taken from: http://allpsych.com/researchmethods/trueexperimentaldesign.html
If you are implementing a program in which random assignment of participants to treatment and control groups is not possible, a quasi-experimental design may be your best bet.
A quasi-experimental design is very similar to an experimental design except it lacks random assignment. Depending on treatment and comparison group equivalency, evidence generated from these designs can be quite strong.
To conduct a quasi-experimental design, you will need to identify a suitable comparison group (i.e., a group of individuals or families who are similar to those participating in your program and can be monitored and tracked as a comparison group).
Characteristics of a Comparison Group
- Members of a comparison group may receive other types of services or no services at all.
- A comparison group should be similar to the treatment group on key factors that can affect your outcomes.
- Don't assume that the two groups are completely similar. You may have to collect data to control for potential differences as part of your statistical analyses.
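One common way to build a comparison group is to match each program participant to the most similar non-participant on key characteristics. The sketch below matches on grade level and prior GPA; the records and field choices are hypothetical, and real evaluations often use more sophisticated techniques such as propensity score matching:

```python
# Hypothetical records: (id, grade_level, prior_gpa).
# Match each treatment participant to the unused comparison candidate
# in the same grade with the closest prior GPA.
treatment = [("T1", 9, 3.1), ("T2", 10, 2.5)]
candidates = [("C1", 9, 3.0), ("C2", 9, 2.4), ("C3", 10, 2.6), ("C4", 10, 3.5)]

matches = {}
used = set()
for tid, grade, gpa in treatment:
    pool = [c for c in candidates if c[1] == grade and c[0] not in used]
    best = min(pool, key=lambda c: abs(c[2] - gpa))  # nearest GPA in same grade
    matches[tid] = best[0]
    used.add(best[0])
```

Matching only controls for the characteristics you match on; unmeasured differences between the groups can still bias the findings, which is why the guidance above recommends collecting data on potential confounders.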
Strengths
- Enables experimentation when random assignment is not possible
- Avoids ethical issues caused by random assignment
Limitations
- Does not control for extraneous variables that may influence findings
The image below shows several examples of quasi-experimental designs.
Image taken from: http://allpsych.com/researchmethods/quasiexperimentaldesign.html
If you are unable to conduct an experimental or quasi-experimental design and already have access to good, organized, and detailed student data, you may want to consider an ex post facto research design. Ex post facto ("after the fact") designs, also called causal-comparative designs, are non-experimental research designs that seek to determine the causes of existing differences. The ability to produce a quality evaluation with such a design is directly related to the quality and quantity of data readily available.
Characteristics of Ex Post Facto Designs
- The independent variable or treatment is not under the researcher’s control.
- The phenomenon of interest has already occurred at the time of observation or measurement.
- There is typically no control or comparison group.
Main weakness of the design: You can't determine causality due to the inability to control for rival hypotheses or explanations. Essentially, your analysis will be limited to the data that are available.
Strengths
- You can investigate research questions that are inappropriate for experimental designs.
- These designs are typically more logistically and financially feasible.
- You can pay more attention to context instead of seeking to control variables and the environment.
These designs are particularly effective when (Krathwohl, 1998, p. 538):
- Findings are based on a solid rationale that accurately predicted what to expect
- Variables were accurately operationalized
- Expectations (research hypothesis) were confirmed
- Findings cannot be otherwise explained
- Findings are consistent with previous research