1. Research questions should be well supported by evidence, and the evidence needs to be meaningful and trustworthy.
2. Research or hypothesis testing should be carefully designed to rule out alternative explanations.
3. The connections built between evidence and conclusions should be valid and reasonable.
For instance, suppose you need to find out whether providing students with metacognitive prompts immediately before reading can improve their reading comprehension. How would you set up an experiment to test this hypothesis (assume you have a set of metacognitive prompts and you know that high school students usually take 30 minutes to review and contemplate them)? Let's say you have two options:
Option 1: You recruit 80 high school students and randomly assign them to two groups: a treatment group and a control group. Each group has 40 participants. In the control group, you present a reading passage introducing American history and give your participants 30 minutes to read it. Then you test their reading comprehension through 10 multiple-choice questions. In the treatment group, you present the same reading passage and the same multiple-choice questions, but you ask the participants to contemplate the metacognitive prompts before they start reading. The design can be illustrated as:
Control: 1. read 2. test
Treatment: 1. prompts 2. read 3. test

Option 2: You recruit 80 high school students and randomly assign them to two groups: a treatment group and a control group. Each group has 40 participants. In the treatment group, you ask participants to (1) contemplate the metacognitive prompts, (2) read the same material, and (3) answer the 10 multiple-choice questions; in the control group, you ask participants to (1) read an irrelevant story for 30 minutes, (2) read the same material, and (3) answer the 10 multiple-choice questions. This procedure can be expressed as:
Control: 1. irrelevant reading 2. read 3. test
Treatment: 1. prompts 2. read 3. test
You plan to compare participants' reading comprehension scores to determine the effectiveness of presenting metacognitive prompts. Which option do you think has a better design?
I think Option 2 is better. In Option 1, participants in the control group have fewer tasks to complete than participants in the treatment group. After students in the treatment group have already spent 30 minutes working on the prompts, they may be tired by the time they read the material, which will influence their performance on the reading comprehension test. Thus, the first design DID NOT RULE OUT AN ALTERNATIVE FACTOR: FATIGUE. The second design takes this factor into consideration, so it is more scientifically sound, and the result it yields (that is, the evidence either supporting or not supporting the effectiveness of the prompts) will be more trustworthy.
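To make the random-assignment step concrete, here is a minimal sketch in Python; the participant IDs and the fixed random seed are hypothetical, introduced only for illustration:

import random

# Hypothetical IDs for the 80 recruited high school students.
participants = [f"S{i:02d}" for i in range(1, 81)]

# Simple random assignment: shuffle once, then split into two groups of 40.
random.seed(42)  # fixed seed only so the assignment can be documented and reproduced
random.shuffle(participants)
treatment_group = participants[:40]  # will contemplate the metacognitive prompts
control_group = participants[40:]    # will do the irrelevant reading (Option 2)

print(len(treatment_group), len(control_group))  # 40 40

Because each student has the same chance of being placed in either group, individual differences such as prior knowledge or reading ability should be roughly balanced across the two groups.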
Research questions can be addressed through quantitative and/or qualitative approaches, but both require scientific rigor. The goal of conducting quantitative research is to generalize your findings from your sample to the population through statistical testing. Using the same example, suppose you adopt the second data collection procedure and then compare participants' reading comprehension performance using a two-sample independent t-test, and you find that the participants in the treatment group show a statistically significant advantage over the participants in the control group. In this case, you can argue that the population of high school students represented by your sample would also benefit from the metacognitive prompts.
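As a minimal sketch of that comparison, assuming the comprehension scores (number of correct answers out of 10) have been collected for each group, the test could be run with SciPy's ttest_ind. The scores below are simulated placeholders, not real data:

import numpy as np
from scipy import stats

# Simulated placeholder scores: 40 students per group, 10 questions each.
rng = np.random.default_rng(0)
treatment_scores = rng.binomial(n=10, p=0.7, size=40)
control_scores = rng.binomial(n=10, p=0.6, size=40)

# Two-sample independent t-test (Welch's variant, which does not assume equal variances).
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A p-value below the chosen alpha level (commonly .05) would support the claim that the advantage observed in the sample reflects a real effect in the population rather than chance.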