a. Inter-rater Reliability
Inter-rater reliability is defined as the measure of the consistency with which two or more observers evaluate the same data using the same research instrument or tool (Gillespie & Chaboyer, 2013). It shows how similarly observers score items and how similarly they categorise observations such as behaviours and events.
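As an illustration (not drawn from the cited sources), one common statistic for inter-rater reliability with two raters and categorical codes is Cohen's kappa; the short Python sketch below uses invented ratings for demonstration only.

```python
# Hedged example: Cohen's kappa as one possible inter-rater statistic.
# The ratings are invented; kappa corrects raw percent agreement for the
# agreement expected by chance.
from sklearn.metrics import cohen_kappa_score

rater_a = ["pain", "no pain", "pain", "pain", "no pain", "pain"]
rater_b = ["pain", "no pain", "no pain", "pain", "no pain", "pain"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```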
b. Test-retest Reliability
Test-retest reliability is defined as the measure of the ability of a research tool or instrument to produce consistent results when the same tool or instrument is administered to the same participants under the same conditions on two or more occasions. In determining test-retest reliability, the scores on repeated testing are expressed as a correlation coefficient.
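For illustration, the sketch below (with invented scores) computes the correlation between two administrations of the same instrument, which is one conventional way to report test-retest reliability.

```python
# Hedged example: test-retest reliability expressed as a Pearson correlation
# between two administrations of the same instrument. Scores are invented.
from scipy.stats import pearsonr

time_1 = [12, 18, 25, 30, 22, 15, 28]  # first administration
time_2 = [14, 17, 26, 29, 21, 16, 27]  # second administration, same participants

r, p_value = pearsonr(time_1, time_2)
print(f"Test-retest correlation: r = {r:.2f}")
```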
c. Internal Consistency or Homogeneity
Internal consistency is defined as the relationship between all the results obtained using a single test or survey. Therefore, internal consistency is the degree to which a measuring instrument or tool measures the same construct and is commonly expressed using Cronbach's alpha coefficient (Tavakol & Dennick, 2011). The measure plays an important role in defining the consistency of the results and ensuring that the different items used to measure constructs deliver consistent results (Roberts & Priest, 2006).
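To illustrate how the coefficient is obtained, the sketch below computes Cronbach's alpha from an invented respondents-by-items score matrix using the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
# Hedged example: Cronbach's alpha from invented item scores.
import numpy as np

# rows = respondents, columns = items of the instrument
scores = np.array([
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)      # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```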
d. Reliability Score of 0.38
A reliability score reflects the amount of measurement error associated with a given test score. The score ranges from 0.00 to 1.00, with more reliable tests having higher scores. Therefore, a reliability score of 0.38 would mean that the test is not reliable, thereby diminishing confidence in the test tool.
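One way to see the practical effect of such a low score (an illustration based on classical test theory, not on the cited sources) is the standard error of measurement, SEM = SD * sqrt(1 - reliability); the standard deviation assumed below is invented.

```python
# Hedged example: standard error of measurement for a reliability of 0.38.
import math

reliability = 0.38
sd = 10.0                     # assumed (invented) standard deviation of test scores
sem = sd * math.sqrt(1 - reliability)
print(f"SEM: {sem:.1f}")      # about 7.9, i.e. most of the score spread may be error
```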
2. Validity
a. Content Validity
Content validity is a test used to assess the ability of an instrument to comprehensively capture all the contents or domains of a particular construct. The test plays an important role in determining whether a tool measures all the contents of a given construct (Gillespie & Chaboyer, 2013).
b. Construct Validity
i. Convergent Validity
Convergent validity is the degree to which two measures of constructs that should theoretically be related are in fact related. This validity test assists in establishing construct validity when using two different measurement protocols and methods of research (Trochim, 2006).
ii. Divergent or Discriminant Validity
Divergent validity is the degree to which two measures of constructs that should have no relationship are indeed unrelated. Like convergent validity, this test also helps in establishing construct validity when using two different measurement protocols and methods of research (Trochim, 2006).
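As an illustration (with invented scores, not data from the cited sources), convergent and divergent validity are often examined by correlating a new measure with a theoretically related measure, where a strong correlation is expected, and with a theoretically unrelated measure, where a weak correlation is expected.

```python
# Hedged example: convergent vs. discriminant correlations for a new measure.
from scipy.stats import pearsonr

new_anxiety_scale   = [10, 14, 22, 30, 18, 25, 12]
other_anxiety_scale = [11, 15, 20, 28, 19, 24, 13]  # theoretically related measure
shoe_size           = [38, 42, 37, 41, 40, 39, 43]  # theoretically unrelated measure

r_convergent, _ = pearsonr(new_anxiety_scale, other_anxiety_scale)
r_discriminant, _ = pearsonr(new_anxiety_scale, shoe_size)
print(f"Convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")
```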
c. Criterion-related or Concrete Validity
Criterion validity is the extent to which an assessment tool correctly relates a measure to an outcome. This validity reflects the use of an established criterion when creating a new measurement procedure for a particular construct (Lund Research, 2012).
i. Concurrent Validity
This is the extent to which an assessment tool correctly relates a measure and an outcome that are assessed at the same time. The test is used to assess the performance of a new tool in comparison with an established tool (Gillespie & Chaboyer, 2013).
ii. Predictive Validity