Sunday, November 6, 2011

Article 417: Eighteen Types of Validity in Doing Research

Validity of research is important: adequately constructed steps and measures are needed to give valid results. In the following you become acquainted with almost all of the types of validity checked in research. There are more than a dozen validity types; here you will find a dozen and a half:

1. Construct validity: It refers to the extent to which operationalizations of a construct (e.g. practical tests developed from a theory) do actually measure what the theory says they do. For example, to what extent is an IQ questionnaire actually measuring "intelligence"?

Construct validity evidence involves the empirical and theoretical support for the interpretation of the construct. Such lines of evidence include statistical analyses of the internal structure of the test including the relationships between responses to different test items. They
also include relationships between the test and measures of other constructs. As currently understood, construct validity is not distinct from the support for the substantive theory of the construct that the test is designed to measure. As such, experiments designed to reveal aspects of the causal role of the construct also contribute to construct validity evidence.

2. Convergent validity: It refers to the degree to which a measure is correlated with other measures that it is theoretically predicted to correlate with.

3. Discriminant validity: It describes the degree to which the operationalization does not correlate with other operationalizations that it theoretically should not be correlated with.
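In practice, convergent and discriminant validity are often assessed by correlating scores on the different measures. The following is a minimal sketch, using invented data: the instrument names, participants, and scores are hypothetical, chosen only so that two measures of the same construct (anxiety) correlate highly while a theoretically unrelated variable (shoe size) does not.

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for 8 participants on two anxiety questionnaires
# (the same construct) and on an unrelated variable (shoe size).
anxiety_scale_a = [10, 12, 9, 15, 11, 14, 8, 13]
anxiety_scale_b = [21, 23, 19, 29, 23, 27, 17, 26]
shoe_size       = [42, 43, 38, 40, 44, 37, 41, 39]

convergent_r = pearson(anxiety_scale_a, anxiety_scale_b)  # expected: high
discriminant_r = pearson(anxiety_scale_a, shoe_size)      # expected: near zero
```

A high correlation between the two anxiety scales supports convergent validity; a near-zero correlation with shoe size supports discriminant validity.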

4. Content validity: It is a non-statistical type of validity that involves "the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured" (Anastasi & Urbina, 1997, p. 114). For example, does an IQ questionnaire have items covering all areas of intelligence discussed in the scientific literature? Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits. A test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves subject matter experts (SMEs) evaluating test items against the test specifications. A test has content validity built into it by careful selection
of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification, which is drawn up through a thorough examination of the subject domain. Foxcroft et al. (2004, p. 49) note that by using a panel of experts to review the test specifications and the selection of items, the content validity of a test can be improved. The experts will be able to review the items and comment on whether the items cover a representative sample of the behavior domain.
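Expert panel judgments of this kind can be summarized numerically. One common approach, not mentioned above but widely used, is Lawshe's content validity ratio (CVR), which for each item compares the number of panelists rating it "essential" against the panel size. A sketch with a hypothetical panel of ten experts:

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2).

    n_essential: number of panelists rating the item "essential"
    n_experts:   total number of panelists (N)
    Ranges from -1 (no one rates it essential) to +1 (everyone does).
    """
    half = n_experts / 2
    return (n_essential - half) / half

# Hypothetical "essential" votes for three test items from a 10-expert panel.
ratings = {"item_1": 9, "item_2": 5, "item_3": 2}
cvr = {item: content_validity_ratio(n, 10) for item, n in ratings.items()}
# item_1 -> 0.8 (strong agreement), item_2 -> 0.0, item_3 -> -0.6
```

Items with low or negative CVR values are candidates for removal from the test, improving the match between the item pool and the content domain.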

5. Representation validity, also known as translation validity, is about the extent
to which an abstract theoretical construct can be turned into a specific practical test.

6. Face validity: It is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid. Face validity is very closely related to content validity. While content validity depends on a theoretical basis for judging whether a test assesses all domains of a certain criterion (e.g. does assessing addition skills yield a good measure of mathematical skills? To answer this, you have to know what different kinds of arithmetic skills mathematical skills include), face validity relates only to whether a test appears to be a good measure. This judgment is made on the "face" of the test, and thus can be made even by an amateur. Face validity is a starting point, but should never be taken as proof of validity for any given purpose; it is a type of validity in which the judgment is informal and subjective.

7. Criterion validity: Its evidence involves the correlation between the test and a
criterion variable (or variables) taken as representative of the construct. In other words, it compares the test with other measures or outcomes (the criteria) already held to be valid. For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion). If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data is collected first in order to predict criterion data collected at a later point in time, then this is referred to as predictive validity evidence.

8. Concurrent validity: It refers to the degree to which the operationalization correlates with other measures of the same construct that are measured at the same time. Returning to the selection test example, this would mean that the tests are administered to current employees and then correlated with their scores on performance reviews.

9. Predictive validity: refers to the degree to which the operationalization can predict (or correlate with) other measures of the same construct that are measured at some time in the
future. Again, with the selection test example, this would mean that the tests are administered to applicants, all applicants are hired, their performance is reviewed at a later time, and then their scores on the two measures are correlated.
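Concretely, predictive validity evidence is usually summarized as a correlation between the earlier test scores and the later criterion scores. A minimal sketch of the selection-test example, using hypothetical data (the scores and variable names are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: selection-test scores at hiring time, and the same
# employees' performance ratings collected one year later.
test_scores       = [55, 60, 65, 70, 75, 80, 85, 90]
later_performance = [2.1, 2.4, 2.3, 3.0, 3.2, 3.1, 3.8, 3.9]

predictive_r = pearson(test_scores, later_performance)
```

The closer the coefficient is to 1, the stronger the predictive validity evidence. Note that in real settings only hired applicants can be followed up, and this restriction of range typically attenuates the observed correlation.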

10. Experimental validity: The validity of the design of experimental research studies
is a fundamental part of the scientific method, and a concern of research ethics. Without a valid design, valid scientific conclusions cannot be drawn. There are several different kinds of experimental validity.

11. Conclusion validity: One aspect of the validity of a study is statistical conclusion validity, the degree to which conclusions reached about relationships between variables are justified. This involves ensuring adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures. Conclusion validity is only concerned with whether there is any kind of relationship at all between the variables being studied; it may be only a correlation.
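One standard "appropriate statistical test" for deciding whether an observed correlation reflects any relationship at all is the t test for a correlation coefficient, with t = r·√((n − 2)/(1 − r²)) on n − 2 degrees of freedom. A sketch with illustrative numbers (r = 0.5 in a sample of 27 is an invented example, not from the article):

```python
import math

def correlation_t_statistic(r, n):
    """t statistic for testing H0: the population correlation is zero (df = n - 2)."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

t = correlation_t_statistic(0.5, 27)  # observed r = 0.5 in n = 27 cases
# The two-tailed critical value for df = 25 at alpha = .05 is about 2.06
# (from a t table), so t of roughly 2.89 exceeds it: the relationship is
# unlikely to be chance alone, though it remains only a correlation.
```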

12. Internal validity is an inductive estimate of the degree to which conclusions about causal relationships can be made (e.g. cause and effect), based on the measures used, the research setting, and the whole research design. Good experimental techniques, in which the effect of an independent variable on a dependent variable is studied under highly controlled conditions, usually allow for higher degrees of internal validity than, for example, single-case designs.

13. Intentional validity: To what extent did the chosen constructs and measures adequately assess what the study intended to study?

14. External validity concerns the extent to which the (internally valid) results of a study can be held to be true for other cases, for example to different people, places or times. In other words, it is about whether findings can be validly generalized. If the same research study was conducted in those other cases, would it get the same results?

15. Ecological validity: It is the extent to which research results can be applied to real-life situations outside of research settings. This issue is closely related to external validity but covers the question of to what degree experimental findings mirror what can be observed in the real world (ecology = the science of the interaction between an organism and its environment). Ecological validity is partly related to the issue of experiment versus observation. Typically in science, there are two domains of research: observational (passive) and experimental (active). The purpose of experimental designs is to test causality, so that you can infer that A causes B or that B causes A. But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational: you can only conclude that A occurs together with B. Both techniques have their strengths and weaknesses.

16. Diagnostic validity: In clinical fields such as medicine, the validity of a diagnosis, and
associated diagnostic tests or screening tests, may be assessed. In regard to tests, the validity issues may be examined in the same way as for psychometric tests as outlined above, but there are often particular applications and priorities. In laboratory work, the medical validity of a scientific finding has been defined as the 'degree of achieving the objective' - namely of answering the question which the physician asks. An important requirement in clinical diagnosis and testing is sensitivity and specificity - a test needs to be sensitive enough to detect the relevant problem if it is present (and therefore avoid too
many false negative results), but specific enough not to respond to other things (and therefore avoid too many false positive results).
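Sensitivity and specificity follow directly from the counts of true and false positives and negatives. A minimal sketch with hypothetical screening results (the patient counts are invented for illustration):

```python
def sensitivity(tp, fn):
    """Proportion of actual positives the test detects (few false negatives)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of actual negatives the test correctly rules out (few false positives)."""
    return tn / (tn + fp)

# Hypothetical screening study: 100 patients with the condition, 100 without.
tp, fn = 90, 10  # patients with the condition: detected vs. missed
tn, fp = 80, 20  # patients without it: correctly cleared vs. false alarms

sens = sensitivity(tp, fn)  # 0.9: detects 90% of true cases
spec = specificity(tn, fp)  # 0.8: clears 80% of healthy patients
```

A clinically useful test balances the two: raising the detection threshold usually trades sensitivity for specificity, and vice versa.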

