Tuesday, 6 September 2011

Uses of Factor Analysis in Scale Development and Validation

1. Item analysis. Factor analysis can be used to create subscales of items in a test. For example, in a job satisfaction scale, we might find several factors corresponding to satisfaction with the work itself, supervision, pay, and so forth. We could use the analysis to delete items based on the following criteria (a code sketch applying them follows the list):

1. Low final communality (fails to load highly on any factor).

2. Small loading on the proper factor (e.g., an item from the work scale doesn't load on the work factor).

3. Large loadings on the wrong factor (e.g., an item from the work scale loads highly on the supervision factor).
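
To make these criteria concrete, here is a minimal sketch in Python applying them to simulated job-satisfaction items. Everything in it is hypothetical: the data are generated, the item names are made up, and the .30 communality cutoff is an arbitrary illustration rather than a fixed rule. It uses scikit-learn's FactorAnalysis (the varimax rotation option requires scikit-learn 0.24 or later).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: three latent factors (work, supervision, pay),
# four good items per subscale plus one deliberately bad work item.
F = rng.normal(size=(n, 3))
names, cols = [], []
for j, prefix in enumerate(["work", "sup", "pay"]):
    for i in range(4):
        names.append(f"{prefix}{i + 1}")
        cols.append(0.8 * F[:, j] + 0.6 * rng.normal(size=n))
names.append("work5")
cols.append(0.1 * F[:, 0] + rng.normal(size=n))   # barely related to anything

X = np.column_stack(cols)
X = (X - X.mean(0)) / X.std(0)                    # standardize the items

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
L = fa.components_.T                              # item-by-factor loadings
h2 = (L ** 2).sum(axis=1)                         # final communalities

for name, row, comm in zip(names, L, h2):
    flag = "  <- criterion 1: low communality" if comm < 0.30 else ""
    print(f"{name:6s} loadings={np.round(row, 2)} h2={comm:.2f}{flag}")
# Criteria 2 and 3 (small loading on the intended factor, large loading on
# a wrong factor) are read off the printed loading matrix.
```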

Some people advise against using factor analysis on items, for several reasons. One reason is that we often get factors that correspond to characteristics of the response distributions rather than to content. For example, we may get factors that correspond to easy and hard items, or factors of positively and negatively worded items simply because a few respondents missed the NOT in some of the negative items. Another reason is that the distributions of responses and errors cannot be even approximately normal when a variable has only nine or fewer possible values, which matters if we are going to use maximum likelihood estimation or significance tests.

In my opinion, you certainly have to watch out for bogus factors. However, when the factors correspond to meaningful content differences, factor analysis is a very powerful tool for creating multiple scales with high internal consistency and good discriminant validity. High internal consistency will result if you choose items that all have high loadings on the same factor (there is a mathematical relation between the loadings and coefficient alpha; see the sketch below). If you delete items that load on the wrong factor, you promote discriminant validity.
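
To unpack that parenthetical claim: under a one-factor model with standardized items, the covariance between any two items is the product of their loadings, so coefficient alpha can be written from the loadings alone. A minimal sketch, with arbitrary loading values chosen for illustration:

```python
import numpy as np

def alpha_from_data(X):
    """Cronbach's alpha computed directly from an items matrix (rows = persons)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def alpha_from_loadings(lam):
    """Alpha implied by a one-factor model with standardized items,
    where cov(item_i, item_j) = lam_i * lam_j and each item variance is 1."""
    lam = np.asarray(lam, dtype=float)
    k = lam.size
    off = lam.sum() ** 2 - (lam ** 2).sum()   # sum of all inter-item covariances
    return k / (k - 1) * off / (k + off)

print(alpha_from_loadings([0.8] * 4))   # ~0.88: high loadings -> high alpha
print(alpha_from_loadings([0.4] * 4))   # ~0.43: low loadings -> low alpha

# Check against data simulated from the first model:
rng = np.random.default_rng(0)
f = rng.normal(size=2000)
X = 0.8 * f[:, None] + 0.6 * rng.normal(size=(2000, 4))
print(alpha_from_data(X))               # close to the 0.88 above
```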

2. Scale validation. Once we have developed tests, we can factor analyze a series of tests to see whether they conform to the expected pattern of relations. This is, of course, relevant to construct validation. We expect tests that purport to measure the same construct to load on the same factor, and different factors to emerge for different constructs.
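
As an illustration of checking the expected pattern, here is a sketch that factor analyzes six test scores rather than individual items: three simulated tests of one hypothetical construct and three of another. Tests of the same construct should share a rotated factor.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 400

# Hypothetical battery: three tests of construct A, three of construct B.
a, b = rng.normal(size=(2, n))
scores = np.column_stack(
    [0.8 * a + 0.6 * rng.normal(size=n) for _ in range(3)]
    + [0.8 * b + 0.6 * rng.normal(size=n) for _ in range(3)]
)
scores = (scores - scores.mean(0)) / scores.std(0)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(scores)
print(np.round(fa.components_.T, 2))
# Expected pattern: tests 1-3 load on one rotated factor, tests 4-6 on
# the other; a test loading on the wrong factor is evidence against it.
```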

Lots of people have factor analyzed multitrait-multimethod (MTMM) matrices. A matrix that conforms to the Campbell and Fiske criteria will show factors that correspond to traits. Method variance will show up as method factors. Messy, hard-to-interpret factors signal other measurement problems. A toy simulation of this appears below.
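
Here is that toy version in code: simulate three traits each measured by three methods, give the trait components more weight than the method components, and factor the nine measures. The weights are arbitrary choices for illustration; with trait variance dominating, the rotated factors line up with traits, and swapping the weights makes method factors emerge instead.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 600
traits = rng.normal(size=(n, 3))
methods = rng.normal(size=(n, 3))

# measure (t, m) = strong trait part + weaker method part + error
X = np.column_stack([
    0.7 * traits[:, t] + 0.3 * methods[:, m] + 0.5 * rng.normal(size=n)
    for t in range(3) for m in range(3)
])
X = (X - X.mean(0)) / X.std(0)

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))   # rows ordered t1m1, t1m2, ..., t3m3
# With trait weight 0.7 vs. method weight 0.3, the three rotated factors
# correspond to traits; swap the weights and they correspond to methods.
```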

kartik prakash (FIN grp-6)

13139
