General guidelines for interpreting the Student Instructional Rating Survey data:
- Smaller-enrollment classes tend to receive higher ratings than larger-enrollment classes, but the differences are often statistically insignificant. Classes with fewer than 15 students received the highest ratings. Centra and Creech (1976) studied five thousand classes and found that smaller classes have higher ratings. Although the correlation (-0.09) between class size and rating is statistically significant, it is also very small; therefore the validity of the ratings is not significantly affected by class size (Williams and Ory, 1992). Centra and Creech also found that very large classes were frequently rated highly. This may be due to the instructor’s skill in teaching large groups, the instructor’s special efforts to make materials clear, or opportunities to discuss the course content in small groups in recitation sections or online discussions.
- Results with low response rates should be treated with caution. If there are fewer than 10 responses, the data need to "be interpreted with particular caution" (Cashin, 1995). The same caution applies to any course in which the response rate was less than half of the enrollment.
- Student motivation matters, but it is very hard to control for (Question #8, "I had a strong prior interest…"). Both class size and student motivation can contribute to bias, yet neither may be under the control of the instructor. Instructors teaching large classes of unmotivated students, for instance large required introductory classes, may be rated lower than instructors teaching small classes of motivated students, all other things being equal.
- There are regularities in the data based on disciplinary area. Many research reports have shown that certain disciplines tend to be rated more highly than others (Cashin, 1990; Cashin and Sixbury, 1992). Arts and humanities courses are rated higher than social sciences and biological sciences, which are rated higher than business and computer sciences, followed by mathematics, engineering, and physical sciences.
- Improved student learning, or students’ perception that they "learned a lot in the class," improves the ratings (Question #7, "I learned a great deal…"). Since a primary purpose of teaching is student learning, it is important that there is a high positive correlation between student ratings and student learning (Feldman, 1989).
Braskamp, L.A., and Ory, J.C. (1994). Assessing Faculty Work. San Francisco: Jossey-Bass.
Cashin, W.E. (1990). Students do rate different academic fields differently. In Theall, M. and Franklin, J. (Eds.), Student Ratings of Instruction: Issues for Improving Practice. New Directions for Teaching and Learning. no. 43. San Francisco: Jossey-Bass.
Cashin, W.E., and Sixbury, G.R. (1992). Comparative Data by Academic Field. IDEA Technical Report No. 8, Kansas State University.
Cashin, W.E. (1995). Student Ratings of Teaching: The Research Revisited. IDEA Paper No. 32, Center for Faculty Evaluation & Development, Kansas State University, September. Retrieved from http://www.theideacenter.org/sites/default/files/Idea_Paper_32.pdf
Centra, J.A., and Creech, F.R. (1976). The Relationship Between Students, Teachers, and Course Characteristics and Student Ratings of Teacher Effectiveness. Princeton, N.J.: Educational Testing Service.
Centra, J.A. (1994). Current Issues in Evaluating and Improving College Teaching. Paper presented at the annual meeting of the American Educational Research Association (AERA), Atlanta, April.
Feldman, K.A. (1978). Course characteristics and college students' ratings of their own teachers: what we know and what we don't. Research in Higher Education, 9, 199-242.
Feldman, K.A. (1989). The association between student ratings of specific instructional dimensions and student achievement: Refining and extending the synthesis of data from multi-section validity studies. Research in Higher Education, 30, 583-645.
Williams, R., and Ory, J.C. (1992). A Further Look at Class Size, Discipline Differences and Student Ratings. Unpublished manuscript, Office of Instructional Resources, University of Illinois at Urbana-Champaign.