Decoding Learning Gains: Measuring Outcomes and the Pivotal Role of the Major and Student Backgrounds

By Gregg Thomson and John Aubrey Douglass. CSHE.5.2009 (May 2009)


Abstract:

Throughout the world, interest in gauging learning outcomes at all levels of education has grown considerably over the past decade. In higher education, measuring “learning outcomes” is viewed by many stakeholders as a relatively new method for judging the “value added” of colleges and universities. The ability to accurately measure learning gains is also viewed as a diagnostic tool for institutional self-improvement. This essay compares the methodology and potential uses of three tools for measuring learning outcomes: the Collegiate Learning Assessment (CLA), the National Survey of Student Engagement (NSSE), and the University of California’s Undergraduate Experience Survey (UCUES). In addition, we examine 2008 UCUES responses from seniors who entered as freshmen on six self-reported educational outcomes: analytical and critical thinking skills, writing skills, reading and comprehension skills, oral presentation skills, quantitative skills, and skills in a particular field of study. This initial analysis shows that campus-wide assessments are generally not valid indicators of learning outcomes, and that self-reported gains at the level of the major are perhaps the best indicator we have, thus far, for assessing the value-added effects of a student’s academic experience at a major research university. UCUES appears to be the better approach for assessing and reporting learning outcomes: it offers more extensive academic engagement data and a much wider range of demographic and institutional data, and therefore an unprecedented opportunity to advance our understanding of self-reported learning outcomes in higher education and the extent to which such reports can serve as indirect but valid measures of positive educational outcomes.
At the same time, the apparent differences in learning outcomes across the undergraduate campuses of the University of California, absent controls for differences in campus composition, illustrate some of the limitations of self-reported data.