Dean Diehl and his team are perfectly willing to be held accountable for concrete results, so long as they get to define how those results are measured. Diehl et al. claim that the most important measure of quality in higher education is the six-year graduation rate. But what does the awarding of a BA degree really mean? Who certifies its quality? The very people who are getting rich at public expense by awarding that degree! (Dean Diehl earns $270K per annum.) If you think that makes sense, I have a housing development in Nevada that I’d like to sell you.
What is a bachelor’s degree? It is an entirely arbitrary boundary. Does it take exactly 120 semester credits (40 courses) to become a competent engineer? And, by sheer coincidence, we’re supposed to believe, it takes exactly forty courses to be a competent art historian, economist, pharmacist and accountant? Exactly forty courses are required, no matter what the field? Perhaps in some cases and for some students, forty courses are better than thirty, but for many other students, twenty or thirty courses may be plenty. When a student “drops out” of college, he or she may do so for perfectly good reasons. In many cases, the courses and requirements that remain to be completed for the B.A. are just not worth the additional time and cost.
Why have Diehl and other university administrators proposed this form of accountability? Because they know that they can raise the graduation rates without disturbing the status quo in the slightest. They don’t need to make professors work harder or smarter at teaching. All they have to do is encourage them to drop academic standards still further. If every student earns an A in every course and is given up to six years to finish, a 100% graduation rate is easy to reach. Especially if the administrators bribe the students to stay with better food, bigger dorm rooms, fancier swimming pools and rec facilities, and more exciting athletics programs. Our colleges are turning into little Club Meds for twenty-somethings, with education as an optional extra, burdening the graduates with a yoke of inescapable student debt as the price for six years of fun and games.
We must hold state universities accountable, but not for something as intrinsically meaningless as graduation rates. Instead, let’s measure directly what students have actually learned. In other words, what’s needed is some kind of exit exam. But our experience with the TAKS exams in K-12 education has been dreadful, a colossal failure. The TAKS standards reward those schools that are able to get almost all students over a very low threshold. “Teaching to the test” is bad because the test itself is bad, measuring only minimal competency.
We need a different kind of exit exam for higher education. There is a wonderful model to follow, one tested by the centuries, proven by experience: the end-of-course or “Honours School” examinations at Oxford and Cambridge. These examinations are all essay exams, created and graded by university faculty members.
How is this proposal different from the status quo? The difference is simple, but crucial: all of the grading is carried out in a “double blind” fashion. The graders don’t know the identities of the exam-takers, and the exam-takers don’t know the identities of the examiners. Each exam is graded by two or three examiners. In Texas, the exams could be conducted at the System level (one set of exams for all students in the UT System, another for all in the A&M System, and so on), with all students studying a given subject (like chemistry or economics) at any System campus being tested simultaneously.
The exams should take place at two stages: after the first or second year, covering the general education, common core subjects (English composition, algebra, American government, American history, fundamentals of natural and social science, fine arts and humanities), and then again just prior to graduation, in the student’s major subject.
The exam results could be used to create something like Oxford’s Norrington Table: a table published each year, ranking the various campuses by exam results in each subject. Which campus did best in physics, history, management, and so on? In addition, the results could be used to evaluate the quality of the curriculum. Which elective courses contributed most to students’ success? Finally and most importantly, the results could be used to evaluate the quality of instructors. Which instructors contributed most to the success of their students, controlling for the students’ abilities upon admission (as measured by scores on standardized tests, class rank, high school GPA, etc.)? The measure of the “value added” by each instructor would replace the unreliable and easily manipulated student evaluations as the key measure of teaching success.
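To make the “value added” idea concrete, here is a minimal sketch of one simple way it could be computed (the instructor names, scores, and choice of a single admission measure are purely hypothetical, and a real system would use a richer statistical model): fit a least-squares line predicting exam scores from an admission score, then average each instructor’s residuals, so a positive number means that instructor’s students beat expectations.

```python
from collections import defaultdict

def value_added(records):
    """records: list of (instructor, admission_score, exam_score).

    Fits exam = a + b * admission by ordinary least squares over all
    students, then averages each instructor's residuals. A positive
    value means that instructor's students scored above what their
    admission credentials alone would predict.
    """
    xs = [r[1] for r in records]
    ys = [r[2] for r in records]
    n = len(records)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    residuals = defaultdict(list)
    for instructor, x, y in records:
        residuals[instructor].append(y - (a + b * x))
    return {inst: sum(rs) / len(rs) for inst, rs in residuals.items()}

# Entirely hypothetical data: (instructor, admission score, exam score).
data = [("Smith", 1100, 72), ("Smith", 1300, 85),
        ("Jones", 1100, 65), ("Jones", 1300, 78)]
print(value_added(data))  # Smith's students beat the fit; Jones's trail it.
```

Here Smith and Jones teach students with identical admission profiles, so the residual averages isolate the difference in their students’ exam performance rather than the difference in who enrolled.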
The costs for these exams would be minimal. Tests could be written and graded by existing faculty members, as part of their professional responsibilities. Tests could be taken in existing computer labs, with tests sent electronically to examiners (with student identities masked). Zero net cost, but huge benefits.
Exit exams of this kind would radically transform the culture on campus. UT-Austin would change overnight from the country’s number 1 party school to the number 1 study school. College teachers would be forced to shift their energies and their ingenuity from useless research to effective teaching. The faculty in each subject area would have to consult with one another and define precisely the skills, knowledge and concepts that constitute competency in that subject. Texas would be propelled overnight into the vanguard of higher education reform.