
FROM SLOs TO ASSESSMENT & ACCOUNTABILITY

Student learning outcomes & assessment: Is the goal “return on investment” or a thoughtful population?

by Madeleine Murphy, CSM, SLO Coordinator

It’s been a little over a decade since CSM faculty gathered in the large back room of the old, tatty Building 5 to hear about a major new accreditation requirement. It went by the name of Student Learning Outcomes Assessment, and it consisted of three main components: defining exactly what knowledge, skills and abilities we wanted students to leave our courses with (SLOs), measuring what they’d actually learned (data collection), and using the data to look for possible improvements (assessment). Oh, and we had to document each step, so that a casual inquirer – say, a student, or a member of a visiting accreditation team – could see, at a glance, what we were up to.

Over the subsequent decade, SLOs have become a major feature of accreditation. They dominate Program Review, which now asks us to discuss curricular innovations by linking them to SLO assessments. Everything has defined outcomes, and the outcomes are all linked into a kind of institutional learning plan, a map of the student’s journey in which everything must have its appointed place. It’s not enough to say that successful MUS 100 students should end up knowing how to “write and recognize written major, minor, and perfect simple intervals;” we must also explain how, exactly, this contributes to their general education.

SLO requirements keep changing, and they’re about to change again. When ACCJC comes to visit in 2019, it will want to see that we are disaggregating our SLO data. PRIE will do the actual disaggregating, of course, but to make this possible, faculty will need to collect student data much as we collect course grades: for every student, every semester, with each result tied to a specific student G-number. For some of us, this is a game-changer.

So: it seems like a good idea, poised as we are at the brink of change, to take stock of SLOs as a whole. How have they played out at CSM? Where did they come from? And where should we go next?

I’m an optimist. We are a dynamic college, bristling with initiatives that promote student learning. I’m sure we can make SLOs more meaningful, and less onerous. But before we get to visions for a brighter future, we need to take an honest look at where we are.

CSM faculty experiences with SLOs

When I took over as CSM’s SLO coordinator in Spring 2015, I wanted to hear from as many faculty as possible about their experiences with SLOs. I knew what we in the English department thought about them (as useful as a chocolate teapot, and three times as much work to keep them intact). But maybe we were outliers. So I interviewed thirty-four faculty SLO contacts, and here’s what I found: with very few exceptions, we have gotten next to no use out of SLOs.

The reasons won’t surprise most readers. First, it turns out that data about student learning doesn’t tell us anything we don’t already know. We seemed to be trying everything: surveys, group grading, capstone assignments, pre- and post-quizzes – and yet almost no one had found a use for SLOs.

This was, in fact, how many of us had felt at the meeting ten years ago when SLOs were first mooted. The whole process felt redundant. We already had SLOs, having revised our course outlines years earlier, so that the objectives were expressed as student learning outcomes. As for collecting data about student learning – it’s called “grading,” and represents around forty percent of our work. Did ACCJC think we never gave a quiz? And we’d always assessed and overhauled our programs. It wasn’t clear what was new here.

SLOs don’t lead to “success stories”

Ten years later, it still isn’t. As coordinator, I recently completed an ACCJC update document which asked, amongst other things, for some “SLO success stories.” I wanted to write about some of the wonderful initiatives that had come about because someone identified a student need, got administrative support, and saw it through: Puente, Project Change, Umoja, Mana, Writing In The End Zone, teaching circles, the learning support centers, the Basic Skills initiative, the Honors Project, Summer Bridge, Family Science Day, Year One – and those are just the ones I happened to know about; no doubt there are many more. What were all these activities, if not creative ways to engage students of all kinds, and promote interdisciplinary learning? But not one of these arose out of a study of learning outcomes data. All SLO data can do is help us spot who isn’t learning what, and frankly, this is usually pretty obvious already.

Second, and most frustratingly, SLOs have taken up thousands of hours. They’re hard to write and difficult to measure, and each revision triggers a new passage through the Committee on Instruction. They don’t always map clearly; in some programs, like English, everything is connected to everything else, so it’s not so much a map as a spiderweb. Other disciplines, like Music or Art, have to manufacture a case for their subject by claiming spurious connections to GE-SLOs. Faculty SLO contacts spend hours hassling colleagues for data that no one gets any use out of. This all takes so much time – which means less time for students, peer mentoring, research, and the curricular revision that SLOs are supposed to facilitate.

For many courses, SLOs don’t make sense

Finally, we don’t all have the same kind of “outcomes.” SLOs suit some disciplines, usually ones where faculty are measuring something fairly specific: Spanish, or real estate law. (Though faculty in most of these disciplines already have better ways of gauging their effectiveness, like success rates on licensure exams.) But for many courses, SLOs don’t really make sense at all. Many disciplines promote attitudes that can’t be measured at the end of a semester – a love of art, or civic-mindedness, or an appreciation of diversity. A lot of teaching is like planting a seed: who knows when it will bloom? Ethical and critical thinking are habits of mind, not skills like driving or speaking French, and we don’t acquire them in discrete, measurable steps.

Overall, then, faculty pretty much dislike SLOs, and for excellent reasons. They contribute nothing, they reduce the scope of education, and they take up valuable time. But a few faculty did report good experiences at other campuses; and even in English, we did find SLOs a useful way to streamline the curriculum. There is something, perhaps, to build on.

Problems and solutions in higher education: Assessment and accountability

So – where did all this come from?

We complain about ACCJC. But they didn’t invent SLOs; the SLO requirement represents a compromise between different movements in higher education, all focused on the need for colleges to reform the way they assessed their work.

It was teachers, in fact, who first expressed dissatisfaction with the status quo. College, they pointed out, with its fragmented general education curriculum, is training students in “bulimic learning.” Students rush from one required course to another, cramming and regurgitating information, retaining almost nothing. This isn’t learning at all, but a gesture, a ritual representation of learning. Faculty teach in their separate silos; what students learn in one course is rarely reinforced in another; and thus, students don’t end up with a coherent educational experience. Their credits remain a heap of lights, some bright and some dark, but never strung together to shed some permanent light on the mind. Unsurprisingly, students disengage, looking for any available shortcut to a decent grade. The ultimate expression of disengagement is cheating, and this is rife. Glum studies, like Arum and Roksa’s 2011 book Academically Adrift, revealed that most students make almost no measurable improvement in their first two years of college.

All of this rings true to most of us. We’ve all taught students who not only didn’t remember anything from their previous classes, but couldn’t remember the instructor’s name. (“Um, he was tall – and I think he had a beard?”) Many students seem fixated on the idea that each discipline is its own thing; what happens in ECON 101 stays in ECON 101. They are often surprised, and sometimes a bit put out, when faculty introduce other ideas into class discussions. I’ve read evaluations in which students complained, disapprovingly, that their teacher was really teaching history, or political science, or biology, and not English like he or she was supposed to. Having been educated in an entirely different system in the U.K., I have always found something a bit bizarre in the way students compile their general education curriculum. They look like people in a Weight Watchers program putting together a meal: three points of humanities, two points of math/science, two points of arts, and one P.E. What a way to approach one’s education! All my friends and I ever asked ourselves, back in the U.K., was, “What subject do I really like?”

Assessment leads to collaboration

These concerns sparked the assessment movement, which focused on goals most of us would probably approve of: institutional clarity, coherence in courses and programs – but most of all, collaboration. Assessment evangelists stressed the need for faculty to emerge from their classrooms and make connections, to work together, so that what students learn in Philosophy 100 continues to enrich their understanding of what they go on to study in Political Science, or Math, or Nursing. This is what the assessment movement meant by outcomes: yes, Frank got an A in your class, but what has he taken with him?

In fact, we have a history at CSM of fostering this kind of interdisciplinary collaboration. We have, as noted earlier, a lot of learning communities; an Interdisciplinary Studies department, currently home to the Honors seminar; and a Center for Academic Excellence, which encourages initiatives by faculty, but also staff and administration, to “enhance pedagogy and student support through innovation and collaboration.” Everyone I spoke to was enthusiastic about the idea of getting together with colleagues from other departments and disciplines to work on creative approaches to improving student learning. (Many departments consist of a single faculty member, full-time or adjunct, so they’re especially keen on joint activities.) Of course, the problem is finding the time. Assessment itself, however, seems very appealing.

But SLOs don’t really feel like assessment, do they? They focus, it seems, on hard results, on quantifiable and measurable improvements. They emphasize the magic of data, with the implication that unless we can express it as a percentage, our professional judgement and experience don’t count. Trying to evaluate our work as teachers using SLO methods sometimes feels like using a measuring tape to figure out the health of your marriage.

Fears of an “education crisis” lead to the need for “accountability”

This is because academics weren’t the only ones worried about higher education. In the culture at large, the perception has been growing, for the last fifteen years, that education – K-12, but also college – is in crisis. The conversation goes something like this: Our education system used to be great, but has recently declined dramatically. Faculty are listlessly waving through students who can’t read, write or think; accreditation agencies are waving through institutions that aren’t preparing students adequately. Unless we start insisting on some real results, and hold our institutions accountable, our economy – perhaps even this great Republic – is doomed.

Like a herpes virus, fears about public education seem always to lurk somewhere in the public discourse. When conditions are right – when the Japanese start making cars, or the Chinese start making everything else, and we stop looking like Number One – these anxieties break out, causing us all pain and dismay. The most recent flare-up really began with the George W. Bush administration. After raking K-12 education over the coals with “No Child Left Behind,” his Secretary of Education, Margaret Spellings, said it was “time we turn this elephant [higher education] upside down and take a look at it.”

In 2006, the Spellings Commission released a report entitled A Test of Leadership: Charting the Future of U.S. Higher Education. Like the college professors who had spearheaded the assessment movement ten years earlier, the Commission concluded that students weren’t learning. But its prescription focused on accountability. Faculty must test regularly, using methods as objective as possible, whether students really are learning what they are supposed to learn, and use this data for planning and improvement. Teachers themselves need to be more open to innovation and change. Like steel manufacture or the railroads before it, the Commission observed, higher education “has become what, in the business world, would be called a mature enterprise: increasingly risk-averse, at times self-satisfied, and unduly expensive.” In other words: Wake up and smell the globalization. Teachers are falling down on the job and need to have their feet held to the fire.

But we weren’t the only ones not doing our job properly. Accreditation agencies, too, came in for severe criticism. Accreditors were looking at inputs, like student-to-teacher ratio, governance procedures, and policies. What did any of this matter, if students weren’t learning anything? Instead of operating as gatekeepers for quality higher education, most accreditation agencies had become, as Arne Duncan said, “the watchdog that doesn’t bite.” So accreditation agencies, too, had to be held accountable for holding us accountable. It was the Department of Education that instituted the two-year rule, a mandate that requires colleges to correct deficiencies within two years. And agencies that did not get ready to sink their teeth into uncooperative institutions could expect dire consequences. “At risk,” said Barbara Beno, defending some of ACCJC’s unpopular verdicts, “is the commission’s recognition.” We think of accreditors as peer evaluators, but in Washington D.C., they are expected to act as police. Even the impeccably progressive Elizabeth Warren likened accreditors to the pre-2008 SEC – too cozy with the sector they are supposed to hold accountable.

Corporations demand “return on investment,” not a thoughtful, informed population

Teachers are deeply suspicious of this kind of rhetoric, and they’re right. It reflects a bottom-line approach, a desire to see quantifiable and immediate results from learning, and a readiness to blame teachers for not achieving the impossible. It carries with it an implicit definition of a college education: as a kind of manufacturing process, one that takes raw students through specific and definable steps, and turns them into participants in the global economy. It sees education not as an end in itself, but as a means to an end. It comes from politicians and parents who demand ROI (the magical “return on investment”) on tuition dollars, and from the students themselves, most of whom come to college because they have been told that it is the only path to a better-paid job. And by “ROI,” no one means a life-long interest in Persian poetry, or a clearer public understanding of science, or a thoughtful, well-informed population. They mean a better-paid job.

None of this, unfortunately, is going away. We see it all over the culture; everyone seems to accept that the goal of education is to train the workforce. A Harvard professor reports students leaving her seminar on history and literature, having been texted by angry parents who insist that the students not waste their time. Business-related majors now account for over one-third of all majors, and it’s not because one-third of the student body plans to open a business. Many in the business sector feel that they have a proprietary interest in public education, because, as one businessman argued, “businesses are the primary consumers of the output of our schools, so it’s a natural alliance.” (The veteran teacher and education blogger Peter Greene calls this the “wrongest sentence ever” in education reform debates. “Students are not output…. Students are not consumer goods…. the purpose of education is NOT simply to prepare young humans to be useful to their future employers.”) Private corporations and philanthropists have become passionately interested in education. No surprise that of the nineteen people on the Commission, only six were professors or college administrators, while most of the others came from large corporations (Boeing, IBM, and the ubiquitous Microsoft) or edupreneurs like Kaplan Learning. No surprise, too, that the Commission emphasized the need to make room for the kind of “innovations” that offer opportunities for private interests to get involved in the financial behemoth that is education.

What can we do?

Here’s one thing we can’t do: We can’t throw our keyboards to the ground and start an “Occupy TracDat” movement. The call for accountability and assessment didn’t come from our own administrators, nor even from ACCJC, but from the United States Department of Education, and the culture standing behind it. So whatever happens to ACCJC, SLOs aren’t going away.

But we can, and should, do a lot.

Make collecting data easier

First, at CSM at least, we can make this process much less of a chore. The biggest source of grief right now is the way we collect and record SLO data. Surely we can improve this. We collect data on learning all the time – it’s part of our job. How we record this data, and store it in a database, is an administrative problem we can solve as we go. Right now, though, there must be ways to use what we already do, or to implement painless ways to collect data.
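
What might “using what we already do” look like in practice? Here is one minimal sketch, assuming a department already exports its gradebook as a spreadsheet, one row per student, with a G-number and a score on whichever assignment it has chosen to represent an outcome. Everything here – the file names, column headings, course, and pass threshold – is hypothetical; the point is only that the per-student, per-semester records ACCJC wants could be generated automatically from grading we already do, rather than collected by hand.

    import csv

    # Hypothetical input: a gradebook export, one row per student, with
    # columns "g_number" and "score" (0-100) for the chosen assignment.
    GRADEBOOK_EXPORT = "mus100_quiz3_fall2016.csv"  # hypothetical file name
    SLO_RECORDS = "mus100_slo1_fall2016.csv"        # hypothetical output file
    COURSE, SLO_ID, SEMESTER = "MUS 100", "SLO-1", "Fall 2016"
    PASS_THRESHOLD = 70  # hypothetical cutoff for "met the outcome"

    with open(GRADEBOOK_EXPORT, newline="") as src, \
         open(SLO_RECORDS, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst,
            fieldnames=["g_number", "course", "slo_id", "semester", "met_outcome"],
        )
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "g_number": row["g_number"],
                "course": COURSE,
                "slo_id": SLO_ID,
                "semester": SEMESTER,
                # Record only whether the student met the outcome,
                # not the raw grade.
                "met_outcome": "yes" if float(row["score"]) >= PASS_THRESHOLD else "no",
            })

Something like this could run once per course, per semester, with no faculty effort beyond the grading we already do; whether a single score threshold is the right proxy for an outcome is exactly the kind of judgment call that would stay with departments.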

Build a culture of assessment

Second, we can build a culture of assessment. Perhaps we could set aside a day in the calendar, each semester, for collaborative projects? We’ve got a new division to provide academic support for this kind of interdisciplinary, inter-constituency project (Academic Support and Learning Technologies). Maybe ASLT could put out a newsletter featuring some of the more noteworthy projects. I’d love to hear more about what my colleagues are doing. I expect I don’t know the half of it. These are just some possibilities we’re going to think about at CSM.

There are two important reasons for embracing assessment. First, the kinds of things we’re talking about are useful, and meaningful, and fun. Focusing on assessment would allow us to put our many existing activities, like learning communities, squarely in their proper context: activities that support and enhance student learning.

What is college for?

But it’s also our way of taking some ownership back over education. To define an outcome is, by extension, to define what we mean by learning. The real question, at bottom, is this: What is college for? The government, the business sector, even students seem to think college exists solely to train the workforce, and improve our nation’s economic standing. But we can offer a different answer, I think. We can tell much better stories about what college can do for our students.

Here’s one of my favorites. In a New York Times article a couple of years ago, the actor Tom Hanks looked back at his two years at Chabot College. By the current standards of SLO assessment, he didn’t fare too well. He dropped classes he wasn’t prepared for, endured classes he loathed, and spent a lot of time goofing around trying to pick up girls. But he also picked up unexpected benefits: riveting lectures, a strategy for making outlines, an ability to speak in public, and other bits and pieces which “rippled through [his] professional pond.” The college, it turned out, changed his life in ways he could not have anticipated. “I drove past the campus a few years ago with one of my kids,” Hanks concluded, “and summed up my two years there this way: ‘That place made me what I am today.’”