Friday, February 5, 2010

Learners as Subjects, Learning as Process


I had a recent discussion thread going with colleagues from my discipline (anthropology) who were raising and struggling with the practical question "how do I keep students from using their cell phones in class?" Related questions were raised about other similar technologies (laptops, iPods, etc.).

My first reaction in the thread was to ask "Why would you want to do this?!" (That is, why keep them from using such tools?) But as I tried to engage people in conversation about rethinking "class," "teaching," and "learning" in creative ways that *incorporated* people's technologies of choice rather than treating them as disruptions, I was met with many practical considerations (enormous class sizes, curricular demands to "cover" sufficient material, the 'digital divide' among our students, etc.). The discussion itself seemed to speak volumes about how far "we" had *not* come in any paradigm shift ("From Teaching to Learning," a la Barr and Tagg, 1995) or in addressing the learning needs of the "digital natives."

It seems in fact that until this "paradigm shift" is embodied in what we are seriously doing in our profession, the place of assessment of learning—and further, the adoption of tools such as portfolios (e- or otherwise)—will suffer from severe misunderstanding and perennial fits and starts of eventual dead-end use.

What we are talking about in our discussions around learning-focused approaches in education is at heart a call for engaging one another (teachers/students) as learning subjects. Before we ever get to the questions about any particular techniques or tools or technologies, we are challenged to approach our educational environments and institutions as places to engage one another as active subjects in our own learning processes. This in turn does not call for the disappearance of ‘experts’ but their/our engagement instead as facilitators, guides, those with prior cultivated and informed experience, who can challenge other subjects in our mutual efforts to learn together. (Lave; Smith)

In this process then, "assessment" isn't something done *to* learners (or *to* teachers), but instead represents ongoing moments in process—moments of (guided) self-reflection in the process(es) of learning. Likewise, tools such as portfolios (e- or otherwise) are the artifact-occasions for such self-reflection. They fall flat if they are only somehow "scored" or "graded" by the "experts." They likewise fall flat if they are only "commented on" by their creators (supposedly showing that we have somehow "included students in their own assessment"). They offer so much more to all involved (teacher/learner as well as institution) if they are incorporated and used within the full process of guided learning/assessment.

To its detriment however, change in (higher) education suffers from a suite of often unintended characteristics embodied in educational institutions and professional educational practices. Succinctly, many administrators have expressed this through the use of the image of "herding cats." And as a result, change itself is too often experienced and perceived as an endless confusion of misguided and unnecessary fits and starts in countless directions, subject to the whims of politicians, administrators, boards, professional organizations, and public opinion.

At any given moment, if you step back far enough from your own institution, you'll find one group or another suddenly grabbing on to the latest "solution" that appears in its field of vision, under pressure from accreditors, in fear of legislative/public pressure, in the name of "accountability." For some it will be the promise of "curriculum mapping"; for others, "e-portfolios"; for some, "classroom assessment techniques"; still others will imagine that the adoption of an "assessment management system" will be their turnkey solution.

However, with so many instances of eventual "buyer's remorse," many will soon discover the disjointed, unsatisfying nature of their "solution of choice." Faculty will retire, new faculty will be brought into the legacy of unsated institutional expectations, administrators will advance to new institutions, and IR professionals will continue in their daunting task of reporting "student success" rates and related compliance data. And we will have yet to effectively ask (or answer) the core question "what are we/our students learning, and how do we know?" ...the questions "subjects of their own learning" would want to learn to ask...


(to be continued...)


Barr, Robert and John Tagg. 1995. From Teaching to Learning: A New Paradigm for Undergraduate Education. Change. Nov./Dec.: 13-25.

Batson, Trent. 2008. Digital Arrays for Evidence-Based Learning. Campus Technology. Aug. 20. http://campustechnology.com/Articles/2008/08/Digital-Arrays-for-EvidenceBased-Learning.aspx?p=1

Lave, Jean. 1996. Teaching, as Learning, in Practice. Mind, Culture, and Activity. 3(3): 149-164.

Smith, M. K. 2003, 2009. Communities of Practice. The Encyclopedia of Informal Education. www.infed.org/biblio/communities_of_practice.htm

Tagg, John. 2008. Changing Minds in Higher Education: Students Change, So Why Can't Colleges? Planning for Higher Education. 37(1): 15-22. http://www.lib.washington.edu/uwill/Barr_Tagg.pdf


Also, see my previous post from 2008 on this topic:

http://fromteachingtolearning.blogspot.com/2008/08/what-system-should-we-use-for.html

Thursday, August 7, 2008

"What System Should We Use for Assessment?"

I am working on a more complete response to several people who posted in response to my last comments about this question, but let me go out on a limb here and make the following statement:

There is no single turnkey system for managing evidence of student learning. Any system you will find today will require a pre-existing learning community that is willing to back away from familiar notions, and to reconceive the challenge of actual learning assessment-- "assessment as learning." This is not just an ideological statement, but a very practical one. There are many tools and resources being promoted today to "do assessment," or to "meet the assessment challenge," but I would say that in 98% of those 'solutions' we are instead endlessly putting "new wine in old skins."

Part of the problem is that many of these tools give us fragments of answers to our desire for learning assessment, but only fragments. Learning portfolios, for example, give us electronic ways to store learning artifacts ("Ah! Tangible evidence of actual student learning!"), but because we have no systematic, contextualized process for gathering this evidence, we end up, like a mute collection of archaeological artifacts, with electronic drawers and piles of "stuff," perhaps with some degree of micro-assessment attached to individual artifacts. Rubrics give us statements of intended learning outcomes and the standards by which we assess examples of student learning, but when we gather these with many of the currently existing tools, they are rendered into percentages of achievement, samples of populations, or--worse--back into grades.

One of the things we are struggling with in fact is not even the technology, but the conception of what we are aiming (and very able) to do that needs to pre-exist the technology. The fragmented (and at times technologically consuming) technologies send us off into seemingly endless, and eventually unfulfilling pursuits of software, hardware and related approaches that consume just enough of our time and energy to keep us from stepping back and looking at the significant, big-picture culture change we are actually trying to bring about.

I'll go out on another limb and make another statement that I expect will draw significant skepticism-- especially because we are stuck in habits and perspectives of assessment that rely on conceptions and tools from past centuries. It is as if we were trying to run the space program using the abacus and messengers on horseback. With those tools we wouldn't even be able to conceive of a practical space program, let alone carry one out. What we need to be able to conceive of is an approach to assessment that expects we will carry out not sampling of classes or sporadic collection of learning artifacts and evidence, but learning assessment of each student, each time they demonstrate what they have learned.

I am not talking here about the onerous models of micromanaged, NCLB-esque rubricization of the classroom down to the minutes of the day. I am talking instead about an approach to assessment that challenges us to conceive of our institutions in terms of systems and processes, within which assessment engages us in regular self-reflection. And those processes then are facilitated with late 20th, early 21st century technologies that enable us not only to collect artifacts electronically, but in an integrated way, to collect assessments of actual student learning, for each student, for each instance in which learning is demonstrated.

I began to suggest this a while ago in one of my earliest posts on this listserv, and at least one respondent jokingly dismissed it with reference to the possible need for some sort of medication in the mix to make it all come together. That may be a 1960s solution, but again, we need to get up to date with our understanding of the capabilities and capacities of the tools now at our fingertips-- the things that make it possible, for instance, for me to bring up in an instant a satellite image of my home, another of my sibling's home hundreds of miles away, and instant mapping directions to make the trip (and how long, approximately, it will take, by highways or avoiding toll roads... with reverse directions, with a rental car...).

Unfortunately, people in their roles as teachers or administrators are pressured to "come up with something," and so are pushed in the direction of products and services that are big on promise yet limited in the value they eventually return. Promoters of course-management systems (CMS, LMS) are supposedly integrating "assessment" into their products (some even including the wonders of e-portfolios); other vendors are selling software or services such as stand-alone e-portfolios or assessment reporting systems. The deadly combination of marketing and "accountability" pressures will eventually lead many to lock into software and services with one eye on the looming accreditation timeline. And the endless, unfulfilling cycle will continue.

Imagine what would be the "holy grail" of assessment: an approach in which each student would have her/his work assessed each time he/she demonstrated a significant learning outcome. This assessment and related evidence would be systematically gathered-- when it happens (at the point of learning/assessment), in a way that the accumulated collection of evidence could be read for an individual student, a course, an assignment, a program, or a degree. Imagine being able to bring up a student learning-outcomes transcript, not just a statistical reading of percentages of class achievement; students could use their transcript to self-assess about what they have achieved and what they need further work on. Advisors could use student learning transcripts for next-semester or transfer counseling. Those who manage co-curricular activities or student-workers could assess demonstrated student learning that happens outside the classroom--for individual students, when and where the assessment/learning takes place. And in this "holy grail" of an approach, when viewing a student assessment, a link to a portfolio would be a click away.

This isn't drug-induced imagining, or pie in the sky. It is just a small glimpse of what we are capable of right now, using the technologies that academia has so carefully neglected--at least in its institutional operations around assessment--and the technologies that others in the world are using in unimaginable ways.


Brian D-L