Some years ago I was approached by Julie Lindsay and Vicki Davis with an invitation to be a "meta-judge" on the Horizon 2008 Project. It was a great honour to be asked, and I hope my judgements are received in a positive way.
But, as is usually the case with this sort of thing, it did raise doubts in my mind about the value of rubrics for this type of activity. They are useful, but they are also limited, and not nearly as objective as one might think.
You can read about the Horizon Project by clicking the link given above. In a nutshell, it involved students from several countries collaborating with each other to do research into how modern technology is affecting various aspects of modern life (government, education, health and others). The end product, besides the wiki itself, was a video submitted by each student. These have been judged by a number of educationalists, who decided on the winner in each of the 13 categories. My role as "meta-judge" was to decide which of these 13 finalists was the ultimate winner.
I have to say that this was not an easy task despite having the rubric to guide me. It isn't easy on a human level, if I can put it that way. The trouble with identifying one winner is that by doing so you automatically identify 12 "losers"! I would hope that those 12 don't see it that way. The quality of all the videos was extremely high, and there are even one or two that didn't come out on top that I will have no hesitation in using in my own work (with full credit and citation given, of course). To end up as one of just 13 finalists is good going, and all of the students should feel proud of themselves.
Indeed, even those students not in the final line-up did a fantastic job. Just look at the wiki and you will discover a cornucopia of ideas and resources, almost all of which were put together by the students.
You should also, incidentally, visit the wiki to see how Julie and Vicki organised the project, using such approaches as student project managers. (If you would like an insight into how that worked and what it entailed, from a student's perspective, read the article by Casey Cox in the June 2007 edition of Computers in Classrooms*.)
The rubric I used is called Rubric 1, Multimedia Artifact, and may be found here. As rubrics go, it isn't bad at all. It's shorter than many, which is good, because the longer (ie more detailed) a rubric is, the easier it is to apply, but the less meaningful it becomes. The reason is that once you start breaking things down into their component parts, you end up with a tick list of competencies which, taken together, may not mean very much at all. That is because the whole is nearly always greater than the sum of its parts, so even if someone has all of the individual skills required or, as in this case, has carried out all of the tasks required, the end result may still not be very good. So you end up having to use your own judgement about how to grade something, which is exactly what a rubric is meant to avoid in the first place. Let me give you a concrete example.
One of the sentences in the rubric reads:
"Content is constructed from a superficial synthesis of information on the wiki."
That seems straightforward enough, until you come across a case where the content on the wiki page is itself superficial -- in which case the right thing for the student to have done would have been to ignore the wiki page altogether and put in some fresh insights. But if they had done that, they wouldn't get credit for using the information on the wiki page. In other words, it's a no-win situation which actually penalises the student who exercises her own judgement.
I think the main problems with rubrics in general can be summarised as follows:
1. Do the individual criteria reflect what it is we are trying to measure? This is (broadly speaking) the problem of validity.
2. Are the criteria "locked down" sufficiently to ensure that the rubric yields consistent results between different students and between different assessors (judges)? This is known as the problem of reliability.
3. Are the criteria too "locked down", which could lead to an incorrect overall assessment being made (the validity problem) or assessors introducing their own interpretations to aid the process of coming to a "correct" conclusion (the reliability problem)?
4. Does the rubric emphasise process at the expense of product? It is often said that in educational ICT, it's the process that's important. Well actually, that is not entirely true, and we do young people a grave disservice if we fail to tell them so. If you don't agree with me, that's fine, but I invite you to consider two scenarios, and reflect on which one is more likely to happen in real life:
Imagine: Your Headteacher or Principal asks you to write a report on whether there is a gender bias in the examination results for your subject, in time for a review meeting next Wednesday. You can't find the information you need, so you write a report on the benefits of blogging instead. You desktop publish it so it looks great, and even burn it onto a CD for good measure. To add the icing on the cake, you even make a 5 minute video introducing the topic in order to get the meeting off to a flying start.
The boss says:
"Wow, that is fantastic. It's not what I asked for at all, but let's face it, it's the process that's important. Let me raise your salary."
The boss says:
"What is this? I asked you to produce a report on gender issues. If you can't follow a simple instruction like that, do you really think you're cut out for this job?"
OK, I know that both responses are slightly far-fetched, but hopefully I've made my point.
Which also leads me on to another thing. I think some of my judgements may have come across as a bit uncompromising. But I really do not see the point of saying something like "Great video", or even "Poor video", without adding enough information for the student to get a good idea of why it was good or poor, and how to improve their work and take it to the next level in the rubric.
Getting back to the issue of interpretation, I am afraid that, in the interests of accuracy and of giving the students useful feedback, I introduced some of my own criteria. Well, I was the sole meta-judge, a title so grand that I felt it gave me carte blanche to interpret the rubric as I saw fit. Lord Acton was right: absolute power really does corrupt absolutely.
The extra criteria I applied were as follows:
1. Did the medium reflect the message?
To explain what I mean by this, let me give you an example of where it didn't. In one of the videos, the viewer was shown some text which said that businesses can now make predictions. This was then followed by a photograph of chips used in casinos. So, unless the video was intended to convey the idea that predictions can now be made which are subject to pure chance, which I somehow doubt, that was a completely inappropriate message.
2. Could I learn what I needed to know about the topic without having to read the wiki? If not, then I would be at a loss to explain the point of having the video, unless question #3 applied. This includes the question: is the information given actually meaningful? Take that point about businesses now being able to make predictions. Businesses have always made predictions, so that statement tells me nothing. What I want to know is: how does communications technology aid forecasting, and does it make the process more accurate?
3. Did the video inspire me to want to find out more, or to do something, even though there wasn't much substance to it? If so, and if that was at least partly the aim, maybe that would be perfectly OK. I'd take some convincing though.
4. Did the video only synthesise the information on the wiki, or did it do more? The word "synthesise" implies adding value in some way: it's more than merely "summarise". But if the information was of a poor quality, did the student deal with the matter effectively or merely accept the situation?
As for method: in every case I watched the video before reading the wiki, because I wanted to come to it with as few preconceived ideas as possible, and to see whether the video was able to stand on its own. I then read the wiki and re-watched the video (sometimes more than once), looking for specific things.
To see the list of all videos submitted, look here (ignore the ‘Ning’ links).
If you have any views on using rubrics, I'd love to hear them -- especially if you completely disagree with anything I've said in this post!
* The Computers in Classrooms newsletter was launched in the year 2000, and is now called Digital Education. It has thousands of subscribers, who say things like:
“Many thanks Terry, for your contributions. I am pointing my students to your site.”
“WOW! I don't know what possessed me to link to your newsletter, Terry, but you have captured, in the sample issue sent to me, what seems to be the "top ten" list of significant educational issues and boiled them down into - "manageable undertakings?!" Don't let the teaching colleges find out about this, or they will run you out of town.”
“I greatly respect the ethics you bring to your work!!!!”
It’s free to subscribe, and there’s more information here: Newsletters.
This is a modified version of an article that was originally published on 9th June 2008.
Here are the most-read articles on the ICT & Computing in Education website.
I've been trawling the archives!
13 reasons to use educational technology in lessons
First published on 3rd March 2011, this post continues to have thousands of views each month.
The importance of mobile phones in education
This was first published in 2010 in the newsletter Computers in Classrooms, which is now called Digital Education. It was written by a school student.
25 features of outstanding ICT lessons
Another article published in 2010, this continues to be one of the most popular articles on this website.
24 must-have features of computer labs
I still think there is a place for computer labs, so I took a chance and wrote this article a couple of months ago (at the time of writing), in August 2016. I was very pleasantly surprised to discover that I'm not alone in my views, given the number of times the article has been accessed.
5 mistakes I made when teaching Computing, by William Lau
This guest post by teacher William Lau garnered hundreds of views within the first few hours of being published. It's both incisive and well-researched. It was originally published in the Digital Education newsletter in July 2016, and then republished on this website on 7th October 2016.
I hope you find these articles interesting. As you've seen, a couple of them were published first in the Digital Education newsletter. Another one (about the characteristics of outstanding ICT lessons) has a much longer free download supporting it, available only to subscribers. So, if you'd like to read some great articles before they become more generally available, and fancy some good free stuff too, then subscribe to Digital Education now! It's free. Here's the link: Newsletters.
Thinking out loud...
Here's a curious thing. One can often best understand modern life by understanding opposites.
The Way of Life
For example, Lao Tzu, in his Tao Te Ching, said:
"Those who know do not talk, and talkers do not know."
He was referring to spiritual enlightenment, but the same principle applies in other spheres.
The Yin-Yang symbol
Another example: when, several years ago, the Government in England introduced a policy called "Supporting People", in the area of social care, I knew immediately that they intended to reduce the help available by cutting the funding available. I was right. Presumably the support was achieved by making people (the elderly, and people in general who could not live independently) stand on their own two feet.
This is what Stephen Potter, in "Supermanship" (one of his follow-up books from One Upmanship), referred to as "the petrification of the implied opposite".
The "opposites" phenomenon is also acutely observed in Parkinson's Law of reception areas. He notes that you can tell when an organisation is past its peak when they refurbish their reception area. He comes to this conclusion by stating, correctly in my opinion, that when an organisation is thriving nobody has the time to worry about what the reception area looks like.
He wrote his book before the world wide web was invented, but I believe that the same applies to websites. When an organisation or an individual suddenly unveils a sparkling new website, I always think to myself that they can't have much work coming in. I know I make changes every so often, but I don't usually have the time to just get on with it really quickly, because of other pressures (deadlines, for instance).
Introverts and extroverts
In her seminal work on introverts, called Quiet: The Power of Introverts in a World That Can't Stop Talking, Susan Cain makes the point that it is often the quietest people in a meeting or company who come up with good ideas, not necessarily the ones who do all the talking. (I'll be reviewing that book and her follow-up one shortly in these pages.)
Web pages and programs
In ICT and Computing, the simplest programs are often the most elegant and efficient, while simple websites, ie those without unnecessary animations or sound effects, provide the most pleasant user experience.
Therefore I think a good motto for kids to abide by when it comes to such things is "less is more". There is too much of a tendency these days to try and create a Swiss army knife out of every application, that is to say, programs are written that are bursting with features. (I have the sense that this trend is reversing, which would be a good thing.) When I was teaching, I always encouraged my students to write programs that did one or two things really well, rather than loads of things in a mediocre way.
I also tried to lead by example. For example, I created a calculator in Visual Basic that was designed specifically to help me work out VAT on my departmental purchases (the school I worked in had to pay that tax). Another program I wrote was a project manager specifically for managing educational ICT projects. Yet another application (using VBA) was designed to work out the rota for manning the help desk each week.
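To illustrate the "do one thing well" idea, here is a minimal sketch of what that kind of single-purpose VAT calculator might look like, in Python rather than the original Visual Basic. The 20% rate, the function names and the figures are my own illustrative assumptions, not details from the original program:

```python
# A single-purpose VAT calculator: one job, done simply.
# The 20% rate and the function names are illustrative assumptions,
# not details taken from the original Visual Basic program.

VAT_RATE = 0.20  # current UK standard rate; the original would have used the rate of its day

def vat_on(net_amount: float) -> float:
    """Return the VAT due on a net (pre-tax) purchase amount, in pounds."""
    return round(net_amount * VAT_RATE, 2)

def gross_of(net_amount: float) -> float:
    """Return the total cost of a purchase including VAT."""
    return round(net_amount * (1 + VAT_RATE), 2)

if __name__ == "__main__":
    for net in (10.00, 49.99, 250.00):
        print(f"Net £{net:.2f} -> VAT £{vat_on(net):.2f}, gross £{gross_of(net):.2f}")
```

No exchange rates, no general-purpose arithmetic, no feature creep: it answers exactly one question, which is the point.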
These applications were very successful because I resisted the temptation to keep adding features that would be nice to have, but which would possibly never be used. I can't prove it, but I had (and have) the feeling that when you add features, the complexity of the program increases in a geometrical progression, not an arithmetical one. For example, if you add two features, the program suddenly becomes four times as complex. (I know the relationship is not as precise as that statement implies, but hopefully my meaning is clear.)
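That feeling about complexity can be made slightly more concrete. If every feature can potentially interact with every other, the number of pairwise interactions a programmer has to think about grows roughly quadratically with the feature count, and the number of on/off feature combinations one might have to test doubles with every feature added. A small sketch of that arithmetic (my own illustration, not a formal proof):

```python
# How the "cost" of a program can grow faster than its feature count.
from math import comb

def pairwise_interactions(n_features: int) -> int:
    """Pairs of features that could interact with each other: n choose 2."""
    return comb(n_features, 2)

def feature_combinations(n_features: int) -> int:
    """On/off feature combinations you might have to test: 2 ** n."""
    return 2 ** n_features

for n in (2, 4, 8):
    print(f"{n} features: {pairwise_interactions(n)} interactions, "
          f"{feature_combinations(n)} combinations to test")
```

Going from 2 features to 4 doubles the feature count, but takes the possible interactions from 1 to 6 and the combinations from 4 to 16 -- which is the spirit, if not the precise letter, of the claim above.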
Once again, the law of opposites is at work: by attempting to make the program better by adding more features, you potentially make it worse: slower, and with more potential for going wrong.
An earlier version of this article was first published in my newsletter. See the newsletters page for details.
Please note that the links in this article are Amazon affiliate links.