This article first appeared in Digital Education, the free newsletter for those with a professional interest in educational ICT and Computing. One of the benefits of subscribing – apart from access to unique content – is receiving articles in a timely manner. For example, this article was published in the October ‘14 “Interim” edition. To sign up, please complete the short form on our newsletter page. We use a double opt-in system, and you won’t get spammed.
Crispin Weston, who describes himself as "a controversialist", suggests that our outlook is not progressive at all.
We have a new school year, a new Secretary of State, and before too long, a new government. So this might be a good time to take stock of where we stand with education technology.
The terminology we use has certainly changed. “ICT” is heard less and less, having unravelled into “Computing” – which takes care of technology in the curriculum – and “ed-tech”, which addresses the use of technology to support teaching and learning. And words matter: they define the conceptual ground on which the public discourse occurs.
We are still in the middle of substantive change on the curriculum front, where everyone is busy developing their programmes of study for the new Computing syllabus and wondering how they are going to teach it.
But when we look at education technology, we find that almost nothing has changed at all. In HE, the hoo-ha about MOOCs (which caused such a stir in 2012-13) is already dying out. In FE, the FELTAG report has come and gone pretty much unnoticed; and in schools, the unpublished ETAG report sits in Minister Nick Boles’ pending tray, where I suspect it will stay until the General Election. Meanwhile, the government continues with its policy of ed-tech laissez faire. Perhaps we should not be surprised when the evidence that ed-tech improves learning is so slight as to be lost in the noise.
Matching this paucity of achievement, also unchanged is the grandiosity of our ed-tech rhetoric, which continues to urge on us a vision of revolutionary transformation. The Education Foundation’s recent Learning Technology Report is a case in point. This insists that “A paradigm shift in the use of technology in education has happened”, based on “clear evidence…on what makes effective schools”. At least, that is the headline claim. Almost in the next sentence, the report admits that “There are still major barriers to the adoption of technology in Britain’s schools”; that one of these barriers is the lack of “a robust evidence base concerning the range of…pedagogies…that make the most difference when used and connected with technology”; and even a lack of “agreement about the extent and depth of [those digital skills that are essential for success]”. What the report initially declares triumphantly to be a paradigm shift that is supposed already to have swept all before it, is in fact a set of eccentric views advocated by the small group of enthusiasts who have produced the report. “THIS NEEDS TO STOP NOW”, the report shouts, apparently referring to the failure of the real world to conform to the plan.
The trouble is that the paradigm shift is not just running behind schedule—it is running in completely the opposite direction. Like Sir Ken Robinson, whose TED talk “Changing Paradigms” has attracted over 12 million views, the sort of ed-tech being proposed by the Education Foundation is predicated on constructivist pedagogies. “ICT plays a critical role and the use of a personal device is essential…in a constructivist environment [where] learners are encouraged to explore ideas and share insights using many different sources” and where they proceed to “construct representations of their understandings with teachers supporting and guiding”. This approach to teaching has less to do with some brave new world of “21st Century skills” than with an oh-so 20th century vision of progressive education. It is a theory which has dominated our schools for forty years, a period in which the performance of our schools has seen a dramatic decline when judged against international comparators. It is a theory which has been questioned by a series of writers, including E D Hirsch, Tom Bennett, Daisy Christodoulou, Robert Peal and Dylan Wiliam—and the only counter argument that I see made online is that “we never believed that stuff in the first place”.
The application of digital technology to the business of teaching could be seen as a way of encapsulating an underlying pedagogy—and the digital part of that double-act is unlikely to be effective unless the underlying educational theory is also sound. It is becoming increasingly clear that constructivism, the theory on which the application of digital technology to education has been predicated, does not work. The rain keeps coming through the ed-tech roof because the pedagogic foundations were bodged.
There comes a point in many painful creative processes when your desk is littered with scribblings and plans and failed first lines. Sometimes it is better to throw it all into the rubbish and start again with a clean sheet of paper. If this government has achieved anything in respect of ed-tech, it is the clearing away of the detritus of failed first attempts. Now, I suggest, is the time that we should be taking out that sheet of paper and starting again, working from first principles to establish how technology is going to transform education.
The more we try and persuade ourselves that the paradigm shift is already happening—or even that we collectively understand exactly how it is going to happen—the longer it will take for true transformation to occur. We should recognise that on the ed-tech surface very little has changed, very little has worked. But press your ear to the ground and you will hear the low rumble of the tectonic plates moving. The paradigm shift that is happening right now, and on which new forms of effective education technology will be founded, is the end of constructivism as the dominant theory of learning. That gives proponents of education technology plenty to think about.
Crispin Weston has spent the last 20 years arguing that better data standards are an essential prerequisite for effective education technology. He has worked on a variety of projects with BESA, the DfE, and US-based LETSI. He challenged alleged irregularities in Becta’s procurement of Learning Platforms in 2007, founded SALTIS, and subsequently worked for Becta on a project to improve the interoperability of learning content. He chairs BSI’s committee for IT in education, representing the UK on formal international standards organisations such as ISO/IEC. Crispin is a controversialist. He regularly challenges orthodox views of the role of technology in education in his blog at www.EdTechNow.net.
Whether you are moving to a new school, or staying where you are, it’s good to stand back and try to gauge what the school’s education technology is like. Why you would want to do that if taking up a new post is obvious: you want to see how the land lies so that you can start to identify any improvements that could be made.
But why would you want to do that if you’re already well-established in a school? It’s really because things you put into place some years ago may still be in place not because they’re useful, but because they have become a kind of tradition, part of the furniture so to speak. There’s nothing to lose, and much to gain, from carrying out a fresh evaluation at least once a year.
The following criteria are not inspection criteria or anything like that. They are simply a “quick and dirty” checklist of aspects to look at.
If you found this article useful, do check out a longer, and earlier, version of the same sort of thing:
Your newsletter editor is hard at work sifting through the submissions for Digital Education, the free newsletter for education professionals. Have you subscribed yet?
Read more about it, and subscribe, on the Newsletter page of the ICT in Education website.
We use a double opt-in system, and you won’t get spammed.
When it comes to judging students’ work in Computing and related subjects, there are five things that are crucial to take into account.
Unless the point of the work was some sort of drill and practice, it should be an answer to a problem, in my opinion. It may be a pretty simple or “low level” problem, but that’s not important as long as it isn’t low level from the student’s point of view. See the next point also.
Some years ago a Head of ICT showed me the work one of his students had done. He thought it was wonderful, and expected me to share in his delight. Well, this so-called “work” was an animated video which the student had produced by moving a toy figure bit by bit and filming it with a pocket camcorder. Actually, it wasn’t bad at all for a five year old. Unfortunately, the student was twice that age. There was no discernible story line, and no discernible skill, that I’d expect from a student his age. You should know enough about programming and the other aspects of the Computing Programme of Study (IT, Digital Literacy, e-Safety) to be able to judge whether a piece of work is of a standard that you’d expect to see.
A student may have learnt X, but if X isn’t part of the curriculum, then you may have a bit of a problem. After one lesson I’d observed, I asked the teacher why he’d wasted virtually all of the students’ lesson time by having them type data into a spreadsheet, when the lesson was supposed to be on modelling.
“Well”, he said, “I thought it would be good for them to practice their keyboarding skills.”
“Fine”, I replied, “Except that keyboarding skills are not part of the curriculum, and neither was practising them the stated aim of your lesson.”
Maybe X (keyboarding skills or whatever) is absolutely vital. But if X is not part of the Computing curriculum, then why are you teaching it? You may be able to make a good case, because not everything that needs to be taught is in a curriculum, of course. All I’m saying is that this sort of thing should be explicit, otherwise you end up teaching stuff, and the students end up learning stuff, without anyone quite knowing why.
Very important this. Even the youngest pupil ought to be able to say how whatever they have done could be improved. If, for example, a child has “told” a programmable toy to go forward a certain distance, but that distance turned out to be too much or too little as far as where they were aiming for was concerned, the child should at least be able to recognise that. Hopefully, they will then be able to start to figure out what to do about it.
That brings me on to a crucial point: how much of the work did the student do individually? We are constantly being told that students must collaborate, because working in a team is a “21st century skill”. But so is being able to work things out on your own, just like most of us do most of the time in our everyday lives. Besides, how do you even begin to work out an individual student’s contribution when the work has been done through a team effort? It ain’t easy.
Since Michael Gove, England’s then Education Secretary, announced that Levels were not fit for purpose – the purpose being to assess and describe students’ proficiency in National Curriculum subjects – there has been a proliferation of attempts to assess Computing without using Levels. Many of these have taken the approach, quite naturally, of devising a progression grid of some sort. All the ones I’ve seen break the grid down into the Computing Programme of Study’s component parts, viz Computer Science, Digital Literacy, Information Technology and e-Safety. Some, like the Progression Pathways document created by Mark Dorling, go even further. In the case of the Progression Pathways, for example, the categories in the grid are:
This approach has benefits, of course, not least the fact that it is fairly comprehensive (although I think more could have been made of e-safety: it’s there, but perhaps more explicit references would not have come amiss). I don’t wish to understate this: at a time when there was pretty much nothing, Mark created a document that covers the whole of the Computing curriculum, and in such a way that it would enable even the most programming-phobic teacher to at least get off the starting blocks. (You can find the most recent version in the resources section of the Computing at School website, along with a document about digital badges. You will need to join Computing at School to gain access.)
Nevertheless, one issue with grids like this is what I call the “equivalence problem”. It’s subtle, but it’s important. In a nutshell, whether or not the items on the same row in the grid are meant to be taken as being on the same “Level”, that’s how they will be interpreted by most people. So the issue is this: are we to infer that items on the same row/Level are equivalent from a “capability/skills/knowledge/understanding” – whichever term you prefer – point of view? If so, is there some underlying intellectual framework upon which this equivalence is based? If not, the grid gives a misleading impression. It would be better, in that case, to have separate grids for each of the elements, ie separate documents, each of which is independent of the others.
Why is this important? Well, in a pragmatic sense, it probably isn’t. However, I’m interested in not just whether someone understands or can do particular things, but is able to think like a computer scientist. In those terms, I think the whole is greater than the sum of its parts. A computer scientist will approach issues in a particular way. In fact, he or she will see the world in a different way. See, for example, Anna Shipman’s approach to building repairs in the article Applying computational thinking in the “real world”.
So for me, it is important to know if the items on each row are equivalent in some sense. That is because I want to be able to say something like, “Freda hasn’t completely grasped X, but as she ‘gets’ W, Y and Z, I think she thinks like a computer scientist. On the other hand, although Joe understands X, Y and Z, he hasn’t grasped W, and therefore absolutely does not think like a computer scientist.” I don’t think you can do that unless you have an underlying framework of reference, and can say with near-certainty that items on a particular row of the grid are equivalent.
It seems to me that one way to avoid the issue, or at least get around it, is to adopt the digital badges approach. There, you say that if a student has a particular bundle of skills, they earn a badge that reflects that fact. You don’t have to worry about whether one badge is equal to another, or even what the underlying framework is, or indeed if there even is one. All anyone is bothered about is the question, “Can the student demonstrate that they know or understand or can do this particular set of things?”
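If it helps to see why the badges approach sidesteps the equivalence problem, the logic can be sketched in a few lines of code. This is purely a hypothetical illustration – the badge names and skill descriptions below are invented, not taken from the Progression Pathways or any real scheme – but it shows the key point: each badge is just a self-contained bundle of skills, with no row, Level, or framework relating one badge to another.

```python
# A sketch of the "bundle of skills" badge idea.
# Badge names and skill labels are invented for illustration only.

BADGES = {
    "Sequencing": {"write a linear program", "predict program output"},
    "Selection": {"use if/else", "test a condition", "predict program output"},
}

def badges_earned(demonstrated_skills):
    """Return the badges whose entire skill bundle has been demonstrated."""
    return sorted(
        name for name, required in BADGES.items()
        if required <= demonstrated_skills  # set containment: every required skill is present
    )

# A student who has demonstrated two skills earns only the badge
# whose whole bundle those skills cover.
student = {"write a linear program", "predict program output"}
print(badges_earned(student))  # ['Sequencing']
```

Notice that nothing in the code compares one badge with another: the only question ever asked is whether the student can demonstrate a particular set of things, which is exactly the point made above.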
Good starting points for the badges approach are the Makewaves curriculum badges and the article about the Progression Pathways digital badges.
I think it’s a pity when people start creating bronze, silver and gold badges, but that’s a discussion for another day!