Court Interpreting: The Quest for Data

By now, no one even seems surprised at another case of poor court interpreting in England and Wales. After three government enquiries, numerous newspaper articles and judgment after judgment, it is actually becoming hard for the flaws in the new system to gain any headlines at all. After all, similar stories can only be reprinted so often. Short of a massive miscarriage of justice or a judicial ruling, it seems that nothing will prevent the current contract running its course.

While many legal professionals have openly criticised the new agreement, the real decision-makers seem convinced that it will, in time, deliver savings. It might be tempting to say that, on the evidence available, the only realistic view is that the new system has failed, but there is one problem with this argument. For the moment at least, the data on successful interpreter call-outs, quality and even cost remain solely in the hands of the contract provider and, occasionally, of government departments or enquiries. From the point of view of objectivity, this is disappointing, to say the least.

Perhaps this is where a little amateur research could come in handy. While the meaning of “interpreting quality” could and probably should be a matter for debate, whether the interpreter booked for a case arrives at the court ready for work is not. It wouldn’t take too much effort to select a UK city and station observers at its courts, watching the cases that come before them. Records could be kept of whether the interpreters booked for a case showed up, and notes could be taken on how they worked.
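As a purely illustrative sketch, the record-keeping described above could be very simple indeed. The field names and the attendance-rate calculation below are my own assumptions, not part of any existing or proposed study:

```python
from dataclasses import dataclass

# Hypothetical record of one observed hearing; the fields are
# illustrative assumptions, not a real study design.
@dataclass
class HearingObservation:
    court: str
    case_ref: str
    interpreter_booked: bool
    interpreter_attended: bool
    notes: str = ""

def attendance_rate(observations):
    """Share of hearings with a booked interpreter where one actually attended."""
    booked = [o for o in observations if o.interpreter_booked]
    if not booked:
        return None
    return sum(o.interpreter_attended for o in booked) / len(booked)

# Example with made-up data:
sample = [
    HearingObservation("Court A", "case-1", True, True),
    HearingObservation("Court A", "case-2", True, False, "no-show"),
    HearingObservation("Court B", "case-3", False, False),
]
print(attendance_rate(sample))  # 0.5
```

Even a spreadsheet would do, of course; the point is only that the unit of observation (one hearing, one booking, one outcome) is clear enough for volunteers to record consistently.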

What use would this data have? Well, for one, it would move the debate from a battle of individual stories to one where independent figures could be brought to the fore. Such a study, on whatever scale, would give a clearer picture of how the new arrangement has actually affected the everyday running of the justice system.

Secondly, such data might help to move discussions away from highly charged debates about the rights (or otherwise) of those who do not speak enough English to play a full part in court proceedings. The benefits of this are clear: once people realise that supplying bad interpreting (or none at all!) costs more than having a good system in place, they are more likely to support a fairer deal for the interpreting profession and the justice system. Simple maths tells us that it is cheaper to hold one hearing with a good interpreter than two with poor or absent ones.
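The arithmetic is trivial, but it can be made explicit. The figures below are entirely made up for illustration — the real hearing and interpreter costs are exactly the data that is currently unavailable:

```python
# Illustrative, assumed figures: a hearing is taken to cost far more
# to list and run than the difference between interpreter fees.
HEARING_COST = 1000         # assumed cost of running one hearing
GOOD_INTERPRETER_FEE = 150  # assumed fee for a qualified interpreter
CHEAP_INTERPRETER_FEE = 80  # assumed fee under the cut-price contract

# One hearing that goes ahead with a competent interpreter:
good_total = HEARING_COST + GOOD_INTERPRETER_FEE

# A collapsed hearing plus a relisted one, because the cheap
# interpreter was absent or inadequate the first time:
bad_total = 2 * HEARING_COST + 2 * CHEAP_INTERPRETER_FEE

print(good_total, bad_total)  # 1150 2160
```

On any plausible numbers, the saving on the interpreter's fee is swamped by the cost of the second hearing.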

The drawback of this data-driven approach is that it would take even greater coordination than has been seen before. Volunteers would need to agree on a method and learn to apply it consistently, then give up even more time watching court proceedings. Even after all that, still more people would need to collate the results and present them.

Perhaps this is why research in an academic context can be so expensive. Getting things right takes time and effort. Yet the cost of not doing research at all or doing it in a slapdash way can often be so much higher.