A taste of the real thing

by Fanny Chouc

Heriot-Watt’s interpreting students were given a great opportunity to apply their skills in a real-life setting thanks to Heriot-Watt Engage. They interpreted for the Illuminations event, held on campus on Wednesday 2 December to mark the end of the UN International Year of Light.

As part of this event, Professor Jim Al-Khalili gave a fascinating talk on the history of optics, looking at the scientists who contributed to the build-up of our current understanding of light. Students were given a unique chance to interpret his speech into French, Spanish, German and British Sign Language, working either in booths or in front of the stage.

All the students involved have been training as interpreters, but for most this was their first experience outside a classroom environment. And what an experience! They provided simultaneous interpreting to a live and online audience (the event was streamed), in an auditorium seating 450 people. This was a particularly daunting prospect for our BSL students, who were working in full view of that large audience! Students in the booths also took on a challenge for their first taste of professional interpreting: they volunteered knowing that the topic would be demanding, and in some cases they were working into their B language.

So how beneficial was this first taste of the real thing? Student volunteers saw it as a very good reminder of the key skills highlighted in class, with one of them saying: “it reminded me how important it is to stay informed not only in the field of politics and current affairs but also in the field of science”. They also valued the chance to put their skills to the test in a real, live setting, stressing that “from a learner’s point of view it was very useful to be given the chance to interpret in a professional context in front of a live audience”. The opportunity also enabled them to make the link between preparation and the actual interpreting process. But most importantly, they enjoyed the challenge, with one of them stating that “it was fun and a great opportunity”.

The feedback from the audience was also very positive, especially considering that some of these students had only started their simultaneous interpreting training three months earlier. They kept going throughout, providing a clear and lively rendition of Prof Al-Khalili’s speech in the target languages and coming up with clever strategies to convey this well-known scientist’s sometimes technical explanations.

In the end, this proved to be a very successful experience for all, and a very good warm-up for our annual multilingual debates, scheduled for Wednesday 23rd March.

The topics chosen this year are: “This House believes that new technologies are killing real human interactions” (morning debate) and “This House believes that accessing public services in your native language should be a recognised and implemented human right” (afternoon debate). As last year, it will also be possible to follow the event online and to listen to the interpreters in the booths or watch the BSL interpreters at work. Note that the BSL interpreting will be provided for the first time by Heriot-Watt students: the first ever cohort on our M.A. in BSL interpreting has reached its final year and they will be joining their peers in our annual events. So save the date, and check this link if you are interested in the live streaming.

As If We Weren't There

by Jonathan Downie

Neutrality has often been touted as one of the cornerstones of interpreting ethics. The general view seemed to be that interpreters should be so good that the multilingual event would run as if everyone spoke the same language. In other words, it should be as if we weren’t even there.

Now, I have already said publicly that I have serious doubts about using “as if we weren’t there” as a basis for our practice, but let’s pretend that it works absolutely fine and simply ask the question: what does it mean to make the event run as if we weren’t there?

For many interpreters, the answer will be that, whenever we are faced with ethical issues, we should either do nothing or stay inside our roles as interpreters. If we are asked to hold a baby while a woman has a gynaecological exam, we should say ‘no’ and explain why. If we are asked our opinion by a lawyer, we should decline. If we notice someone being taken advantage of, we should do nothing at all.

The odd thing is that the more we think about those kinds of dilemmas, the more we realise that doing nothing and standing back is the exact opposite of making it ‘as if we weren’t there.’ For instance, the fact that a witness does not speak the same language as the rest of the court automatically puts everyone involved in a weaker position than they would be in if they all spoke the same language. The jury will find it harder to pick up linguistic cues, the lawyers will find it harder to wrestle the nuances out of responses, the judge will find it harder to ensure that the witness is not being badgered, and so on. For that reason, if we don’t ask for side benches when necessary, a bilingual court becomes less fair than a monolingual one, since not all the necessary information is available to everyone who needs it.

How about mental health interpreting? My colleague Dr Robyn Dean once shared an ethical scenario presented to sign language interpreters, which goes a bit like this.

You are interpreting for a Deaf person who is receiving care from a psychologist. After the meeting, the Deaf person leaves the room and the psychologist turns to you and says, “so what do you think?” What should you do?

The ‘right answer’ given in one handbook was that the interpreter should refuse to comment, since it is neither their place nor within their training to pass judgement. Yet, if it is our job to restore things to the way they would be if we weren’t there, then refusing to pass on the kind of information that the psychologist would pick up if their patient did not need an interpreter puts both parties at a disadvantage.

Obviously, it is not the place of the interpreter to make clinical judgements on the person’s mental state. There could be a case to be made, however, for the interpreter to pass on the kinds of signals that a trained psychologist could read in a patient who spoke their language. So, it may be useful and relevant to say, ‘his signing space was small’ or ‘he tended to reverse the normal grammatical sentence order’ or, ‘when you asked him about his childhood, his signing became sharper and more intense.’

In this case, the interpreter is not doing the psychologist’s job for them but simply passing on the kind of information they need to do their job effectively. If they don’t, we could easily argue that someone seeing a psychologist with the help of an interpreter would be at a disadvantage compared to someone who didn’t need one.

If these cases seem controversial, it’s only because we are not used to actually thinking about the outcomes of our decisions. We are more used to defending our space as interpreters by telling people what we don’t do than thinking about our responsibility as interpreters and what we should do. We are not used to realising that there are consequences for every decision, especially deciding to do nothing.

In short, if it is our job to make it ‘as if we weren’t there’ then we have to realise that our work would necessarily include addressing the imbalances of power, differences in knowledge, and variations in cultural norms that arise when two people do not share the same language. Doing nothing or declining to act actually makes these differences more pronounced, which would seem to go against what we think we are doing when we try to make it ‘as if we weren’t there.’

I remain to be convinced that trying to do that is a sound basis for ethics. But I am definitely not of the opinion that declining to act is any better. There must be some better basis upon which interpreters can make decisions responsibly. What might that be? Let’s hear your views.

3rd Edinburgh Interpreting Research Summer School!

The 3rd Edinburgh Interpreting Research Summer School (EIRSS) will take place from 22 to 26 June 2015!

EIRSS 2015 offers intensive research training for existing and future scholars in any field of interpreting and will include lectures from our guest speaker Claudia Monacelli as well as leading Heriot-Watt speakers, including Professor Ian Mason. It will be relevant to researchers interested in conference interpreting as well as public service interpreting, for both spoken and signed languages.

EIRSS 2015 is open to those who are about to embark on a PhD, those in the first stages of doctoral study and those considering a change of direction in their professional career or academic trajectory.

Attendees will have the opportunity to network with world-renowned researchers in the field of Interpreting and will also have the chance to showcase their individual projects and receive feedback.

Please visit the EIRSS 2015 web page for more information about the course and the presenters, as well as details of how to apply.

We look forward to receiving your applications!

Raquel de Pedro Ricoy & Katerina Strani
Department of Languages and Intercultural Studies
School of Management and Languages
Heriot-Watt University
Edinburgh, EH14 4AS, UK

E-mail: eirss@hw.ac.uk

How do you teach note-taking for consecutive interpreting?

It’s one of those ‘how long is a piece of string’ questions. Consecutive interpreting involves listening to a speech delivered in one language in front of an international audience, taking notes and then giving the same speech in another language, making sure it is as close to the original as possible in terms of content, delivery and style. The activity is taught and practised through memory exercises, listening comprehension, summarising, abstracting and note-taking.

There is some very useful literature on note-taking for consecutive interpreting aimed both at trainee interpreters and at interpreter trainers. The most frequently cited works are Rozan, J.-F. (1956): Note-taking in Consecutive Interpreting; Jones, R. (2002): Conference Interpreting Explained; and Gillies, A. (2005): Note-taking for Consecutive Interpreting. A review of these key works by Michelle Hof can be found here.

Even though note-taking constitutes an integral part of the interpreting process, it may distract interpreters from active listening. This means that the note-taking task involves filtering and ruthless selection, as well as translation, so that the speech can then be delivered in another language. Because of the bilingual nature of the task, shorthand would not be an effective way to reproduce the original speech verbatim and thus bypass the process of filtering, as shorthand is based on standardised symbols for sounds, not meaning (Valencia, 2013: 11-12).

More importantly, the role of interpreters’ notes should be to “relieve memory” (Jones, 2002: 42) and to outsource tasks that cannot be performed by memory alone. In other words, notes should be an aide-mémoire, not a schematic representation of the entirety of the speech. Because of the mutual dependence of memory and notes and the highly contingent nature of memory, notes are highly personalised, to the extent that “no two interpreters will ever produce an identical set of notes” (Gillies, 2005: 10) for the same speech. At the same time, the majority of speeches tend to be formulaic, to the extent that they “present the interpreter with a limited range of the same problems, for which effective solutions have already been worked out and are applied by many, many interpreters” (ibid.). This means that despite the contingent and subjective nature of notes, there exist basic principles of note-taking in consecutive interpreting that can be taught (Valencia, 2013: 14).

Despite this, there is no one-size-fits-all note-taking system, which poses a particular challenge for learning and teaching. The basic principles mentioned above are supposed to become “internalised” (Gillies, 2005: 10) and ultimately individualised into a personal style that meets the requirements of any given speech, speaker or setting. This is easier said than done.

The current learning experience involves teaching students some basic note-taking symbols and abbreviations of terms that occur in most speeches, as well as strategies for noting down numbers, links and tense and for separating ideas. Learners practise interpreting speeches using no notes, minimal notes, symbols only, numbers only, etc. They are also encouraged to share their notes to see examples of different note-taking styles, and even to try to reproduce the original speech based on other people’s notes. However, they do not get an insight into how different styles of notes are produced: how quickly the interpreter takes notes, how much of a time lag there is in producing them, how information is selected, which language is chosen for note-taking and so on. Class time is too limited for carrying out these activities and for helping learners develop the creativity required to assimilate the techniques taught and make them their own.

Uploading pre-recorded videos of real-time note-taking to a virtual learning environment such as Blackboard might therefore be useful for learner practice. The videos would not be prescriptive; they would be meant to trigger reflection and generate ideas. This would save class time and create the space necessary for students to be creative, experiment and develop a personal note-taking style. It would also offer an insight into the professional world by demonstrating different types of real-time note-taking. The opportunity for reflection is important, as students can go back and deconstruct the process while exploring and developing their own efficient system. In this way, they are encouraged to be “active makers and shapers of their own learning” (JISC, 2009: 51).

It takes months, even years, of experience and practice for interpreters to develop their own efficient, tried and tested system of note-taking for consecutive interpreting. Pre-recorded note-taking videos may enhance the learning experience through experiential and authentic learning, demonstrating how memory and note-taking work together to produce a semantically accurate and fluent speech in the target language. As a follow-up, it would be useful for learners to upload videos of their own note-taking and share their reflective process with colleagues, justifying their selection choices, symbols, techniques and so on. A wiki for sharing ideas and practice material could then be developed. Class time and setting are simply too limited for such a task.

Interpreting Needs Troublemakers

Author: Jonathan Downie

I was in London on Saturday for a meeting and I got chatting to some fellow interpreters about the ways that research is challenging how we think about and practise interpreting. Here in LINCS, for example, Robyn Dean is arguing for us to fundamentally shift how we think about ethics, Penny Karanasiou is asking tough questions about the role(s) of interpreters in business negotiations and I am beginning to think that experienced clients might have more helpful views of our work than we do!

All this spells trouble. Doing research like this means threatening some of the most cherished ideas of our profession. Who doesn’t like to cling to the comforting thought that we know better than our clients about, well, everything? If you start talking too openly about problems with mainstream interpreting ethics, you remove one of the few firm foundations of our profession. And as for discussing whether interpreters can do more than “just interpret”, it’s probably safer to leave that well alone!

But the thing is, all the good researchers I know are very bad at just leaving things alone. ‘Safe’ is not a word we tend to like. In fact, on Saturday I was accused of enjoying stirring things up. Me? As if!

All joking aside, I do really think that challenging preconceived ideas is exactly what our profession needs. If we discover flaws in our practice or training or in the way we sell our work then of course, it must be confronted. This is where research is at its best. When researchers get their hands dirty and ask difficult questions, sparks begin to fly.

Take Robyn’s work in interpreter training. Rather than just sit back and criticise, she actively trains interpreters to apply the case conferencing techniques used by doctors. I know of many other researchers who do groundbreaking research and then take the brave step of presenting it to professionals so they can apply it.

If interpreting is to thrive in today’s high-tech, always-on world, we need to be able to adjust. This doesn’t just mean adopting some new technology or learning to be fashionable. It means asking the tough questions about what we need to change in our practice to meet our clients’ real needs and growing expectations.

Is it scary? Yes! Is it necessary? You bet. But that’s why I do research: to do work that can benefit the wider world. Maybe it’s time we all did the same.

Back to School?

by Katerina Strani

The new Academic Year has started and LINCS is full of students again. It’s good to see enthusiastic freshers, new MSc and PhD students as well as old familiar faces.

But even though undergraduate students get a break from uni during the summer, staff and postgraduate students are busier than ever. So what did we do over the summer?

  • Held the annual Edinburgh Interpreting Research Summer School (30 June – 4 July): intensive research training for existing and future scholars in any field of interpreting. Five days of seminars on research design and methods, lectures on current trends in conference, public service and sign language interpreting, workshops on everything from writing a literature review to maximising research impact, and presentations by participants. Oh, and guest lectures by Barbara Moser-Mercer and Franz Pöchhacker.
  • Held the annual Applied English and Interpreting Summer Course (4–22 August): intensive interpreting training (CPD) for professional interpreters. One week of British Culture and Society, British and Scots Law and public speaking, followed by two weeks of intensive consecutive and simultaneous interpreting into English, including multilingual mini-conferences.
  • Ran Academic English programmes to enable students to reach the required English language entry levels and to prepare them to study in a UK context. 450 students attended 12-, 6- and 3-week courses, with an overall pass rate of 98%! These courses use Access EAP: Frameworks, co-authored by Olwyn Alexander, Academic Director of the English section and nominated for an ELTon award in 2014. The pre-sessional courses are accredited by BALEAP and were inspected for re-accreditation in August. Innovations this year include a strand of subject-specific seminars to prepare Business Management students to engage with postgraduate study. There was also a series of Open Days within academic Schools to welcome new students to the university.

We’ve also been busy with Public Engagement activities, such as:

  • A BSL summer school for school kids, voted the No. 1 school experience day this year! For more information, contact Gary Quinn.

Finally, we secured funding for three collaborative research projects:

1. Dr Raquel de Pedro Ricoy secured AHRC Research Innovation Grant funding under the Translating Cultures theme. The project, entitled “Translating cultures and the legislated mediation of indigenous rights in Peru”, to be conducted over 20 months (October 2014 – June 2016), has been awarded over £200,000. The aim of this project is to examine translation and interpreting processes between Spanish and indigenous languages in contexts of consultation between agents of the state, outside bodies and members of the indigenous communities against the background of escalating industrial exploitation of the natural resources lying below indigenous lands. The research team includes Professor Rosaleen Howard (Chair of Hispanic Studies, Newcastle University) and Dr Luis Andrade (Pontificia Universidad Católica del Perú, Lima), and will work with Peru’s Ministry of Culture and the NGO Servicios Educativos Rurales as Project Partners.

2. Professor Jemina Napier also secured AHRC Research Innovation Grant funding under the Translating Cultures theme for a project entitled “Translating the Deaf Self”. The project will be conducted over 18 months (January 2015 – June 2016) and has been awarded over £198,000. Its aims are to investigate translation as constitutive of culture and as pertinent to the well-being of Deaf people who sign and rely on mediated communication to be understood and to participate in majority society. The deaf-hearing sign bilingual research team, co-led by Professor Napier and Professor Alys Young (Professor of Social Work Education & Research at the University of Manchester), will include deaf researcher Rosemary Oram and another deaf research assistant, and will work with Action Deafness in Leicester as Project Partner.

3. Dr Katerina Strani secured funding from the European Commission Directorate-General for Justice for a project entitled “RADAR: Regulating Anti-Discrimination and Anti-Racism”. The project involves 9 partners and will be conducted over 24 months (November 2014 – October 2016); Heriot-Watt has been awarded over £33,000. The aim is to provide law enforcement officials and legal professionals with the tools needed to identify “racially motivated” hate communication. For this purpose, a communication-based training model will be developed for professionals at the national level and for trainers at the international level, as well as online learning resources. Finally, the project aims to produce a multilingual publication with concrete tools, recommendations and best practice examples to facilitate anti-discrimination and anti-racist actions and regulations.

So after a busy summer, it looks like we have an even busier year ahead.

Bring it on!

Whose job is it to make you a translator?

It’s a common complaint. A number of students graduate from translation and interpreting courses only to find, to their horror, that their courses have prepared them for the technical and linguistic aspects of translation and interpreting but have not assured their career success. Outside the feathered nest of a university programme, they discover that clients are not clamouring to work with them and that (shock!) they must find ways to get clients themselves.

It is very easy to blame the universities for this. It might seem perfectly reasonable for students to think that, if they are paying for a translation degree, their degree will make them translators. It will not.

The truth is that, even in four-year degrees, there simply isn’t time to give students all the skills they will need to establish a career in translation or interpreting. Besides language skills, research ability and flexibility, freelancers need to understand and use marketing, negotiation, pricing, accounting, networking, presentation skills, writing, and much more besides. Many of these skills will even be used differently in different sectors of the same industry.

It is unfair to expect students to emerge from any degree as complete freelancers, ready to face the world. The reality is that they have much more learning to do, even after getting their first job or first project.

This, of course, does not entirely exonerate universities from any responsibilities. There are good reasons why students should expect that their degrees will at least introduce them to market realities and that their course will have some sort of connection to the world they will enter when they graduate.

This is why Heriot-Watt University, like many universities in the UK, is pleased to hold (in partnership with ITI) Starting Work as a Translator or Interpreter events every year for final-year and masters students. At such events, students get vital introductions to freelancing, and even to staff positions. Rather than filling in gaps that “should” be covered in the degree, such events show that it is possible for academia and the market to cooperate in making sure that students are ready for their next stage of learning.

The key to all this is partnership. In most countries, even the biggest professional associations have neither the time nor the expertise to create the infrastructure for fully training hundreds of students every year. Universities do. They also find it much easier to accept the inevitable fact that not all students trained as translators or interpreters will find their way into these professions.

On the other hand, universities, due to resource restrictions, are not able to provide the kind of career-long support to professionals that their associations are increasingly offering. In fact, such support is, quite correctly, normally not within their remit.

The point is that no one becomes a translator or interpreter simply by getting a degree. It takes time, perseverance and, crucially, a decision to take part in your local (or not so local) professional community. All of this takes place as students and new professionals learn to apply their university training to real-life situations and to make decisions about further training. We are trained in the classroom but become professionals at the wordface.

Vow of Silence: One week later

(After a week of self-imposed silence, acknowledging the British Deaf Association’s Sign Language week, Professor Graham Turner reflects on a week in a signing world.)

I don’t remember ever being described as ‘Christ-like’ before.

There was a considered and thoughtful explanation. But the starting-point for the person’s comment was a reference to the ‘sacrifice’ that I was making by choosing not to speak for a week.

Which, of course – if you think about it for just a moment – leads inevitably to reflecting on what British Sign Language users experience every day in their encounters with the hearing world. It’s obvious that if I’m ‘making a sacrifice’ by not using speech, it’s considered desirable to speak.

What happens if you don’t?

Well, here’s what happened to me. It’s a kind of insight into what Deaf people routinely face.

People immediately started treating me as if I were invisible. Their logic was: if he can’t speak, then he can’t hear, so he’s irrelevant. Implication? Ouch.

I couldn’t do the everyday things hearing people do just to show that they’re friendly and human. Getting off the bus, I couldn’t thank the driver. When a delivery arrived, I couldn’t pass the time of day with the courier. These things don’t seem to change the world – but they do. There is such a thing as a society. It’s built on these little moments.

At work, too, it’s amazing how much of the important stuff happens in the corridors and the staff kitchen. That quiet word in the Head of Department’s ear. That useful nudge about a forthcoming conference. The deadline for a research funding opportunity.

I published research referring to this very topic over a decade ago. It was still salutary to get a direct sense of its impact.

I had to rely on colleagues’ good-will to interpret for me once or twice. They knew the score and didn’t mind. But supposing this happened every week? What would that do for our relationship – if I were making frequent withdrawals from their bank of generosity? How quickly would they start seeing me as needy and irritating?

Even with little snippets of interpreting, it helped to take a moment to brief the interpreter-colleague on what I was trying to convey. Over the course of a week, those ‘moments’ added up. If I’d had hour-long lectures to deliver, that preparation time would have increased hugely. Where would I have found the time for this, whilst keeping all the other plates spinning?

In meetings, I tried writing notes for others to read out on my behalf. With my comments in front of them, and me listening, even people I knew still sometimes revised my words. With the best will in the world, my input was being distorted.

Sometimes, I couldn’t get my comments in before the meeting agenda had moved on. So I had a choice. Swallow my contribution and look like the guy who has nothing useful to offer? Or annoy everyone by bringing them back to an issue they’d finished with just to hear what I had to say?

My Deaf colleagues are able to pay for interpreters when required (with funding from the Access to Work scheme). It has transformed the workplace for many BSL users. Hearing signers can’t opt into the scheme. I’d love to maintain my ‘vow of silence’ indefinitely. Without the resource to be interpreted when necessary, it just wouldn’t be possible.

But for Deaf people, this funding – always tightly rationed – is being reduced and new demands imposed by the Department for Work & Pensions. The repercussions are catastrophic. An Early Day Motion has been created seeking a re-think.

Especially after this week, I’d urge anyone to write to their MP and ask for their signature on the Motion. It matters.

I was also reminded that the current qualification system for BSL (levels 1-6) doesn’t push signing skills to the very highest levels of fluency! Knives and forks were definitely not invented by signers. But Deaf people become adept at maintaining signed conversation despite such obstacles. That’s level 7 signing.

Driving a car means that both your hands AND your eyes are otherwise occupied. So do cars full of Deaf people lack chat? Not a bit of it. Level 8.

So I’ve made it to Friday. What have I learned? Mostly, what a lot I still have to learn.

I’m profoundly hearing, and I always will be. I can’t inhabit a Deaf person’s life, no matter what. But this week has made me reflect, and see some of these things from a different angle.

How about you?

I’m confident any hearing person would learn from the experience. Don’t do it for my sake. Do it for the person who wrote to me midweek: “I am the mother of three kids, two hearing and one Deaf. Thank you. Your vow of silence means a lot to me.”

And please tell others about it. Tell us by replying to this blog. And watch this space for our plans to make further progress on the issues.

Thanks for listening.

Author: Graham Turner

Vow of Silence: Day 4

Having committed to a week of silence to demonstrate solidarity with the UK’s Deaf sign language users, Professor Graham Turner has made it to Thursday without a squeak. Will everyone else’s luck run out before the weekend?

Imagine you’re completely blind. Can you do that? It’s not too difficult: you start by closing your eyes…

Now imagine you’re stone deaf. Not just a wee bit fuzzy round the edges, like your granddad or when you come out of a loud gig. Deaf as a post.

You can’t, can you? We don’t have ear-lids. You can’t switch your hearing off, no matter how hard you try.

This is at the root of the hearing world’s inability to comprehend what Deaf people are on about. Three key things follow from being Deaf.

One, everything the hearing world takes for granted about receiving incoming information from the world through hearing doesn’t apply. I’m on a train. The tannoy says the café closes in five minutes. If I’m Deaf, it could be a long, stomach-rumbling journey to Edinburgh.

Two, fortunately, the eye is a fantastic device. Persuasive evidence shows that Deaf people’s eyes are sharper and wired more responsively to their brains than hearing people’s. The way Deaf people do ‘being alive’ is re-jigged from top to bottom to exploit their different biological make-up.

Notice: not ‘deficient’ – just DIFFERENT.

Three, the kind of language that perfectly suits the bodies of Deaf people is signed language. British Sign Language has evolved naturally over centuries to match Deaf capabilities. Just as spoken languages work for the hearing, signing is perfectly designed to exploit the visual nature of Deaf people.

My ‘vow of silence’ hasn’t turned me into a Deaf person. If I had a heart attack right now, I can confidently predict that I wouldn’t wait for an interpreter to show up before communicating with the paramedics. I’d speak. (And I can’t NOT know that the café has now closed. Fear not: I brought my own biscuits.)

But as I can sign, and I’ve taken the time to learn from Deaf people what their experiences are like, I can get that much closer than most to seeing the world from a Deaf perspective. Our languages powerfully influence the way we think. Language both shapes and reflects our identities. I’m not Deaf, but – bearing in mind that it’s taken me over 25 years to develop my understanding – I do begin to ‘get’ what it means to be Deaf.

What about that heart attack scenario I just envisaged, though? The hearing world has often treated Deaf people as being in need of medical treatment. The urge to ‘fix’ those different ears runs deep… Deaf people say – SHOUT – “Leave us alone! We’re perfectly OK! We don’t need to be cured!”

But when a Deaf person suffers a heart attack, the real nightmares begin. The British Deaf Association’s discussion paper, launched yesterday, reports again on the life-threatening barriers BSL users face when they actually do need healthcare.

However, it being the 21st century, new ways are being found to bridge this communication gap with Deaf people. In Scotland, NHS24 has piloted the use of video technology to bring ‘remote interpreters’ into the frame. It can work, but of course it depends upon a supply of competent interpreters.

They’ve thought of that, too.

In a UK ‘first’, NHS24 is seconding a group of its staff to Heriot-Watt University’s BSL interpreting degree. That’s a commendable commitment on the part of the service. Investing in four years’ full-time training per student underscores a really serious response to the problem.

And it shows that they know it’s THEIR responsibility to make healthcare properly accessible to BSL users.

That perfectly illustrates what we need to see across the board. Public services – health, education, social services, the legal system – facing their lawful obligation to ENSURE their own accessibility.

Not just by hoping for the best, but by nurturing skilled professional interpreters. And, when it makes sense to use limited resources in this way, by providing frontline practitioners who can sign, fluently and directly, with Deaf citizens.

It’s not a pipedream. It’s a perfectly achievable goal, as other countries have already shown. It just means paying attention to informed advice, especially from the BDA, which represents BSL users nationwide. And then, when you say you will treat Deaf people fairly, it means putting your money where your mouth is.

Now that’s what I call using your imagination.

Author: Graham Turner

Machine Translation will not take your job, honest!

It’s a common theme. In [5, 10, 20] years, machine translation (MT) will be so good that there will be no human translators left. And, indeed, some trends make this idea look tempting. The move towards statistical machine translation has allowed machines to learn from the texts they are given, letting them process language at higher levels and produce more convincing results. But this doesn’t mean they will replace humans. Let’s see why.

The first reason human translators will still have work is that human language is slippery. Even if you were to compile a massive database (or “corpus”, to give it its technical name) of all the language used everywhere on the internet today, it would be out of date within 24 hours.

Why? Because as humans we love to play with, subvert and even break our own linguistic rules. Even people who hate languages love to make up new words and repurpose old ones. The biggest corpus in the world can only tell you how people used language yesterday, not how they are using it today and definitely not how they will use it tomorrow.

The basis of statistical machine translation is the assumption that the way language has been used on previous occasions is a good guide to how it should be used this time. This is why Google Translate famously translated “le président des Etats-Unis” [the president of the United States] as “George W. Bush” months after President Obama was elected. The logic behind this decision is that if “George W. Bush” appeared in that slot enough times, the phrase must be usable every time – a mistake that no good human translator would ever make!
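
To see this failure mode concretely, here is a toy sketch in Python of that frequency-based logic. It is an illustration only, not how any real MT system is built: the three aligned phrase pairs are invented, and real systems use far richer phrase tables plus language models.

```python
from collections import Counter

# Toy "parallel corpus": (source phrase, target phrase) pairs harvested
# from texts written before the 2008 US election (invented for illustration).
aligned_pairs = [
    ("le président des Etats-Unis", "George W. Bush"),
    ("le président des Etats-Unis", "George W. Bush"),
    ("le président des Etats-Unis", "the President of the United States"),
]

# Count how often each target phrase was paired with each source phrase.
phrase_table = {}
for source, target in aligned_pairs:
    phrase_table.setdefault(source, Counter())[target] += 1

def translate(source_phrase):
    """Pick the target phrase seen most often in the training data."""
    candidates = phrase_table.get(source_phrase)
    if candidates is None:
        return source_phrase  # no data: leave the phrase untranslated
    return candidates.most_common(1)[0][0]

# Yesterday's corpus dictates today's translation, election or no election:
print(translate("le président des Etats-Unis"))  # -> George W. Bush
```

Because the most frequent pairing in yesterday’s data always wins, the sketch keeps outputting “George W. Bush” no matter who is actually in office.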

Add to this the fact that the meanings of words change (something that has been mentioned elsewhere on this blog) and things look even worse for MT. And since language is bound so tightly to culture, “literal” translations are often incredibly misleading.

Here is a really simple example. In English, we have a set number of phrases we use to sign off a formal letter. We might use “Yours sincerely” or “Yours faithfully” or maybe “Kind regards”. In French, formal letter sign-offs are much longer and one of them might literally be translated as “Waiting for your response, I ask you to accept, Sir, the expression of my distinguished salutations”.

Now, statistical machine translation experts will rightly tell you that a good, trained package would not translate this literally but would look for an English equivalent. The problem is that the English “equivalent” differs with context, and finding it involves looking much further afield than MT normally does. The decision here is linked to the context of the letter (specifically, whether or not you know the name of the person you are sending it to) and not to language considerations themselves.
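
By way of contrast, here is a hypothetical sketch of such a context-based rule. The convention itself is real: British letters close with “Yours sincerely” when the recipient is named and “Yours faithfully” otherwise. The function and its parameter are invented for illustration.

```python
from typing import Optional

def english_sign_off(recipient_name: Optional[str]) -> str:
    """Choose a letter closing from context, not from the source wording.

    British convention: 'Yours sincerely' when the letter opened with the
    recipient's name, 'Yours faithfully' after 'Dear Sir or Madam'.
    """
    if recipient_name:
        return "Yours sincerely"
    return "Yours faithfully"

# The French sign-off is the same either way; only context decides the English:
print(english_sign_off("Ms Smith"))  # -> Yours sincerely
print(english_sign_off(None))        # -> Yours faithfully
```

No amount of statistics over the French source text can make this choice, because the deciding fact – whether you know the recipient’s name – is not in the words at all.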

There are lots of translation decisions that are context-based like this one and it is in these kinds of decisions that MT will always flail around helplessly. It is in these kinds of context-based decisions that good human translators will always triumph.

So where might the future lead? Well, just as human translators are becoming more specialised, so will MT engines. Research presented at the recent IPCITI conference showed that there are ways that MT – and, more precisely, post-edited MT – can work. Perhaps one area where MT will succeed is in specialised fields that use consistent language. Another view is that human translators will be called upon to make more use of their knowledge of the world, which justifies the approach of universities like Heriot-Watt that train their students in areas like international organisations and research skills alongside their technical training in translation and interpreting.

The future is bright, but the future certainly isn’t Machine Translation taking over completely from humans.