Good morning. Good morning, ladies and gentlemen. Are you getting ready? Can you hear me? Good morning. You're very welcome on this second day of our conference and to our second keynote speaker, Professor Sharon O'Brien. I'm not going to spend a lot of time introducing her. I'll just say that we're really pleased to have her here as a keynote speaker, because she really represents a vibrant part of our profession and our academic field. Sharon comes from Dublin, and she works at Dublin City University in the School of Applied Language and Intercultural Studies. She's also director of the Centre for Translation and Textual Studies, and I think you all know a very good book on research methodologies in translation studies that she has co-authored with Gabriela Saldanha. I know that Sharon is also working on a book called Interdisciplinarity in Translation and Interpreting Process Research, which is perhaps an even more vibrant research area than the ones that we've heard about so far. So I'm very pleased to give the floor to Sharon.

Can everybody hear me? Yes, okay, good. We have a new microphone this morning, so I think it's maybe a little bit easier to hear than yesterday. First of all, I'd like to start by thanking the committee who asked me to be one of the keynote speakers at this prestigious conference. It is truly a great honor to be here and to address you all this morning. In the invitation I was asked to talk about translation technology, because that's an area that I work in quite a lot, but also to touch on the theme of the conference, which is moving boundaries. So in the next 40 or 45 minutes, that is exactly what I hope to do.

How many of you recognize the picture on the screen? A few people? Great, a lot actually. These are dongles, which are hardware keys, as you know, for software programs. Much as we plug USB sticks into our computers these days, we used to plug in the dongles, and they would provide us with the license for the translation memory tool. This picture represents quite an innovation in translation technology and, I think, quite a shift in the translation process and in the translation profession, which I would like to talk about in a few minutes.

But first I also want to share with you the fact that these dongles personally represent a major headache for me. When they were introduced, I was working in a language service provider, and my job was to introduce translation memory tools to highly resistant translators, to document the processes in which these translation memory tools would be used, and also to post the dongles by snail mail around the world to the freelance translators who would use them. That was quite a headache, but it was also a headache when they wouldn't work and I had to troubleshoot why the dongles would not work. Even more difficult was getting the dongles back from the resistant freelance translators who didn't want to use them in the beginning. So it's an interesting image, but an image of positive and negative, I think.

In what way do these dongles represent a shift in translation technology and in the process of translation?
Well, first of all, in the translation profession, with high-volume and repetitive translation, translators no longer had to use something like Microsoft Word's compare feature to compare an old source text against a new source text and cut and paste the translation that they were already working on into the new source text. The translation memory tool would do that for them. So, a slight change in their process. Translators also now had to deal with different kinds of source text to a large extent: they had the old source text, the new source text, and then the match from the translation memory that they also had to look at. So in terms of process they weren't just translating; they were also doing cross-language evaluation and comparison, acceptability decision-making based on what the translation memory tool suggested to them, and also editing within the translation process itself. Some people have also argued that the introduction of translation memory tools forced translators to focus more on segments of text rather than the text as a whole. So that has had an impact on the process too.

In terms of the profession, then, computer-aided translation has also had an impact. You could say that translation became more collaborative as a result of TM tools, or perhaps more derivative, because translators were now using other translators' work on a daily basis. It also meant that translators no longer had to be experts in content management programs like FrameMaker, which is quite difficult to master, and that they didn't necessarily have to understand SGML or HTML. On the other hand, they had to become experts in translation memory tools, and they had to become experts in handling tags, which is still an issue for many translators these days. The price, or cost, of translation was also affected by the introduction of these tools: there was a significant downward pressure on the price of translation and, at the same time, a significant increase in expectations regarding productivity, throughput, and also quality. It was expected that translations would now be of higher quality, or at least more consistent, thanks to translation memory tools. So these tools had, I would say, quite a significant impact on the process and the profession.

Now, when we talk about translation technology, sometimes people say to me, well, you're only talking about the small translation. So, going back to Andrew Chesterman's talk yesterday: what do we include in translation? Translation technology is sometimes seen as only small translation, not that large concept of translation. But at the same time, we know that many, many translators use these tools and have to engage with them, so it's not an insignificant portion of the translation profession. In fact, in a recent survey by one of the language technology developers and language service providers, with almost 3,000 responses across 115 countries, 83% reported using productivity software, which is a euphemism for translation memory tools; 68% of these were freelancers. Other surveys that have been carried out on translation technology usage demonstrate that a lot of translators have to engage with this technology. So after the initial introduction of computer-aided translation tools and the shock to the system, we could probably say that the aftershocks have now ceased and that these tools are firmly embedded in a large portion of the translation profession.
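To make the idea of a translation memory match more concrete: TM tools score how similar a new source segment is to previously translated segments and only propose matches above some threshold. Below is a minimal sketch of such scoring using character-level edit distance; real tools use their own, typically token-based and proprietary measures, so the function names and the example threshold here are illustrative only.

```python
# Illustrative sketch only: real TM tools use proprietary, usually
# token-based similarity measures, but edit distance conveys the idea.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def fuzzy_match_score(new_segment: str, tm_segment: str) -> float:
    """Similarity as a percentage: 100% means an exact match."""
    dist = levenshtein(new_segment, tm_segment)
    return 100.0 * (1 - dist / max(len(new_segment), len(tm_segment), 1))

# A tool might only show matches above a threshold, e.g. 75%.
print(fuzzy_match_score("Click the Save button.", "Click the Print button."))
```

A segment scoring 100% is an exact match; anything between the tool's threshold and 99% is what is referred to below as a fuzzy match, which the translator then evaluates and edits.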
Yet there's a lot of research, quite recent even, suggesting that translators are not altogether happy with their translation technology. There's also some evidence to suggest that the developers of these tools have actually started to listen to the fact that translators are not always happy about the tools they have to use. That's something I'd like to come back to a little bit later.

But sticking with major shifts in translation technology, I would say that another shift has to do with the concept of data. With the production of translation memories over a significant period of time, something very valuable was produced for computational linguists: big translation data. Suddenly, after years of struggling to some extent with rule-based machine translation, translation memory data allowed computational linguists to apply probability formulas, and statistical machine translation, or the data-driven paradigm, emerged. At first this idea of generating language through numbers was not received very warmly, as you can well imagine. But it quickly became clear that statistical machine translation could improve quality over rule-based machine translation in general, at least for some language pairs, some domains, and some types of text. So after years of happily ignoring the elephant in the room that was rule-based machine translation, machine translation suddenly became something that linguists and translators had to pay attention to.

I would say that nowadays machine translation is not necessarily the elephant in the room. Many of our professional translators and our translation students see machine translation; it's very visible to them. They know about it, they know how to use it, and they interact with it. Potentially, machine translation could be seen as just another tool in the translator's toolbox, along with their dictionaries, their online glossaries, translation memories, parallel corpora, and the web as a corpus. So machine translation looms large, whether we like it or not. And since it has been introduced in a more usable format, let's say with statistical machine translation, we as educators, researchers, and professionals have tried to differentiate machine translation and the revision of machine translation, called post-editing, from everything else that is translation. By that I mean that translation from scratch, or pure human translation, revision of human translation, and revision of translation memory matches are all put into one box over here, and machine translation post-editing is put into the other box over there, because it's argued that it is different from translation and different from revision of human translation. But I'd argue that the boundaries between these two things are becoming very blurred now.

So let's think about why that might be the case. Machine translation systems are built using human-generated translation memory data. The post-edited machine translation is then added to the translation memory. Machine translation is now integrated into all of the state-of-the-art computer-aided translation tools. Sometimes the translator can choose between a fuzzy translation memory match and the machine-translated sentence, and sometimes the MT is actually better than the fuzzy match that's available. And your fuzzy matches can even be repaired to become exact matches using sub-segment machine translation.
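The probability formulas mentioned above are, in the classic textbook formulation of statistical machine translation (a standard equation, not spelled out in the talk), an application of Bayes' rule: choose the target sentence that best balances a translation model and a language model, both estimated from large amounts of parallel data such as translation memories.

```latex
\hat{t} \;=\; \arg\max_{t} P(t \mid s)
        \;=\; \arg\max_{t}\;
              \underbrace{P(s \mid t)}_{\text{translation model}}\;
              \underbrace{P(t)}_{\text{language model}}
```

The quality of the output therefore depends directly on the quantity and quality of the human-translated data from which these models are estimated.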
So those two boxes, machine translation post-editing and everything else that is translation, have started to really merge together.

But getting back to the elephant in the room: statistical machine translation has encountered a problem. We have techniques like cleaning the translation memory data to make it more consistent, tuning the engine to specific domains, and automatic post-editing rules that make machine translation quality better. But at the same time, just as the elephant in this room has hit the ceiling, statistical machine translation, it's argued, has also hit a quality ceiling, and it's very difficult for it to break through that ceiling. This is especially true for morphologically rich languages such as Hungarian or Finnish, for languages for which there is not so much data, so not so much translation memory data to be made use of, and for translation directions that are less common. For example, and somebody can correct me on this, from Greek to Japanese would be quite difficult for a machine translation system. So the time is ripe, perhaps, for a new shift in the machine translation paradigm, which will perhaps have an impact on us in the translation profession. And one suggestion is that neural machine translation, or NMT for short, will be the new paradigm.

When you think about neural, obviously you think about the brain, and if you think about a neural network, you might visualize something like this: a nice picture of a neural network with lots of connections, synapses, and neurons firing, which hopefully yours are at this point in time too. But if a computer scientist talks about a neural network, they're more likely to show you a picture like this. What they're talking about here are artificial neural networks that are modeled on the biological neural networks we have in the brain.

Now, I want to say very clearly that I'm not an expert in neural networks and I'm not an expert in neural machine translation. In fact, I would say that there are very few experts yet in neural machine translation, but there are a lot of people vying to become the experts in that area. What I can tell you is that neural machine translation is based on the concept of deep learning, or machine learning, where a computer automatically learns from data. For machine translation, that means it can create a predictive model of translation for new source material, learned from the data it has already seen. It's basically about establishing links between items, or concepts, or words, and over time those links either strengthen or weaken depending on the data that the system sees.

Now, why do we have this shift towards neural machine translation at this point in time? Because, as I've already said, statistical machine translation seems to have hit some sort of quality ceiling, and the early indications are that neural machine translation produces better quality. Also, while artificial neural networks are not new in computer science, apparently we now have the right processing power to make use of them. So could neural machine translation be better than statistical machine translation, and why? One of the main differences is that statistical machine translation is largely word- or phrase-based, so words and phrases are treated as independent units. This means that a translation can be accurate in one context but inaccurate in another.
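That earlier idea of links that strengthen or weaken with data can be illustrated with a deliberately tiny sketch: a single artificial "neuron" whose connection weights are nudged up or down after each training example. This is nowhere near a real NMT system, which stacks millions of such connections in deep networks; the toy vocabulary, target values, and learning rate below are invented purely for illustration.

```python
import random

# One artificial 'neuron': a weighted link from each input word to an output.
# Training nudges each link up or down; over time links strengthen or weaken
# depending on the data the model sees.
random.seed(0)
weights = {"cat": random.uniform(-0.1, 0.1), "sat": random.uniform(-0.1, 0.1)}

# Toy training data: (words present in the input, desired output signal).
data = [({"cat"}, 1.0), ({"sat"}, 0.0), ({"cat", "sat"}, 1.0)]

for epoch in range(50):
    for features, target in data:
        prediction = sum(weights[f] for f in features)
        error = target - prediction
        for f in features:
            weights[f] += 0.1 * error  # strengthen or weaken the link

print(weights)  # 'cat' has strengthened towards 1.0, 'sat' has decayed towards 0.0
```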
Another consequence of treating words and phrases as independent units, which some of you might have seen if you've used machine translation based on a statistical model, is that part of a sentence might be very fluent while another part of the sentence is very disfluent. Neural machine translation, on the other hand, can apparently take lexical preferences from neighboring sentences and neighboring text, and in so doing it can produce more fluent-sounding and contextually correct translation, thereby improving overall text cohesion, which is something that machine translation has been quite bad at so far.

Now, one of the big issues for neural machine translation is that it requires a lot more computational processing power than statistical machine translation, and, as I've said, we apparently have that kind of power these days, so we can start realistically thinking about neural machine translation. But it's still slower at the moment than statistical machine translation. It's very early days for the NMT paradigm, and many people are calling it hype, in the same way that they called SMT hype when it was introduced. Current research in the field is looking at whether, if you put neural MT on top of statistical MT, you can bump up the quality. Other research makes the two compete: given statistical MT output and neural MT output, is the neural output better? So whether neural MT will be better, and for which language pairs, directions, contexts, and texts, is still very much an open question, but one that is being fiercely researched in the machine translation field. And if it turns out to be better than SMT, you can bet that we're going to be dealing with neural machine translation output.

So if neural machine translation actually delivers on its promises, then machine translation will get better. But we don't know to what degree, and therefore we don't know what the implications are for us as translators, as educators of translators, and as researchers. Some machine translation researchers and developers have suggested that machine translation's biggest impact in the future will be in the production of translated content for domains and languages that would otherwise not be translated; in other words, those languages and that content for which companies and organizations say they don't have the budget to translate. That would mean that machine translation won't be taking jobs from translators; it will be focused on translating content that is otherwise not going to be translated. But even if this is the case, machine translation will not disappear from the world of professional translation, and I think it's likely that it will increase in professional translation too. So the implications are that professional translators will have to deal with machine translation more often, and that our student translators will have to know about machine translation, be able to interact with it, and be able to make decisions on when it is and is not appropriate to use machine translation.

We could worry ourselves with the question: when will translators be replaced by computers? According to the translation scholar Alan Melby from Brigham Young University, this is the wrong question to ask. Rather, we should be asking something like: when will all specifications for translation result in a recommendation for machine translation only? In other words, when will the situation arise where machine translation is always the appropriate and the best solution?
He argues that we shouldn't just train translators in machine translation; we should be training them about translation specifications, which many of us would know as the translation brief, and about how they should make decisions regarding which technology is appropriate for which particular brief. In so doing, the students become educated consultants for the decision makers who decide on the appropriate use of translation technology.

Of course, this question about translators being replaced by computers is really just part of the bigger question about human beings being replaced by computers. The point at which that could happen, the point at which computer intelligence becomes equal to or greater than human intelligence, is known, as some of you have probably read, as the singularity, which is described by people like Kurzweil. This discussion about the singularity is very much linked with machine learning; the two go hand in hand. But the machine learning experts, and especially those working in natural language processing, are telling us that machine learning is a very, very difficult paradigm. It's very complex, and at least those working in natural language processing say they don't see any sign of the machines becoming more intelligent than humans with regard to translation anytime soon. So we can maybe breathe for a little while longer.

But given the inevitability of some translators having to interact with machine translation for at least the foreseeable future, and the fact that machine translation is sometimes tedious to correct, an important question that we should be asking, I think, is: how can we make the interaction better between the translator and the technology that the translator has to use? Earlier I mentioned that there has been criticism of translation tool developers for not taking their end users into account in the development of their tools. Translators report irritation with their translation technology, and that irritation is particularly evident for machine translation, especially when the translator has to correct the same error over and over again, or when the machine translation system delivers complete nonsense that they have to look at and read. But, as I mentioned, developers are now paying attention to some of these issues, and I think that's thanks to some of the research done by people in our field. So they're implementing new features that should make life better for translators. For example, many of you will know AutoSuggest in the SDL Trados Studio tool, but we now also see developments such as interactive machine translation and on-the-fly learning from post-editing, so that, in theory, the post-editor doesn't have to correct the same error twice. This is called adaptive machine translation by one of the commercial tool developers. Ironically, on that developer's web page, very recently, they described the benefit of adaptive machine translation as, and I'm quoting, ensuring you avoid making the same MT mistakes again and again.
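The mechanism behind that kind of on-the-fly learning can be sketched in a deliberately naive way: remember what the translator changed and replay those corrections before the next MT suggestion is shown. Real adaptive MT systems retrain or bias the underlying translation model rather than doing literal string replacement, so the class and example below are purely illustrative and describe no particular commercial product.

```python
# Deliberately naive sketch of 'learning from post-editing'. Real adaptive
# MT retrains or biases the underlying model; this just remembers literal
# corrections and replays them, which is enough to show the workflow.
class PostEditMemory:
    def __init__(self) -> None:
        self.corrections: dict[str, str] = {}

    def record(self, mt_fragment: str, edited_fragment: str) -> None:
        """Store a correction the translator made to MT output."""
        if mt_fragment != edited_fragment:
            self.corrections[mt_fragment] = edited_fragment

    def apply(self, mt_output: str) -> str:
        """Replay known corrections so the same error is not shown twice."""
        for wrong, right in self.corrections.items():
            mt_output = mt_output.replace(wrong, right)
        return mt_output

memory = PostEditMemory()
memory.record("Press the knob", "Press the button")
print(memory.apply("Press the knob to continue."))  # correction replayed
```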
So note how, in that quote, the machine translation mistakes are implicitly attributed to the translator through the pronoun "you". I'm not sure they're even aware that they've written that; I thought I might write to them after I've mentioned this in the keynote.

All of these developments are very interesting. But if we have learned one thing from translation process research in the last number of years, I would say it is that translators are not computers. We are, as human beings, creatures of habit, but we have individual ways of doing things, and we see that a lot in the data that we collect from translators. So I'd like to suggest that some of these new developments in translation technology could, in theory, take this individuality into account and apply it in order to improve translators' interaction with their tools. And I'd like to spend the last piece of my talk asking how we might do that.

A key concept here, or rather two, is adaptation and personalization. These two concepts come together very often in the literature because they're interdependent: Garcia Barrios et al. say that personalizing is the same as adapting towards a specific user, and some personalization researchers define personalization as the process that changes the functionality, interface, information content, or distinctiveness of a system to increase its personal relevance to the individual. They emphasize that it's important to treat personalization as a process and not a system feature, and they differentiate between user-adaptive, automatic personalization in computer software and their preferred user-driven, adaptable systems, where the user is in charge of adapting the system. Now, personalization has been around as a concept in computer science since the 1980s, and typical areas of application are e-commerce and e-learning, which I'll come back to in a few minutes. But what I'm talking about here in terms of adaptation and personalization goes beyond what the CAT tool and machine translation developers are currently doing: when they talk about adaptation at the moment, they generally mean learning from the post-editing, so looking at what edits are done and learning from them so that the same errors are not presented again.

But first I want to look at the motivation for personalization. In the context of e-learning, it has been found that individuation of learning materials increases not only motivation but also depth of engagement, how much is learned, perceived competence, and levels of aspiration. So that's a lot of positive stuff. The question is: could we catch some of that and apply it in translation and translation technology, to make things more positive for the translator who's interacting with the technology? The two researchers whose definition I just mentioned, Oulasvirta and Blom, claim that there's a link between well-being and personalization, and they argue that personalization can lead to better autonomy, competence, and relatedness. So let's look at these three concepts for a moment.

Autonomy is about freedom and unpressured willingness to engage in an activity, and it is apparently negatively affected by surveillance, evaluation, and deadlines. Now, evaluation and deadlines are two things that translators are very used to, and surveillance is really interesting, because we now see some of the features of our research tools being implemented in computer-aided translation tools for the purpose of logging, keyboard logging and so on, which could be construed negatively as surveillance.
But I want to come back to this later, because I want to argue that it could also be used positively, for the translator. Competence, then, is seen by the personalization researchers as a psychological need that provides an inherent source of motivation for seeking out and mastering optimal challenges. You can imagine that, with the imposition of machine translation, this psychological need might be compromised. So the important question is: how could personalization of translation technology contribute to supporting competence, rather than making translators feel that their competence is being drained or attacked? And relatedness is the need to establish close emotional bonds and attachments with other people; as the task becomes more computerized, it arguably becomes more dehumanized, so this is an issue. Some of you will already have drawn links, I think, between these really interesting concepts and the work that's already going on in translation studies in these areas, like productivity tracking in CAT tools, ergonomics and well-being of the translator in the workplace, and translator perceptions of translation technology.

So that's the motivation; let's assume that personalization could be positive for translators. One of the questions is: how is it done? Personalization is usually done through user profiling. Now, if any of you are tweeting at the moment, or if you've been on Facebook or LinkedIn this morning or yesterday, or if you've bought something from Amazon in the last year, you are being profiled as a user, whether you know it and whether you like it or not. So we're all being profiled one way or another in how we use technology, and we may not like that, but I want to argue that user profiling could be very beneficial for translators and translation technology. In e-commerce, for example, we can profile users by looking at their likes and dislikes, how long they spend reading a page, how many click-throughs they make from a specific page. So you can already see how a translator who is doing web searching could be profiled for the domains they're interested in, and for which resources they trust and which resources they maybe don't trust so much. I talked about the keyboard logging that we have started to use in research but that is now making its way into computer-aided translation tools, and also eye tracking. These things gather a lot of data, which could be used negatively by means of surveillance, but which could also be used very positively: to find out about an individual translator's behavior and their thresholds regarding fuzzy matches, regarding the quality of machine translation, regarding how often they feel they have to check a term that comes from the machine translation system, whether they trust the MT output or not.

Context also plays a very important role in user profiling in general, and context is really important for translation too. So we could take the context, or the specification that we've been given for the translation, and tailor our machine translation or translation technology tool to that context. If we have to produce very high quality, for example, we might turn the machine translation output off; if we're in a really big hurry and quality, dare I say it, is not absolutely important, then we might turn the machine translation system on. So there are various ways in which this could be beneficial to translators.
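As a thought experiment, a translator profile of the kind described here might be represented along the following lines. Every class name, field, and threshold below is invented for illustration; no existing CAT tool is being described.

```python
from dataclasses import dataclass, field

# Hypothetical per-translator profile, built up from logged behavior.
# All names and values are invented for illustration.
@dataclass
class TranslatorProfile:
    fuzzy_match_threshold: int = 75        # hide TM matches scoring below this
    trusts_mt_output: bool = True          # does this translator want MT shown?
    preferred_domains: set = field(default_factory=lambda: {"medical", "legal"})

def configure_session(profile: TranslatorProfile, required_quality: str) -> dict:
    """Tailor the tool to both the translator and the brief (the specification)."""
    # e.g. switch MT off for high-stakes jobs, on when speed matters most
    show_mt = profile.trusts_mt_output and required_quality != "publication"
    return {
        "show_mt": show_mt,
        "min_fuzzy_score": profile.fuzzy_match_threshold,
        "preload_termbases": sorted(profile.preferred_domains),
    }

profile = TranslatorProfile(fuzzy_match_threshold=80)
print(configure_session(profile, required_quality="publication"))  # MT off
print(configure_session(profile, required_quality="gisting"))      # MT on
```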
Of course, all of this brings up the question of ethics. User profiling takes four aspects into consideration: content, acquisition, privacy, and trust. I've already dealt somewhat with content, that is, learning about what the translator is interested in, and with acquisition, using keyboard logging, eye tracking, and so on to gather data about them. But I haven't yet dealt with privacy and trust, and these are two very important issues. Obviously, an individual translator would have to give consent to be user-profiled so that the technology could be tuned to their benefit. The translator would also have to be convinced that allowing their activity to be recorded would in fact lead to a better technological experience for them in the longer term, and that it wouldn't expose any of their weaknesses. And that touches on an important thing, which is user profiling via the cloud. If we're on the internet doing things, there is obviously software there that can gather information about us, and this is one of the issues that translators have with online machine translation systems, for example: that confidential data is being transferred to the cloud. Because of these privacy and data confidentiality concerns, we now see new developments in machine translation such as Slate Desktop, which sits only on the individual translator's desktop, is not connected to the internet in any way, and uses only that translator's data to create a personal machine translation engine for their own purposes. So these are interesting developments.

I have only one minute left, apparently, so I want to move on and look forward. This is an interesting quote from Robert Dale: as technological capabilities move forward, the reality horizon, or what counts as believable, moves forward too. If we go back to when the dongles were introduced, our reality was somewhat different from what it is today, and our reality in the next ten years will be different again. So we don't really know what lies ahead, but we need to be prepared for it, and we need to prepare our students for it. According to a 2016 report from an association of German research funders, universities have to do more to prepare their students for the effects of digitalization and automation on knowledge-based professions. They expect that the demand for academic skills will rise in the next 15 years and that new professions will be created at the boundaries of disciplines. I think this is really, really interesting, because we need to ask ourselves: what are these new professions at the boundaries of the translation discipline? I don't have an answer to that question, I'm just going to throw it out there, but there are interesting answers already. We see attempts to merge, for example, technical communication training with translation training, which makes sense to me. But can we think of other exciting interdisciplinary roles, where translation, and indeed interpreting, competence is essential, that will help us to move forward? So, I hope I haven't depressed you too much with all this talk of machine learning and being replaced by computers. I want to leave you with a little bit of comfort, perhaps cold comfort, again from Robert Dale. Thank you all very much for listening.

It is working, it just took a little time. So the floor is yours: who would like to ask the first question for Sharon?

You said that statistical machine translation has hit this quality ceiling. Is that due to the fact that a lot of the data are already generated by computers?
I think it's probably more due to the fact that the science behind it can't push the quality any further. That would be my reasoning for why it has hit the quality ceiling: we need new research in natural language processing to push that ceiling higher. Do we have another question, please?

One of the things I was surprised you didn't mention was the use of voice recognition with translation memory tools. To what extent do you think that's blurring the boundary between the traditional work of interpreters and that of translators?

I didn't include voice, but I think that voice is going to be a really important technological input into the tools in general. Already we have people asking the question: can we post-edit with voice? In other words, can we take machine translation output and, instead of textually editing it, either accept it if it's okay or, if we want to edit it, re-speak it in a way that already has the errors corrected? So, is this blurring the boundaries between interpreting and translation? Possibly, but I don't really count that as interpreting; it's some other form of translation, it's sight translation or something along those lines. But I do think you're right that voice as input will play a very significant role in the coming years.

I think there is a burning question for many of us, because we're training translators, and we're used to thinking of ourselves as language specialists and translators. Now the profession must be changing; it has changed already, but it will be changing even more. You touched on it when you said that we should probably train our students to be technical communicators as well as translators. But do you think that will be an appealing career for young people today? I know they're used to computers a lot more than I am, but how can we appeal to students? Will it be intellectually challenging for them? Will there be status in that kind of job, post-editing mainly?

No, and that's why I think we need to look ahead into the future very seriously and think about this question of interdisciplinary professions, where people are not confined to just, you know, fixing the output from a machine, but have other skill sets that they can draw on, so they're not confined to that one thing. I think it's a very difficult question. We need to be looking beyond the next three to five years, because, as we all know, in a university things move quite slowly if you want to introduce changes and so on. What are the competences and skills that we need to introduce that will continue to draw students and also give them a meaningful career ahead of them? I don't have the answer to that question, but, as I say, maybe other people have suggestions, because we do need to think about it very quickly, I think.

I'm thrilled, because we do do that; I mean, we saw the future coming. But we need to reach people who don't think of technical communication in just the terms that have maybe been described today. I've just finished some research looking at the jobs of translators in places where they do combine communication and translation, and one of the managers in one of the banks that I looked at said something wonderful. Even though she had this double training, she said the most important course she ever took was the terminology course in her translation degree. Why? Because the amount of information that's being produced now, especially with social media, means that no one can find anything anymore. There's no standardization for the general public for things that aren't
necessarily technical terms. So someone who understands the semantics of conceptualizing whatever is being translated or written is a precious person. So there's lots of work to be done, as long as you look not at the current courses that are being offered but at the deep intellectual content of those courses.

Thank you for that. I would just add that one of the challenges, of course, is for us to express the value of that kind of skill or competence to organizations like banks and other organizations. Sometimes when they cut budgets, the terminologist is the first person to go; they don't see what the value is in terminology, or in knowing about terminology, as an example of what the value of that training is to organizations.

Yes, I have a statement, or maybe it's also a question, and it refers to what we can train our students to do in the workplace. I think one of the skills that we need now, and that we'll be needing even more in the future, is maybe the ability to adapt the technologies we already have to new tasks, because what will stay are the needs of the other people with whom we communicate and who will tell us, we need this or that from your company. A future employee might be in a position to train tools, and to combine tools, in order to produce these results more efficiently, with higher quality, and maybe also with more speed, because we all have a high work density. And yes, of course, I'd like to know what you think about this.

So your suggestion, I think, is that we should also train our translators, our students, to recognize how technologies can be merged for better purposes, is that right? Yeah. I mean, the traditional translation student is not always embracing of technology, so I think that would be challenging, but it sounds like something that could be interesting, for sure.

Thank you very much, Sharon. This has been very interesting, and I'm sure the discussions will go on over coffee and so on. But I know that Hele has some messages, at least a very important one.

This is great, okay. Just a few practical messages. I wish to remind you that for lunch today, like yesterday, we're going to be in the same place, and there'll be three rooms. I noticed yesterday that people sort of landed in the first room and never got around to the others, so there are actually three rooms that you can spread over. Tomorrow, lunch will be special: you grab a brown bag when you go out of the building, then you just hop on the bus and you get a guided tour, okay? I also want to remind you that there are water coolers on all floors. I'm not sure you noticed that yesterday, but even when there's no coffee or lunch or anything, there are water coolers, and you can always help yourself to water. I wish to remind the poster presenters that there's a change of posters today, well, that's for all of you, and there will be people there during the lunch break to help you put your poster on display. And I wish to remind all of you that you must vote for what you think is the best poster, okay? It's all on the website. The general meeting is later this afternoon; remember that voting will take place online, so you really do need to bring a mobile device with you when you come to the general meeting. It'll be at four o'clock in this room. Please also make sure that you're connected to the internet so that things can run smoothly; we may actually be able to vote very, very quickly, if only your mobile devices work. Some of you have been asking what the red dot means. It actually means
that you have signed up for the conference dinner tonight. If you don't have a red dot and you think you should have one, please go and ask at the welcome desk; they can help you with anything. I also want to point out that we have a discrepancy, in a way, because in the printed program you have the conference dinner at 7.30, whereas on the web it says 7, and it's sort of in between. If you can at all, and if we are able to finish the general meeting at 6 as planned, we would like to see you at 7; there'll be drinks, cocktails, down there. But if you cannot make it until 7.30, well, the first course will only be served at 7.30. So, drinks from 7; we would like to see you at 7, but if you're a little bit later, it's okay. That's it, thank you. Thank you.