Internet search engines, digital assistants and language translators: they seem magical. And that is because they are built from subsystems upon subsystems, and even more subsystems, each addressing a different aspect of human language. All of this so that we can find cheap restaurants nearby, or translate into Danish, "Is this the train to Copenhagen?" When these systems fail, it is often because of their flawed understanding of human language. And these failures have real-world consequences, like a translation creating an embarrassing situation, or a message saying "I love you" sent to the wrong person.

In my PhD, I researched one of the fundamental aspects of computer systems that work with human language: word representations. You see, computers cannot understand the meaning of words. They need us to give them ways to learn representations of words, and these representations have to be meaningful. They should encode information that computers can use. Unfortunately, computing word representations is a time-consuming task. Even on powerful and expensive computers, it can take weeks and sometimes even months. And once the representations are computed, it is often not clear what information they contain, or how we can use that information to address the causes of those embarrassing situations.

My research has shown that a popular kind of word representation, called word clusters, is highly effective at encoding information about grammatical role. Word clusters make it easier to distinguish between nouns, verbs, adjectives and adverbs, so we can now construct systems that address tasks requiring information about grammatical role. However, constructing word clusters still takes time. So we looked at the way they are constructed and found ways to reduce this time. In some cases, training times were reduced from three weeks to three days.

In another project, we turned to word representations to make it easier for humans to read text full of abbreviations. We used a kind of representation called word vectors to construct a system that reads every single article on Wikipedia, in order to find out which abbreviations exist, what they mean, and when the different meanings are used. With this information, our system can disambiguate hundreds of abbreviations directly in place, in the sentence where they are used. This makes text easier for humans to read and avoids those moments when you have to turn to a colleague and ask, "Excuse me, what does PCB mean in this sentence?"

These are just a couple of the ways research into word representations helps systems better understand what we mean by the words we use. So next time you find yourself embarrassed by computers' clumsiness with human language, just forgive them. They're still learning.