Hi everyone, my name is Rehana and this is Ed, and we both work for a tech startup called BrightSign, where we've developed a glove that translates sign language to speech and text in real time using machine learning. In our talk today, we're going to tell you a little more about BrightSign and the glove we made: the tech behind it, the software we use, and the design of the gloves. But before we jump into that, I want you to meet someone who has been a significant influence on the foundation of BrightSign. So everyone, meet Kayden. Kayden was born deaf to an all-hearing family. His mother came to us very early on, before BrightSign, before all the gloves, wondering whether there was anything out there that would enable her son to interact with his peers freely. So we went on to do some research ourselves, and we quickly found out that Kayden was not alone at all. There are over 70 million sign language users around the world, and over 90% of them, the vast majority, are just like Kayden: born deaf to an all-hearing family. That's not an issue in itself, but of these children, only 25% have parents who can sign. That means they have a very small circle of individuals they can interact with. And on top of that, only 2% of these individuals have access to assistive technology. That is a very low number, and we wondered why. There are three primary problems that drive this number, three reasons why so few people have access to assistive technology. By far the biggest issue for people all around the world is that of cost.
These products in the marketplace today, which allow people who are deaf or can't speak to communicate with people who can't sign, are generally priced in the thousands or even tens of thousands of pounds each, and that's just prohibitively expensive for the vast majority of people. The second issue is device dependency. A huge proportion of these products also require the use of a smartphone or some other technology that you have to keep on your person. A lot of them require connectivity to the cloud at all times, and it's simply not practical to always have a connection to the internet or a mobile network whenever you want to speak to people. Nobody else has to deal with that, and the world isn't set up in a way to deal with those problems. The third issue is customization. A huge proportion of people who are deaf and use sign language don't sign textbook-perfect sign language, particularly those with other disabilities. A lot of adults can't form perfect signs because of a disability affecting their hands or their movement, and as a result they have their own variations that they use. Many young children, when they're learning to sign, form their own variations on signs, and some invent entirely new signs themselves. The current products out there don't offer that level of flexibility, the ability to truly customize what you're saying and how you're saying it, and that's something we aim to address. So what's the solution? It's a smart glove that translates sign language to text and speech. Rehana is going to give you a quick demonstration. This is a smart glove that translates sign language to text and speech. We've run it through the sound system, because otherwise you wouldn't be able to hear it coming out of the glove.
But yeah, as you can see, you literally sign, you press a button, and it speaks. In the final version we've also got a display on the glove, so if you don't want the output spoken aloud, it can be shown in text form instead. First I'm going to speak quickly about the hardware that goes into it. At the core of this prototype is a Raspberry Pi Zero, and connected to that we have several sensors. We have five flex sensors that run up the back of each of the fingers and the thumb; these measure the bending of the fingers. We also have an accelerometer on the back of the hand to measure its movement, and a gyroscope to measure orientation. Together these give us a really good picture of how the hand is moving through space and the orientation of all of the fingers, and that allows us to work out what the signs are. Next, I'm going to briefly cover the software architecture that sits underneath it: how we translate from those sensor inputs to whole sentences as outputs. We start off by taking the values coming in from the sensors and concatenating them into one single input vector. Once we've done that, we have a single feature vector, and from there we can do one of two things. First I'll talk about normal usage of the glove: you're signing and you want that translated into speech. As you can see, that's on the right-hand side of the diagram, following down. These concatenated input feature vectors go into a classifier, which classifies each gesture as a word, represented as a class in an overall dictionary that maps gestures to words. These words are then joined together to form a text representation of individual phrases, which are joined together to form entire sentences.
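The translation path just described, concatenating the flex, accelerometer, and gyroscope readings into one feature vector and classifying it as a word from the gesture dictionary, can be sketched in miniature like this. Everything here (the sensor values, the stored centroids, and the nearest-centroid classifier itself) is a hypothetical illustration, not BrightSign's actual model:

```python
# Minimal sketch of the sensing-to-word pipeline described in the talk.
# All values, gestures, and the nearest-centroid classifier are
# illustrative assumptions, not BrightSign's implementation.

def make_feature_vector(flex, accel, gyro):
    """Concatenate five per-finger flex readings, accelerometer (x, y, z)
    and gyroscope (x, y, z) into one 11-element input vector."""
    return list(flex) + list(accel) + list(gyro)

# Toy "dictionary": one stored centroid vector per word class.
CENTROIDS = {
    "hello":     [0.9, 0.9, 0.9, 0.9, 0.9, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    "thank_you": [0.1, 0.1, 0.1, 0.1, 0.1, 0.0, 1.0, 0.0, 0.0, 0.0, 0.5],
}

def classify(features):
    """Nearest-centroid classification: return the word whose stored
    centroid is closest (squared Euclidean distance) to the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda word: dist(CENTROIDS[word], features))

# Example: all fingers extended, hand at rest.
reading = make_feature_vector(
    flex=[0.85, 0.9, 0.95, 0.9, 0.88],
    accel=[0.0, 0.0, 1.0],
    gyro=[0.0, 0.0, 0.0],
)
print(classify(reading))  # closest stored centroid here is "hello"
```

A real classifier would be trained on many samples per sign, but the shape of the data flow is the same: sensors in, one vector, one word class out.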
As you can see, going down the bottom right, these are then passed to the text-to-speech engine, whose output either goes out of the DAC and the speaker on the glove or instead goes to the screen. Something we're looking to do in the future with this text output is to allow communication beyond the speaker's native language: we're planning to use IBM Watson's language translation tools so the person signing can communicate not just with people in, say, English, but with people in all other languages as well. On the left-hand side of the diagram you can see the training system. When a user wants to customize the system, add their own gestures, or edit a gesture that's currently in there, this is the system that lets them do that. They start off the same way, by performing the sign, and then they bind that sign to a new word or phrase they've added. This is then stored as part of that same word-to-class dictionary data structure, either in the cloud or on the glove. We haven't been spending all our time in the labs, of course. We have been out and about interacting with the community. For example, we are currently conducting a third academic study with four different special needs schools and local councils, with children of different abilities and from different backgrounds, where we are measuring the social impact the glove has on their daily lives. These interactions and studies give us lots of feedback, which we take on and feed directly back into the design and prototyping. We are also creating a support network so that we can reach our end users more easily, partnering with local experts in relevant fields. For example, we recently collaborated with Exa Health Tech, and through that we have access to leading healthcare research facilities.
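The training path just described, performing a sign and binding it to a new word or phrase in the gesture dictionary, might look roughly like this in miniature. The `GestureDictionary` class, the averaging scheme, and the tiny three-number feature vectors are all hypothetical illustrations, not BrightSign's actual code:

```python
# Sketch of the customization flow: the user performs a sign a few
# times, the recorded feature vectors are averaged, and the result is
# bound to a new word in the gesture dictionary. Names and values are
# illustrative assumptions.

class GestureDictionary:
    def __init__(self):
        self.centroids = {}  # word -> averaged feature vector

    def bind(self, word, samples):
        """Bind a new (or edited) word to the element-wise average of
        several recorded feature vectors for that sign."""
        n = len(samples)
        self.centroids[word] = [
            sum(sample[i] for sample in samples) / n
            for i in range(len(samples[0]))
        ]

    def words(self):
        return sorted(self.centroids)

# The user performs their custom sign three times...
samples = [
    [0.2, 0.8, 0.1],
    [0.3, 0.7, 0.1],
    [0.25, 0.75, 0.1],
]
gestures = GestureDictionary()
gestures.bind("snack", samples)  # ...and binds it to the word "snack"
print(gestures.words())          # ['snack']
```

The same structure works whether the dictionary lives on the glove or in the cloud; only where `centroids` is persisted changes.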
We've also partnered with the London Grid for Learning, the number one assistive technology provider in London schools, and through our partnership with them BrightSign has been placed at the forefront of assistive technology platforms. We have also gained some media traction, reaching more people through major networks like the BBC, where our founder went to talk about BrightSign and the glove and also did a demo on The One Show. We've also appeared on the Discovery Channel and in Forbes, as we're currently pitching for investment. This is our team: small but very talented and very diverse, put together by our founder Hadeel Ayoub. We've also won some notable awards, especially in healthcare, innovation, wearable technology, and AI in social care. Most of these awards come with support and networking, which again has allowed us to reach our end users more easily. So the next thing I'm going to talk about is some of the things we want to develop in the coming weeks and months as we go up to our release. First, hardware. I know you guys like hardware; it's why you're here. The biggest thing we really want to address is moving from a Raspberry Pi to a custom PCB. Now, the Raspberry Pi Foundation are amazing. They release new Raspberry Pis all the time with amazing updates. However, with all of these new hardware releases comes the issue that there isn't long-term manufacturing support for any one release. And when you're trying to manufacture a product that needs to be built consistently for a long time, if we were to use Raspberry Pis, then whenever new versions came out we'd have to continually update the software that goes on them to keep it compatible.
For this reason, we want to move to a custom PCB with a bog-standard microcontroller on it, connected directly to the sensors. The second thing we want to do is move all of the operational logic for the glove, all of that software, on board. As I mentioned at the beginning, some things, such as data storage, currently happen in the cloud. And while that's great, it's very efficient and makes it easy for us, it does also require cloud connectivity. One of the next things we're trying to do is bring all of that onto the glove so that there's no need for any external connection at all. Lastly, there's the question of buttons versus gestures. As you saw Rehana use the glove, she pressed a button to use it. We don't think that's necessary. After all, you're already using gestures to sign to the person you're speaking to, so why not just have gesture control? This is something we're really looking to implement; it's one of the must-haves before we launch. Next, in terms of software, we want smarter classifiers. The current classifier we're using is around 96% accurate, and we don't think that's good enough; we want to get it higher. Another issue with the current classifier is that we want a much, much larger dictionary of words, and also some sense of the context of words. If you're having a conversation about food, we want the words it chooses to be accurate given that context. If you're having a conversation about cooking, it's highly unlikely you'll need a word related to cards, for example, so we can bias the classifier towards food in that context to get more accuracy. The same is true for other topics. The next thing we want to do is two-way communication.
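The context biasing idea just described, weighting the classifier's output towards words that fit the current topic of conversation, could be sketched like this. The topics, words, priors, and scores below are all made-up illustration values, not BrightSign's vocabulary or model:

```python
# Sketch of topic-based biasing: combine the classifier's raw score
# for each candidate word with a per-topic prior, so that in a cooking
# conversation "flour" beats the similar-looking "flower". All numbers
# here are invented for illustration.

TOPIC_PRIORS = {
    "cooking": {"flour": 0.9, "flower": 0.1, "cards": 0.05},
    "games":   {"flour": 0.1, "flower": 0.1, "cards": 0.9},
}

def pick_word(raw_scores, topic):
    """Return the word maximizing (classifier score x topic prior).
    Words unknown to the topic get a small floor prior."""
    priors = TOPIC_PRIORS[topic]
    return max(raw_scores,
               key=lambda w: raw_scores[w] * priors.get(w, 0.01))

# The classifier alone is nearly undecided between two signs...
scores = {"flour": 0.48, "flower": 0.52, "cards": 0.0}
print(pick_word(scores, "cooking"))  # the cooking prior tips it to "flour"
print(pick_word(scores, "games"))    # without that bias, "flower" wins
```

This is just a prior-weighted argmax; a production system might instead condition a language model on the conversation so far, but the effect is the same: ambiguous gestures resolve to the topically likely word.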
It's all very well to provide a system that allows a deaf person, or someone who can't speak, to communicate with someone who doesn't know sign language. But what about communication the other way? What about the person who can't sign talking to the deaf person? This is something else we want to solve, and we have a few potential solutions we're currently working towards implementing. Lastly, we want to integrate the system with surrounding technology. We don't always realize it, but in the modern world we don't just interact with other people by speaking; we have devices such as the Amazon Alexa that we speak to. And it seems fairly obvious to us: why don't we integrate with those? Why don't we allow someone who can't speak to also sign to hands-free devices? That's something we're looking at doing as well. In terms of the design of the glove, we have designed it in such a way that there is an inner layer to house the hardware, and the hardware is always encapsulated and insulated, meaning it can be removed. Because I work with 3D-printed fashion and e-textiles, or smart textiles, we are currently combining 3D-printing techniques and embroidery to design and create gloves that are suitable for children and adults alike, without looking like an aid, but more like a fashionable piece that they can change according to their taste and style. I'm going to close the talk with the quote from which BrightSign derives its vision: we hope to give a voice to those who cannot speak. Thank you. Thanks very much, guys. Do we have any questions? If you do, throw your hand up and I will come over to you. And keep them up as well so I can find you again. Yes, you mentioned different languages and context. From what I understand, there are also different sign languages. There's British Sign Language, but also other sign languages.
How are you going to deal with that? So that's actually kind of a non-issue with the technology we've already developed. As I said towards the beginning of the talk, part of the whole point is that you can add in any custom signs you want, including an entire language if you wanted to. Currently the glove is based on ASL; however, we are looking to ship with other sign languages included as libraries as well. Does that answer your question? Yes, I think so. You also touched briefly on context and languages. Does that mean that if you're using phrases or colloquialisms, it will be able to pick up on that? I mean, is that part of the teaching thing? So, given that we're talking primarily about sign-to-spoken-language translation here: in terms of training, if you have a sign for a colloquialism that isn't included, you can just add it. There's nothing to stop you from doing that at all. I wanted to ask: when you were signing, it was picking up the words too. I've worked with BSL before. In England we've got BSL, and we also have Sign Supported English. But in BSL there isn't a sign for words like 'the', 'is', 'to'. How does that work with the glove and interpreting that? And then word structure as well: in sign language we'd say 'your name, what', rather than 'what is your name?' So, I am not by any means fluent in any form of sign language; I'm a tech, I build things. However, from my understanding, what we're solving there is essentially a language-specific problem, right? No one ever says the word 'what' on its own, and so when you have a library for British Sign Language, that sort of level of abstraction is included in there. Does that make sense? Does that answer your question? I mean, you can just say 'what' by itself. What? In the context of the sentence, you're never going to say 'what's your name' that way; it doesn't make grammatical sense in English.
And so when we're putting together that whole representation of a sentence, just to clarify, it isn't just a dumb sign-by-sign thing; we are interpreting it as an entire sentence. Okay. Yeah, that was okay. Sorry. Thanks. Anyone else with a question? Okay, a round of applause. Thank you very much.