For some time now we've had apps that can learn to recognise your face and unlock your phone or pick you out in a photo. But what happens when the same technology is used to pick you out in a crowd, track your movements, or even make a judgement call about what sort of person you are? Facial recognition is attracting more and more attention for its potential downsides, and for good reason. No one likes a technology that potentially threatens what's important to them, including their safety, their autonomy and even their liberty. But navigating the risks and the ethics of facial recognition can be tricky.

For instance, facial recognition tech that benefits you as an individual would seem to make sense: unlocking your phone, say, or proving that you are indeed who you say you are. But what if the tech only works for some people and not for others? If your face doesn't fit the tech, would that be fair? Especially if we get to the point where your face becomes your passport to buying groceries, boarding planes or even voting.

Okay, so this doesn't sound so good. But what if we eventually have apps that use facial recognition to tell us when we're getting sick, for instance, or when we're mentally run down, or even when we need to seek help from others? This could be a real boon for helping us stay healthy and well, unless of course you only get advice you can trust if you have the right skin colour or just the right facial features.

Challenges like this can be overcome if developers are as responsible as possible in avoiding bias in facial recognition apps and develop them to work equally well for all users. But what happens when bias becomes a feature rather than a flaw? Imagine, for instance, an app that claims to be able to tell whether you're lying, or whether you've committed a crime, or even whether you might commit one, just by scanning your face. Now imagine that this app gets things wrong, or that it reflects the biases of the people who created it, so that if they think you look like a liar or a lawbreaker, that's what the app confirms. And imagine that this app was then used by employers or law enforcement as judge and jury. In this way facial recognition has the potential to reinforce our very human biases and prejudices rather than reduce them. This is a big issue in how facial recognition is used, and one that it's going to take a great deal of care to avoid. And it's only going to be compounded if the technology ends up being used to monitor our every move, no matter where we are or what we're doing.

But beyond bias, what if there were ways in which facial recognition could tell us more about ourselves than is possible with our human senses alone? For example, what if we had an app that could simply scan a crowd of people and tell who has cancer, who's struggling with mental illness, who's sad, who's happy and who is persuadable? I'm edging into speculative territory here, but researchers are actively exploring how facial recognition can reveal what is otherwise hidden to us, and how this can be done just from photos and videos of people. In some cases this might actually be beneficial. But capabilities like this also come with some really large questions around who has the right to analyse you without your consent, especially if they then use the information for their benefit rather than yours. And it opens the door to social manipulation, where your face becomes the key to somebody else's success.
The bottom line here is that while there are many potential benefits to facial recognition, there are also a lot of ethical hurdles to overcome if the tech is to be developed and used responsibly. And we should all have a say in how the technology is used, and how it is not. For more information on the risks and ethics of facial recognition, do check the blurb below, and as always, stay safe.