For better or for worse, our world is in the midst of a silent algorithmic revolution. Many of the decisions that humans once made are being handed over to formal mathematical models. Today we expect algorithms to provide us with the answers. Who to date? Where to live? How to deal with economic problems? With the right algorithms, the idea goes, computers can drive cars better than humans, trade stocks better than Wall Street traders, and deliver the news we want to read better than news publishers. As Karen Yeung of King's College London proposes, just as social theorists have identified markets on the one hand and bureaucratic hierarchy on the other as two quite different ways of coordinating social activity, algorithmic regulation can be understood as a third form of social ordering, one that is different because its underlying logic is driven by the algorithm, which is mathematical. More and more authority is shifting to these automated systems, and it is important for us to ask what the consequences of that shift are.

The fundamental issue is the interaction between the informal world of people and the formal world of computers, and the attempt to translate between these different systems of organization. With cloud computing and advanced analytics, we are extending computing out into the real world like never before. Through these platforms, and the algorithms that operate on them, we are bringing more and more spheres of our social, economic and technological life into coordination. As we do this, we are taking an informal world that has evolved over a prolonged period and bringing it into the world of formal systems.

The mathematical and scientific framework out of which we build our algorithms is unfortunately not a universal language but very much a partial one, heavily dependent upon a reductionist paradigm that limits its capacities. The resulting models are not in any way a neutral interpretation of reality. Just as data can deceive, models can deceive. All paradigms and theories are only partial accounts of reality, and the models that derive from them are never neutral: they reflect the particular paradigm upon which they are based. All of our mathematical and scientific frameworks are incomplete. All models are built on opinions and perspectives about the way the world is; some are better than others, but none are complete. The mathematics we know is beautiful, pure and logically consistent until we look out the window and realize that the world is not full of triangles, squares and smooth curves, and that is a continual problem as we take these models out into the world.

Even when the process is automated, the algorithms used to process data are imbued with particular values and contextualized within a particular scientific approach. This is true not just at the fundamental level of the underlying science but also at a more practical level. Algorithms don't do anything on their own. They reflect the social and institutional truths of the world in which we live. If, for example, society is racist, then that will be in the data, the algorithms will pick up on it, and they will pick up and amplify any other bias as well.

Take, for example, an employer trying to figure out who to hire. Given that men have historically been more successful in certain careers than women because of all kinds of institutional bias, an algorithm trained on that history is likely simply to reflect it: it tells you to hire these men, because they have been more successful in the past.
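As a minimal sketch of how this happens, consider a toy model fitted to synthetic, deliberately biased hiring records. Everything below is an illustrative assumption rather than real data; the point is only the mechanism.

```python
# A minimal sketch, assuming synthetic data: a model fitted to biased
# historical hiring records reproduces the bias. Nothing here is real data;
# the feature names and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Candidate features: a skill score and gender (1 = male, 0 = female).
skill = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)

# Historical decisions: driven partly by skill, but with an institutional
# bias that favoured male candidates independently of skill.
p_hire = 1.0 / (1.0 + np.exp(-(1.5 * skill + 1.0 * gender - 1.0)))
hired = rng.random(n) < p_hire

# The model is never told to discriminate; it simply fits the past.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Score two candidates with identical skill but different gender.
candidates = np.array([[0.5, 1], [0.5, 0]])
p_male, p_female = model.predict_proba(candidates)[:, 1]
print(f"P(hire | male, skill=0.5)   = {p_male:.2f}")
print(f"P(hire | female, skill=0.5) = {p_female:.2f}")
# The gap between the two scores is the inherited bias, now dressed up
# as an objective-looking number.
```

The model contains no racist or sexist rule anywhere in its code; the bias lives entirely in the training data, which is exactly what makes it so easy to overlook.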
There are, then, many ways in which algorithms can reflect both our incomplete knowledge and the biases we live with every day, while hiding them behind a guise of neutrality and objectivity. And because of the scale, scope and power of the technology, these flaws can have mass effects, something we will unfortunately learn more about as we build out the IT infrastructure of our global economy.

The negative externalities of these algorithms are outlined in a recent book by Cathy O'Neil called Weapons of Math Destruction. She defines weapons of math destruction as mathematical models or algorithms that attempt to quantify socially important qualities, such as creditworthiness, teacher quality, insurance risk, college rankings, employment application screening, policing and sentencing, and workplace wellness, but that have harmful effects and often reinforce inequality, keeping the poor poor and the rich rich. In the book she recounts the stories of people who have been marked as low-ranking in some way by an algorithm: the competent teacher who was fired due to a low score on a teacher assessment tool; the people whose credit card spending limits were lowered because they made purchases at certain shops; the college student who couldn't get a job at a grocery store because of his answers on a personality test. The algorithms that judge and rate them are completely opaque and cannot be questioned, and those affected often have no capacity to contest the outcome when the algorithms make mistakes.

The author lists three common characteristics of these weapons of math destruction: they are often proprietary or otherwise shielded from prying eyes, so that they are in effect black boxes; they affect large numbers of people, increasing the chances that they will get something wrong; and they have negative externalities on people and society. Most platforms are privately owned enterprises and do not wish to expose the internal workings of their algorithms to the end user. Added to this, the complexity of these systems often overwhelms people's capacity to comprehend them. In this respect, subprime mortgages are perfect examples of weapons of math destruction: most of the people buying, selling and even rating them had no idea how risky they were. This of course extends to the whole of the financial market, where one can only speculate about the algorithms that might be out there.

Machine learning algorithms operate in high-dimensional spaces, processing possibly millions of parameters, which is hard for us as humans to comprehend. Communicating such things to humans will require new uses of visualization, so that people can quickly and intuitively understand how a system works. The only sustainable way to develop these technologies is by keeping people informed and engaged. If we want to develop these technologies in a sustainable way, then we need a system of design that includes transparency and accountability. That means integrating the language of the machine and that of humans by creating visualizations and other methods that quickly and intuitively communicate what the underlying technology is doing.
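One hedged sketch of what such a transparency tool might look like (the technique, permutation importance, is a standard one, chosen here as an illustration rather than as anything a particular platform actually uses; data and feature names are invented):

```python
# A sketch of one transparency technique: permutation importance measures
# how much a model's accuracy drops when each input column is shuffled,
# turning an opaque model into a human-readable summary of what it relies on.
# The data and feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2_000
feature_names = ["income", "zip_code", "age", "noise"]
X = rng.normal(0.0, 1.0, (n, 4))

# The outcome secretly depends on income and, worryingly, on zip code --
# a common proxy for race and class in real decision systems.
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0.0, 0.5, n)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A crude text "visualization": the point is legibility, not sophistication.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name:>8} | {'#' * max(0, int(score * 100))} {score:.3f}")
```

Even a summary this crude would let an affected person, or a regulator, see at a glance that a proxy variable like zip code is driving the decisions.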
Technological crises inevitably occur when technology becomes too complicated, too coupled, too opaque and obfuscated, and something goes wrong in the system.

The other issue is that of decontextualization, which results from the narrow form of intelligence that analytics represents. The problem with analytics is that it decontextualizes: by focusing on things, it isolates them from their context and leaves them open to misinterpretation. By looking at something from one perspective we can gain greater detail from that perspective, but we can also lose the relevant connections that give it its full meaning and implications. Advanced data analytics enables us to see further, to focus more clearly, to pick out the needle in the haystack. However, the more powerful we make the telescope, the more focused we become and the more decontextualized the information becomes. This can be an issue, as it becomes easier and easier to optimize for a single parameter while creating more and more negative externalities on the other metrics that are not captured, due to the narrowing of our vision.

Finance is a good illustration of this. Because of the quantitative and complex nature of financial markets, finance has probably been the most advanced user of algorithms, and it is a good illustration of where we are heading with the technology. Finance is, obviously, very focused on optimizing for monetary outcomes. Real-world economic goods like food and energy get brought into the financial system and made available for trading on global markets; algorithms then operate on them with the sole focus of optimizing for profit. The consequence can be food riots in Egypt when the price of grain goes too high, or elderly people in Canada who cannot afford the price of their heating gas during winter because of speculation. As we start to connect everything up through cloud platforms, we are increasingly operating within very complex systems, and narrow algorithmic optimization in one place can lead to unintended consequences in others.

It is important to note that algorithms are analytical tools, built out of the analytical capacities of digital computers, and analytics always acts on data: all algorithms take in data and perform some operation on it. But data is always from the past; there is no such thing as data from the future. The implication is that these models can only tell us about a future that resembles the past. Of course we can put all sorts of nonlinear and stochastic elements into the models to make them look more like what happens in the real world, but at the end of the day their essential reference point is past data, which makes them inherently conservative. This is fine if the system you are dealing with is in a normal state of development, but that is not always the case. Sometimes major changes happen, and the model is unlikely to be able to tell us much about them; this is exemplified by the extraordinarily poor predictive capacities of economic models in relation to major financial crises. To get a future that looks qualitatively different from the past, you need a theory; data and analytics are not going to help you with that. Data without theory can lock you into the past: analytics will always tend towards reinforcing past patterns. Data analytics systems do not know what might be, what could be, or what we might want to be, and as such they often create self-fulfilling path dependencies.
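A small sketch, on synthetic data, makes this conservatism concrete: fit a trend to a series with steady past behavior, then watch the forecast fail at a regime change that nothing in the history foreshadowed.

```python
# A sketch, on synthetic data, of how a model fitted to the past can only
# extrapolate the past. A series grows steadily for 100 periods and then
# suffers an abrupt regime change absent from all of its prior history.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120)

series = 0.5 * t + rng.normal(0.0, 1.0, 120)
series[100:] -= 30.0  # the break: a "crisis" the data never hinted at

# Fit a linear trend to the pre-break history, as a forecaster would.
slope, intercept = np.polyfit(t[:100], series[:100], deg=1)
forecast = slope * t[100:] + intercept

errors = forecast - series[100:]
print(f"Mean forecast error after the break: {errors.mean():+.1f}")
# The error is roughly the size of the break itself: the model faithfully
# reproduces the old pattern and is structurally silent about the new one.
```

No amount of refitting on pre-break data changes this outcome; only a theory about why the break might occur could.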
In an article in Harvard Business Review entitled "Learning to Live with Complexity," the authors write that in complex systems, events far from the median may be more common than we think; tools that assume outliers to be rare can obscure the wide variations contained in complex systems. In the US stock market, the ten biggest one-day moves accounted for half the market's returns over the past fifty years, yet only a handful of analysts entertained the possibility of so many significant spikes when they constructed their predictive models. This is one of the key problems with an over-reliance on analytical reasoning: it tells us that the future will be similar to the past, and because it often is, it lulls us into a false sense of security. Even though major changes happen rarely, they can be so large, and the incremental changes are typically so small, that the unpredictable paradigm shifts end up being more significant than all of the linear, incremental changes the system predicted so well.

To create real change, a change in paradigm, we need something qualitatively different. Visions, imagination and theories can inform us of futures that have never existed, while algorithms are not really designed to tell us about that. We can try to get computers to think outside the box, but this is not what analytical reasoning is designed for. It is like trying to drive a screw into a piece of wood with a hammer: you will get much better results in the long run if you invest in using the correct tool for the correct application.
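The concentration the HBR authors describe is easy to reproduce on simulated data. The sketch below draws heavy-tailed (Student-t) daily returns, an assumption standing in for real market data, and shows how much of the cumulative outcome rides on just the ten biggest days.

```python
# A sketch with simulated data (not real market returns): heavy-tailed
# Student-t daily returns over roughly 50 trading years, showing how much
# of the cumulative outcome depends on the ten biggest single days.
import numpy as np

rng = np.random.default_rng(3)
days = 50 * 252

# Fat-tailed daily returns; the clip guards against pathological draws
# below -100% that the raw t-distribution can in principle produce.
returns = np.clip(0.0005 + 0.01 * rng.standard_t(df=3, size=days), -0.5, None)

total = np.prod(1.0 + returns) - 1.0

# Remove only the ten largest single-day moves and recompute.
trimmed = np.delete(returns, np.argsort(returns)[-10:])
without_top10 = np.prod(1.0 + trimmed) - 1.0

print(f"Cumulative return, all days:     {total:+.1%}")
print(f"Cumulative return, minus top 10: {without_top10:+.1%}")
# Ten days out of ~12,600 move the outcome dramatically -- exactly the
# concentration that a thin-tailed (normal) model treats as negligible.
```

Under a thin-tailed normal assumption the largest simulated days are far smaller, so the same exercise changes the outcome much less, which is precisely why models that assume outliers are rare understate the role of extremes.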