 The rise of smart systems represents a natural evolution to our information technology. But with this next generation of information systems, we're both vastly expanding our technological capabilities and also consolidating and handing over an extraordinary amount of power to these algorithmic systems. And because of this, there needs to be major consideration given to the appropriate use of that control and power, along with more traditional concerns surrounding security and access. The scale of the risk involved in this mass automation is unprecedented as our critical infrastructure becomes automated, networked and remotely controlled via common smart platforms. Today, a typical car's airbag, steering and brakes can all be hacked and controlled through the internet for malicious ends. Control systems in nuclear power plants may be broken into and with the rollout of IoT platforms, software will soon be permeating all types of technology as our critical infrastructure becomes increasingly dependent upon it. Autonomous agents can be understood as essentially advanced optimization algorithms. When we let a machine autonomously pursue goals, we don't know exactly what action it will take. With only a limited and narrow form of awareness that it's trying to optimize for a limited number of parameters, many negative externalities can result. For example, corporations are a form of agents within the free market capitalist system that is designed to optimize for shareholder value and financial returns. And we've long seen how this narrow focus on profit due to the structure of the incentive system can lead to negative environmental and social externalities. Indeed, it can be identified as a key driver of the current sustainability crisis. This illustrates how narrow analytical reasoning that is not supported by or operating within some broader form of awareness to the overall context often needs to negative externalities that create unsustainable results. We can say a system is under control and operating in a sustainable fashion when its actions are integrated with the broader context. The problem with smart systems is their narrow analytical form of awareness as autonomous agents become more autonomous, given greater scope to define the means through which they achieve a given ends, there is greater potential for them to perform acts that are misaligned with the overall context in their narrow pursuit of their ends without overall awareness of the environment within which they operate. For this smart technology landscape to be developed in a sustainable fashion, there needs to be a systems of systems approach to control, where more narrow and specific forms of smart systems are nested within larger more general forms of awareness which are in turn coordinated and monitored by broader forms of human intelligence. A system is only really in control when awareness, responsibility and power are all aligned. This means the exercising of control through a multi-tiered framework with more intelligent and aware systems guiding systems that are lower in their capacity for information and knowledge processing. Whereas information and data may be growing at an exponential rate, this only works to make intelligence an increasingly scarce resource. Information technology, on the one hand, commoditizes information and data, driving its value right down, but because of this it also increases the value of knowledge and intelligence, making them scarce resources. Wherever there is a demand for a scarce resource, there is a hierarchy based upon access to that resource. This drives a new form of hierarchical structure that emerges out of the information revolution captured in the acronym D-I-K-W, which stands for data, information, knowledge and wisdom. Controlling these systems in a long-term sustainable and secure way means understanding this information hierarchy and building it into our systems technology so that this world of complex information systems that we're going into is governed and directed by true knowledge and insight of context and consequences. The rise of smart systems can be seen as a whole new level to our development of technology and like all technologies, it holds out the possibility to both enable us or to constrain us depending on how it's designed, developed and operated. Perhaps in the abstract, technology is a neutral thing, but all technologies have to go through a design and development process and how that process is carried out will determine to a large extent whether the technology is constructive or destructive in nature, whether it works to ultimately enable people or constrain them. It is possible to industrialise an economy without creating the negative environmental externalities that our particular set of industrial technologies created when we built them, thus they can't be said to be neutral. A combustion engine that emits toxic fumes is not a neutral thing, it is destructive in this sense. Industrial development may be inevitable and its evolution in the abstract may well be a neutral thing, but how we conduct that process of development is neither inevitable nor neutral, thus there is responsibility associated with it. The negative externality of smart systems is the potential for an excess of narrow analytical reasoning which smart systems represent a massive proliferation of and a lack of broader synthetic reasoning to balance and direct it towards constructive ends. The computer scientist Stuart Russell summarised this issue as such. This is essentially the old story of the genie in the lamp, or the sources apprentice or King Midas. You get exactly what you ask for, not what you want. A highly capable decision maker, especially one connected through the internet to all of the world's information and billions of screens and most of our infrastructure can have an irreversible impact on humanity. This is not a minor difficulty, improving decision quality, irrespective of the utility function chosen has been the goal of AI research, the mainstream goal on which we now spend billions per year. An excess of analytical reasoning and lack of synthetic reasoning could take us into a world where we have an extraordinary amount of technical capabilities and power without sufficient knowledge and wisdom to direct it effectively, the result being unsustainable outcomes. For the opportunities in smart systems to be realised and the negative externalities limited would require a concomitant, massive expansion in synthetic reasoning capabilities and the appropriate control and alignment of smart systems within larger, more intelligent frameworks of organisation, in such a way ensuring its correct alignment and ultimately the appropriate use of that power towards ends that are integrated with the broader context and thus sustainable in the long term. As far back as 1960 Norbert Weiner said, we'd better be quite sure that the purpose put into the machine is the purpose which we really desire, as the machines get smarter and more powerful it is our job to stay thinking about the context, to think about the overall desired outcomes and align the means with those. An expansion in technological means requires an expansion in human ends and an alignment between the two in order to develop in a sustainable way.