Artificial intelligence technology has evolved significantly over the past few years. Modern AI systems are achieving human-level performance on cognitive tasks like converting speech to text, recognizing objects in images, or translating between languages. This evolution holds the promise of helping us understand and perhaps solve the environmental, economic, and societal challenges of the 21st century.

Many of the breakthroughs in AI are obtained using deep neural networks, or DNNs. DNNs are complex machine learning models, loosely inspired by the interconnected neurons of the human brain, that are capable of dealing with high-dimensional inputs like photographs. But DNNs are susceptible to attack: an adversary can modify an input to produce an incorrect response. Adding an imperceptible amount of noise to an image can completely alter the internal activations in the DNN, leading it to misclassify the image, as the first sketch at the end of this section illustrates. Attackers can also evade face recognition systems by wearing specially designed glasses, or defeat visual recognition systems in autonomous vehicles by sticking patches onto traffic signs. This poses a real threat to the deployment of AI in security-critical applications.

IBM Research Ireland is working to mitigate these threats and is releasing the Adversarial Robustness Toolbox, an open-source library designed to help researchers and developers defend DNNs against attacks. Researchers can use the Toolbox to develop and benchmark novel defenses against state-of-the-art attacks. For developers, the library provides interfaces which support the composition of comprehensive defense systems using individual methods as building blocks.

Defending DNNs against adversarial attacks has three elements: hardening models, measuring robustness, and detecting adversarial inputs at test time; each element is illustrated with a short code sketch below. Common approaches for model hardening include filtering the inputs or changing the internal architecture of the deep neural network such that adversarial samples do not propagate through its hidden layers. The robustness of a DNN can be assessed by measuring the loss of accuracy on adversarial inputs; other approaches quantify how much the internal representations and output of the DNN vary when small changes are applied to its input. Finally, runtime detection methods can be applied to flag inputs an adversary may have tampered with by identifying the abnormal activations they cause. Ultimately, any defense system needs to stay one step ahead of the attackers who would try to bypass it. The Adversarial Robustness Toolbox provides a testbed for researchers to design comprehensive defense methods and for developers to deploy them in real-world AI systems.

The Adversarial Robustness Toolbox is publicly available on github.com. The release includes extensive documentation and tutorials to help researchers and developers get started quickly. Our ambition is to create a vibrant ecosystem of contributors that will stimulate research and development around adversarial robustness and advance the deployment of secure AI in real-world applications.
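To make the attack concrete, here is a minimal sketch of crafting adversarial images with the Toolbox's Fast Gradient Sign Method. The tiny PyTorch model, the random stand-in data, and the hyperparameters are all illustrative; module paths follow recent releases of the `adversarial-robustness-toolbox` package and may differ in older versions.

```python
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Illustrative CNN for MNIST-sized inputs (left untrained for brevity).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 26 * 26, 10),
)

# Wrap the model so ART can compute gradients through it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # stand-in images

# eps bounds the per-pixel perturbation; a small value keeps the change
# imperceptible to a human while still flipping the model's predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)

print("max per-pixel change:", np.abs(x_adv - x).max())
print("clean predictions:", classifier.predict(x).argmax(axis=1))
print("adversarial predictions:", classifier.predict(x_adv).argmax(axis=1))
```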
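For model hardening, the next sketch (reusing `classifier`, `attack`, and `x_adv` from above) shows the two approaches mentioned in the transcript: filtering inputs with a smoothing preprocessor, and adversarial training, which mixes adversarial samples into the training data. Class names and arguments are assumptions about recent ART releases, and the training data here is a random stand-in.

```python
from art.defences.preprocessor import SpatialSmoothing
from art.defences.trainer import AdversarialTrainer

# Input filtering: a median filter washes out high-frequency adversarial
# noise before the image reaches the network.
smoother = SpatialSmoothing(window_size=3, channels_first=True)
x_filtered, _ = smoother(x_adv)

# Stand-in training data for illustration only.
x_train = np.random.rand(64, 1, 28, 28).astype(np.float32)
y_train = np.random.randint(0, 10, size=64)

# Adversarial training: a 50/50 mix of clean and adversarial samples in
# each batch teaches the model to resist the attack it is trained on.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=5, batch_size=16)
```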
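Measuring robustness can be as simple as comparing accuracy on clean and adversarial inputs; ART also ships metrics such as `empirical_robustness`, which estimates the average minimal perturbation an attack needs to change the prediction. The sketch below reuses the objects from the earlier sketches, with stand-in labels.

```python
from art.metrics import empirical_robustness

y_true = np.random.randint(0, 10, size=len(x))  # stand-in ground truth

# The accuracy drop on adversarial inputs is the most direct signal.
clean_acc = (classifier.predict(x).argmax(axis=1) == y_true).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_true).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")

# Average minimal perturbation FGSM needs to flip a prediction;
# larger values indicate a more robust model.
rob = empirical_robustness(classifier, x, attack_name="fgsm")
print("empirical robustness:", rob)
```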
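Finally, for runtime detection, ART provides detector interfaces; one simple option is `BinaryInputDetector`, which trains a second classifier to separate clean from adversarial inputs. This sketch assumes that class and its `fit`/`detect` interface are available in the installed version; the detector network and the labeled data are illustrative.

```python
from art.defences.detector.evasion import BinaryInputDetector

# A second, binary classifier (clean vs. adversarial) serves as the detector.
detector_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 26 * 26, 2),
)
detector_classifier = PyTorchClassifier(
    model=detector_model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(detector_model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)
detector = BinaryInputDetector(detector_classifier)

# Label clean samples 0 and adversarial samples 1, then train the detector.
x_det = np.concatenate([x, x_adv])
y_det = np.concatenate([np.zeros(len(x)), np.ones(len(x_adv))]).astype(int)
detector.fit(x_det, y_det, nb_epochs=5, batch_size=16)

# At test time, flag suspicious inputs before they reach the main model.
report, is_adversarial = detector.detect(x_adv)
print("flagged as adversarial:", is_adversarial)
```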