Hello everyone and welcome to my presentation on taming computational complexity. My name is Samin Rf and I'm from the computer science department. I've been working on networks: structures like this, with nodes and edges. Edges are the lines that connect the nodes. Whenever we want to analyze any sort of connectivity, a network like this is the perfect model. We can model people and their relationships using networks, where nodes are the people, and if two people know each other we connect them with an edge. We can also model friends and enemies using a network with positive and negative edges.

In a network like this, an essential question is whether the enemy of an enemy is a friend. This seemingly easy question is fundamental to understanding the implications of positive and negative interactions, and it goes beyond the social context. In this network the enemy of an enemy is a friend, and we call it a balanced network. However, in many cases we're dealing with networks like this one on the right, which is not balanced. An interesting observation here is that only a few edges need to be removed to make the network balanced. Here, if we can somehow destroy the friendship between Sarah and George, then the network becomes balanced. We call the edges that need to be removed frustrated edges.

But the problem is that the precise analysis of many of these networks is not feasible: the computational complexity is beyond our most advanced technologies. To put that into perspective, enumerating the solutions for a network like this takes hundreds of years on the world's most powerful supercomputer running at full speed.

To find the frustrated edges, we can color the nodes black and white. Then a frustrated edge is either a positive edge whose endpoints have different colors, or a negative edge whose endpoints have the same color. This way, we can look for a node coloring that minimizes the total number of frustrated edges.
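The coloring rule above can be sketched in a few lines of code. This is a minimal illustration, not the algorithm from the talk; the edge list, node names, and function name are hypothetical, and signs are encoded as +1 for friendship and -1 for enmity.

```python
def frustration_count(edges, coloring):
    """Count frustrated edges under a black/white coloring.

    A frustrated edge is a positive edge whose endpoints have different
    colors, or a negative edge whose endpoints have the same color.
    `edges` is a list of (u, v, sign) tuples with sign = +1 or -1;
    `coloring` maps each node to +1 (white) or -1 (black).
    """
    count = 0
    for u, v, sign in edges:
        same_color = coloring[u] == coloring[v]
        if (sign > 0 and not same_color) or (sign < 0 and same_color):
            count += 1
    return count

# The enemy-of-an-enemy triangle: A and B are enemies, B and C are
# enemies, and A and C are friends. This network is balanced, so a
# suitable coloring yields zero frustrated edges.
triangle = [("A", "B", -1), ("B", "C", -1), ("A", "C", +1)]
print(frustration_count(triangle, {"A": +1, "B": -1, "C": +1}))  # prints 0
```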
If we represent the number of nodes with n, then there are two to the n possible colorings; that is our problem space. In this network we can color the nodes like this, which leads to one frustrated edge, but finding the optimal solution in non-trivial examples is not easy. Suppose the three-dimensional space of this lecture theater is our problem space; then finding the optimal solution is as difficult as hitting a static, invisible target that could be anywhere in this lecture theater.

Let's see some real examples. Here is a network of New Guinean tribes and their positive and negative relationships. The green and red edges represent alliances and enmities between these tribes. Here we have 16 nodes, so there are two to the 16 possible cases that need to be checked by the computer. Here is another network representing a group of monks and their positive and negative relationships. Here we have 18 nodes, so there are two to the 18 possible cases. In other words, when we increase the number of nodes by just two, the number of cases to be checked grows by roughly 200,000. This phenomenon is what we call computational complexity.

Many researchers in the past have attempted to solve this problem at very small scales, with a problem space the size of a shoebox. Their algorithms are based on random search. This is like throwing a bouncy ball with infinite energy in a random direction: as it bounces off the walls, it will eventually hit the target. But these random search algorithms are not practical for a problem space larger than a shoebox. Here we have 329 nodes, which means we are dealing with a number so enormous that I can't even pronounce its name. My research looks ahead to the extreme limits of solving hard optimization problems, where enumerating the solutions takes hundreds of years.
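The brute-force search described above, checking all two to the n colorings, can be sketched as follows. This is only an illustration of why enumeration blows up, not the method developed in this research; the function name and example network are hypothetical.

```python
from itertools import product

def min_frustration(nodes, edges):
    """Brute-force search over all 2^n black/white colorings, returning
    the minimum number of frustrated edges. Only feasible for small n."""
    best = len(edges)
    for colors in product((+1, -1), repeat=len(nodes)):
        coloring = dict(zip(nodes, colors))
        frustrated = sum(
            1 for u, v, sign in edges
            # Frustrated: a positive edge with differently colored
            # endpoints, or a negative edge with same-colored endpoints.
            if (sign > 0) != (coloring[u] == coloring[v])
        )
        best = min(best, frustrated)
    return best

# A triangle with two friendships and one enmity is unbalanced:
# at least one edge stays frustrated under every coloring.
print(min_frustration([0, 1, 2], [(0, 1, +1), (1, 2, +1), (0, 2, -1)]))  # prints 1

# The growth mentioned in the talk: going from 16 to 18 nodes adds
# 2**18 - 2**16 = 196,608 (roughly 200,000) extra cases to check.
print(2**18 - 2**16)  # prints 196608
```

For the 329-node network from the talk, this loop would have to visit two to the 329 colorings, which is why enumeration is hopeless there.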
We didn't have a hundred years, but we had the determination to tame the complexity and solve these problems in large networks like this one, which represents the interaction map of a white blood cell. So we decided to start with three years of thinking, simulating, and experimenting, in the hope of getting some discount on the remaining 97 years. In these three years we discovered and took advantage of the hidden structures of these networks. We successfully restricted the problem space to a cone-shaped structure. Now if we throw the bouncy ball into the cone, it converges and hits the target very fast.

Our main achievement is that, using a standard university-issued computer, our algorithm can analyze very large networks in a few minutes. Besides making new computations possible, our algorithm is the only one that comes with a guarantee of solution quality. This is equivalent to building a machine that checks and verifies something much greater than the number of atoms in the universe.

Let's see some other examples. Here is a portfolio of 11 investments. The green and red edges represent positive and negative correlations between these investments. Financial experts come up with portfolios like this that perform well under different market conditions, and our algorithm provides a measure of predictability that explains their performance. Here is a fullerene, a molecule made of carbon atoms. Fullerenes are closely related to carbon nanotubes, which have very interesting properties. Compared to steel they are 10 times stronger and 6 times lighter. This inspires thinking about advanced materials for building bridges and airplanes in the future. They also have fascinating electrical properties that lead to the idea of making nanowires. But there are many challenges involving the chemical stability of fullerenes that are yet unknown to us, even after multiple Nobel Prize-winning discoveries in this field.
Well, the output of our algorithm provides a measure of predictability, and a measure of chemical stability for a fullerene graph that was thought to be impossible to compute. Here is an example from physics: a model representing patterns of atomic magnets. In a model like this, an essential concept is the minimum energy state, which can be computed using our algorithm.

So is it just another high-performance algorithm? No, this is a game changer in many fields of research. We can now better understand the interactions among numerous genes, and gain valuable insights into the energy of magnets and the stability of nanomaterials. This also allows us to answer essential questions in business, like the predictability of a financial portfolio. Here we have a network of positive and negative international relations between countries. Using the algorithm, we can now simulate numerous scenarios leading to polarization of countries. This is essential in preventing a cold war situation, where the enemy of an enemy is a friend. Thank you.