It's a great honor for us to be joined by Stephen Wolfram. Stephen, as many of you know, is a computer scientist and a physicist, and the founder and CEO of Wolfram Research, where he designed Mathematica, the Wolfram|Alpha engine, and the Wolfram Language. Stephen is a prolific writer and speaker, and he's recently on a renewed quest to find a fundamental theory of physics. So why did we invite Stephen? Stephen is connected to our conference and our community in at least two important ways, I think. As you may know, re:Clojure's driving theme this year has been data science, so that's one connection. But Stephen is also the designer of the Wolfram Language, which, as I mentioned before, is a kindred language in being symbolic and functional. So we definitely have some ancestral interests that I think Stephen is going to talk about today. Well, enough of me chatting. Let me introduce you to Stephen Wolfram. Hi. Nice to be with you all. Well, I was just looking at my archive, and I see a message from 2008 saying, "Clojure seems like an interesting language." So I've been looking forward to learning more about Clojure for a great many years, and it's been nice seeing a bit more about Clojure. It's great to be with you all today. As Renzo mentioned, I've been involved in language design for a long time. The first big language I designed was in 1979, a language called SMP, the Symbolic Manipulation Program, which was a kind of forerunner of Mathematica. Mathematica came out in 1988. Mathematica is basically Wolfram Language; Wolfram Language is its modern name. I've been interested in language design all that time. In fact, I've been doing language design pretty much every day for the last 40 years, and in the last few years I've even been live-streaming many of the language design meetings we've been having, so people might find those interesting.
So how come I've been doing language design for all those 40 years? When one thinks about typical programming languages, one might think they're small things: after a year or two, a few years, the main design is done. But actually our goal has been very different from that. Our goal has been not to make a traditional programming language, but to make what I call a computational language, and I'll try and explain a bit about what I mean by that. I should also say, in terms of the history, that when I started designing SMP, I did it, first of all, because I wanted a tool to use to do physics, which I was very interested in at that time, and I wanted to make a system that was as general as possible. My inspiration in making a computer language was to use ideas from natural science. In natural science, one has all these phenomena in the world, and the goal tends to be to drill down and find out what are the underlying essential primitives that make up all the things we see. That was my approach to designing a language: look at all the computations one might want to do, drill down to find the primitives that are most effective in defining those computations, and then build up from those primitives. When I started that design process back in 1979 or so, I went back and studied a lot about mathematical logic, and in fact many of the same inspirations that John McCarthy had in the design of LISP, whether lambda calculus, production systems, or combinators, were also influences on me. The other big influence on me was APL. Those were the two big language influences on me. But so what is the point of a computational language?
The point of a computational language, as far as I'm concerned, is to have a way to represent as many things in the world as possible computationally. That's a little different from the objective of a programming language. With a programming language, the story is: we've got a computer and it does certain things; let's make sure we can tell the computer as effectively as possible what it should do, even when we want it to do lots of things. With a computational language, the goal is ultimately to represent things in the world, and things we care about thinking about, in computational terms, both as a way to communicate those things to a computer and as a way to understand those things ourselves. I view it a little like the objective that mathematical notation had, starting maybe 400 years ago now. Mathematics had been something you described in words; then there started to be streamlined notation, like plus signs and equals signs and so on, and that streamlined notation was pretty important for the launching of what became the mathematical sciences. I see the goal of our computational language today as being a notation for computation that can be used to launch the "computational X" fields, in the same kind of way that mathematical notation launched the mathematical sciences. One of the things that's very different between a programming language and a computational language is that in a computational language we want to have as much knowledge about the world as possible. It's not just a question of knowing how one's computer is supposed to do things; it's also a question of knowing lots of things about the world. Well, maybe without further ado, I should actually start showing some things. So this is a notebook. We invented the notebook idea back in 1987, just before Mathematica Version 1 came out. I guess notebooks have now finally become popular.
I view that as being one of the most trivial ideas in what we built in Wolfram Language. But so what is this language? The first thing to understand about it is that it's symbolic. If I type in x, it's just x. If I type f of x, it's just f of x. I can do things like NestList of f, x, 10, and I'll get a sequence of nested applications; it's all just symbolic. Or I could use something that immediately means something: say the function Framed, and I'll get x drawn in a frame. One of the things that's important about having this symbolic language is that that x can be anything; it can represent whatever I want it to represent. For example, let's make a random graph, 100 nodes, 200 edges; there's my graph. Now I can make a community graph plot from that graph. Notice I can just put that graph in as an argument of the function, because the graph is just a symbolic thing, like the x was. Or, for example, let me pick up an image here; there's my image. I can say edge detect that image, and I'll get the edge detection of it. Or I can say NestList of EdgeDetect on that image, say 10 steps, and now I've got a big mess of repeatedly edge-detected images. So I'm just using that same kind of mechanism. Another thing to understand: if I want a function, I can give NestList a pure function. There are many different notations for pure functions. Let's say I have a function, say x squared mod 100 or something, and then let's do that.
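A rough reconstruction of the notebook inputs being described here, as Wolfram Language code (a sketch, not a verbatim capture; `img` stands for whatever image was pasted in):

```wolfram
NestList[f, x, 10]               (* {x, f[x], f[f[x]], ...}, purely symbolic *)
Framed[x]                        (* x rendered in a frame *)

g = RandomGraph[{100, 200}];     (* 100 nodes, 200 edges *)
CommunityGraphPlot[g]            (* the graph is just another symbolic argument *)

NestList[EdgeDetect, img, 10]    (* repeatedly edge-detect an image *)
NestList[Mod[#^2, 100] &, 3, 30] (* a pure function as the thing to nest *)
```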
Actually, I don't know if this is going to be very interesting. Let's start this off from 3 or something, and do it 30 times. Okay, this is totally boring. I could say FindTransientRepeat (the percent sign just means the most recent output), and that will just find the repeats in there. Okay, let me do something different here. Let's make a table, with i going up to 100, of the thing I was thinking about back there. So that will make a set of rules that say 2 goes to 4, 3 goes to 9, et cetera. I can make that into a graph, if I want to, just by saying Graph of that. There's the graph. Now, for example, let's say I want to make that 100 a variable n. That's now a symbolic expression, so I can say Manipulate of it, with n going from, say, 10 to 200 in steps of one. And now I'll get a dynamic interface where the graph layout is regenerated for each value of n. Okay, let me save this. There we go; I'll push this to the cloud so people can play with it themselves. I should say that what I'm running here is a desktop version of Wolfram Language. There's also a cloud version, and I could be running this notebook just in a web browser in the cloud; I'm doing it locally because that's convenient. Okay, actually, you know what, just for fun, let me push something to the cloud. What the heck? Let's say I want to CloudPublish, for example, an API that will take an integer n.
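The table-of-rules example from above, written out (the exact modulus is an assumption, based on the earlier x-squared-mod-100 function and the "2 goes to 4, 3 goes to 9" description):

```wolfram
rules = Table[i -> Mod[i^2, 100], {i, 100}];   (* {1 -> 1, 2 -> 4, 3 -> 9, ...} *)
Graph[rules]                                   (* the rules drawn as a graph *)

(* make the upper bound a variable and explore it interactively *)
Manipulate[
 Graph[Table[i -> Mod[i^2, n], {i, n}]],
 {n, 10, 200, 1}]
```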
Well, we'll say it's of type Integer there. And what we compute is one of those graphs. So this is going to be a pure function, an anonymous function, a lambda, whatever you call it, using that thing called n. Okay, let's do this and see what happens. Actually, let me not make it an APIFunction; let me make it a FormFunction, just because it'll be a little easier to see what's going on. Okay, so what I did there was publish this to the cloud, and I got a URL. If I go to that URL, there's a field for n, and I can type, let's say, 99, press submit, and hopefully, there we go, we get a result. So that little fragment of code was pushed into the cloud, and it's running against the Wolfram Engine in the cloud. Okay, since we're talking about language design, let's talk a little about things like how you would define a function. Fundamentally, the operation that Wolfram Language performs is to take symbolic expressions and make transformations on those symbolic expressions. So, for example, I can say fib of 1 equals fib of 2 equals 1. Now if I ask what's fib of 2, it'll say it's 1, great. If I ask what's fib of 3, it'll say, in effect, "I don't know"; it's just the symbolic thing fib of 3. Equally, I could say fib of x to the fifth minus five or something, and it'll just say it doesn't know what that is; it's just a symbolic thing.
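The cloud deployment just shown, sketched in code. The form body is my reconstruction of "compute one of those graphs", not a verbatim capture, and the rule table inside it is the same assumed example as before:

```wolfram
CloudDeploy[
 FormFunction[{"n" -> "Integer"},
  Graph[Table[i -> Mod[i^2, #n], {i, #n}]] &]]
(* returns a CloudObject URL; visiting it gives a web form with an n field *)
```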
But now what I can do is say fib of n blank, colon equals, fib of n minus one plus fib of n minus two. Okay, what does this mean? That blank just stands for any expression. Remember, all the system is doing is applying transformation rules to symbolic expressions. The blank stands for any expression, the n names that thing, and the right-hand side then uses it. So now if I type fib of 10 or something, I'll just get the result. If I wanted to memoize this, I would write fib of n, colon equals, fib of n equals that (the colon equals is a delayed assignment); then I can compute fib of 100 or something, and it will be memoizing each of those steps. This idea that blank stands for anything has many generalizations, but let me clear fib so I don't make a big mess here. Let's say I define something like f of x blank, y blank, y blank, colon equals g of y, x. Okay, what does that mean? If we have something of the form u comma 5 comma 6, it'll say, in effect, "I don't know what that is"; it doesn't match. If I say u comma 5 comma 5, then it will use the transformation rule we defined for that pattern, because the repeated pattern name has to match the same expression. Now I have to say that in my own use of the language I hate defining things like this; I much prefer to have everything be as immutable as possible. But we can use patterns in other ways too, for example with Cases on a table. Here, let's make an array of primes.
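The fib definitions spelled out. The second, three-argument pattern example wasn't fully captured in the recording; the version below, with a repeated `y_`, is one plausible reading of "u, 5, 6 doesn't match but u, 5, 5 does":

```wolfram
fib[1] = fib[2] = 1;
fib[n_] := fib[n - 1] + fib[n - 2]   (* n_ matches any expression, named n *)
fib[10]                              (* 55 *)

(* memoized: each computed value is cached with an immediate assignment *)
fib[n_] := fib[n] = fib[n - 1] + fib[n - 2]
fib[100]                             (* 354224848179261915075 *)

Clear[fib]

f[x_, y_, y_] := g[y, x]   (* repeated names must match identical expressions *)
f[u, 5, 6]                 (* no match: stays f[u, 5, 6] *)
f[u, 5, 5]                 (* matches: g[5, u] *)
```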
So here we'd say Array of Prime, 100, and that will make an array of the first 100 primes. Oh yes, I should explain something else that's important here. That thing Prime is just a function head, like f. We can say f of x, f of x of y; we can do all kinds of things; the head can be an arbitrary expression. But when that head is Prime, it's a known thing: Prime of 6 is the sixth prime number. When I say this, it's equivalent to saying something like Prime of hash sign, ampersand; that thing is just a pure function, just a lambda. I could equally well write it out as Function of x, Prime of x, or in the other notation; they're all the same. And Prime itself is just a head, an object in its own right: I could say f equals Prime if I wanted to, or just take that head and apply it to some number, and get, say, the 666th prime. Okay, so that's a little bit on the language. The big story of this language is that there are about 7,000 of these built-in primitives, things like Prime, that try to capture not only computational kinds of things but things about the world. So, for example, we could say: give me a list of words. By default that just gives us a list of words in English. We could take the first letter of every word in the words of English, and then say WordCloud of that, and we get a word cloud showing the relative frequencies of first letters in English. And if we want to try something a little different, instead of English we could pick another language here; we have data on lots of languages.
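The Prime examples as code; the various pure-function forms shown here are interchangeable ways of writing the same thing:

```wolfram
Array[Prime, 100]       (* the first 100 primes *)
Prime[#] &              (* pure function, slot-and-ampersand form *)
Function[x, Prime[x]]   (* the same function written out *)
Map[Prime, Range[100]]  (* same result as the Array *)
Prime[666]              (* the head applied directly: the 666th prime *)

(* first letters of English words, weighted by frequency *)
WordCloud[StringTake[#, 1] & /@ WordList[]]
```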
I don't know, let's pick Russian, for example. Okay, there we go; there's the corresponding result for Russian. Okay, so knowing about the world: what kinds of things do we know about the world? Well, let's look at something else. This idea of symbolic expressions means we can represent real things in the world, not just x in some polynomial. We could work out some symbolic integral where the x stands for an algebraic variable, but the x can just as well stand for some real thing in the world. It could stand for, I don't know, London, for example. Now, what I'm doing here is typing that input in natural language. That little input device, control equals, uses our Wolfram|Alpha technology stack to do natural language understanding, to take the thing we enter as natural language and turn it into an entity. So this thing here is just a symbolic object. For example, if I ask what it's actually made of: it's an entity of type city, London, Greater London, United Kingdom. But this entity is something we know things about. For example, we can take London and ask for its population, and there we go: it gives us a number of people. Maybe we could even say, I don't know how much data it will have on this, show us the population over time. Okay, so it's got 27 data points; it gives us a time series. Now we can say DateListPlot of that time series. Again, it's just a symbolic expression; I could open it up. Any one of these, just to be clear, is just a symbolic expression. And what is that symbolic expression? Well, if we wanted to, we could say: show that in full form.
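The London example in explicit code. In the talk this was entered through the natural-language input box; the `Dated` property qualifier for the historical series is my assumption about the underlying form:

```wolfram
london = Entity["City", {"London", "GreaterLondon", "UnitedKingdom"}];
london["Population"]                     (* current population, as a Quantity *)
ts = london[Dated["Population", All]];   (* historical values, as a TimeSeries *)
DateListPlot[ts]
```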
The slash slash I'm using here is just an input notation. When I say f of x, that's one notation; f at x means exactly the same thing; x slash slash f means exactly the same thing. It's just a convenient way to write it. The full form of that is this nested expression. And, for example, if we wanted to, we could say: show us the tree form of that expression, and there it is. That's how the language thinks about any kind of expression, and this tree form is the thing on which pattern matching operates. The whole operation of the language is: you take a collection of transformation rules defined by patterns, and you keep applying them until you reach a fixed point, and that's the result. Maybe later I'll talk a little more about all the subtleties of the evaluation process; that will segue into talking about the fundamental theory of physics, actually, and we'll see how that comes out in a minute. But let's go back here. We had worked out the population of London; let's see, that was on line 48. That's showing the population of London as a function of time, based on the curated data that we have. We've been collecting data about lots of kinds of things for many, many years now, and we have a huge amount of data, including all sorts of real-time feeds and so on. But okay, let's do something else. Let's say something like "capital cities in Europe". Now we're using natural language as a quick way to enter something we could have entered precisely. This gives us the countries in Europe, as a symbolic object, and then the capital city of each: a bunch of symbolic objects. What can we do with these symbolic objects?
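The application notations and the expression-tree view, concretely:

```wolfram
f[x]     (* ordinary bracket application *)
f @ x    (* prefix form, same meaning *)
x // f   (* postfix form, same meaning *)

FullForm[a + b^2]   (* Plus[a, Power[b, 2]]: every expression is head[args] *)
TreeForm[a + b^2]   (* the same expression drawn as the tree patterns match on *)
```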
We could say GeoListPlot, which will show us where all those cities are. Or, for example, we could say GeoPosition of them to get their coordinates. And actually, you know what, I can just take the original thing; let me see if this works, I think it probably will. There we go. So what that's doing is solving the traveling salesman problem for that collection of cities, and it's showing us a tour. Now what I've got to do is take that list of cities and get it into the right order (I could do this more elegantly, perhaps with a nice pure function, but let's just do it directly), and then make another GeoListPlot, joining the points. And there we have our traveling salesman tour of the capital cities of Europe. Okay, so there are lots of things we can do that are built-in features of the language. Let's do something that's more machine-learning-like. I think I had a picture up here; there's a good picture. So let's take that picture and just try ImageIdentify on it and see what that gives, see how silly the result is. Okay, that wasn't too exciting. Perhaps more interestingly, let's say something like facial features; again, it might need to download a classifier from the cloud to be able to use locally, or it may just crunch on it for a little while. And there we go. Okay, "sad". But let's ask a different question; let's look at, for example, how some machine learning training might work.
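A sketch of the capitals tour in code. The `EntityClass` form and the tour reordering are my reconstruction of what was entered via natural language:

```wolfram
capitals = DeleteMissing[
   EntityValue[EntityClass["Country", "Europe"], "CapitalCity"]];
GeoListPlot[capitals]

(* traveling salesman tour over the capitals *)
{dist, order} = FindShortestTour[GeoPosition /@ capitals];
GeoListPlot[capitals[[order]], Joined -> True]
```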
So let's say ResourceData; we've got a big data repository that has all sorts of useful kinds of data, and this particular thing is the MNIST training set. So let's load that in here. Come on, there we go. Okay, so those are handwritten digits, 60,000 of them. Now let me take a random sample of, say, 1,000 of those. Okay, there we go. And now let's take that, and we've got a built-in function called Classify that will try to build a classifier for those digits. It'll show us a little progress monitor, but it'll now go and use rather automated machine learning: it'll try a bunch of different methods to find a good classifier based on that data. And the thing we get back is a ClassifierFunction, which is essentially just like a lambda; it's just another one of our pure functions that we can apply to things. Okay, there's our classifier function. Let's go pick up a digit from here, and just take that classifier function and apply it to that digit. And there, okay, it says it's a zero, good. Okay, so that gives some sense of how that works, and we could do this with much crunchier kinds of things. Maybe it's interesting to look at one of these: here's our image identification network. This is the raw network, and we could go and drill down and see what's inside it if we wanted to. We can also build networks up from functional primitives. Or let's take just a few layers of the neural net, say five layers, and apply them to, where was that picture? Let's apply that to this picture here, for example. So that's just five layers of the neural net; we'll apply it, and we'll get out... okay, we might need to adjust how we're doing this.
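The training steps in code. Here `digit` and `picture` stand for the images picked up in the notebook, and the `NetModel` name is an assumption about which image-identification network was being shown:

```wolfram
sample = RandomSample[ResourceData["MNIST"], 1000]; (* labeled digit images *)
cf = Classify[sample]      (* tries several methods, returns a ClassifierFunction *)
cf[digit]                  (* apply it like any other function; he got 0 here *)

(* peek inside a trained network and run just its first few layers *)
net = NetModel["Wolfram ImageIdentify Net V1"];     (* assumed model name *)
NetTake[net, 5][picture]   (* activations after five layers *)
```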
Let's make that a collection of images; the slash at is map. Okay, so now we've got a collection of images showing what happened at layer five in that neural net. And, for example, we could make a FeatureSpacePlot of those, and it'll try to lay them out. That wasn't very exciting, but it lays them out in a kind of feature space, showing which things are close to which other things. So, okay, this perhaps gives a sense of the kinds of data and things that we have. Let me try to give some sense of the scope of the types of things we deal with. Our goal, as I say, is to be able to represent sort of anything in the world computationally. And there are many kinds of things to represent: things like geometry, things like text, being able to do natural language understanding. Well, let's try this. Let's say WikipediaData of, let's see, I wonder what happens if I type in control equals Clojure. I think it'll probably know. Okay: Clojure, programming language. Let's see what it knows about it; you can tell me whether it's right. We could have it disgorge a whole dataset of stuff that it knows. So let's take that Clojure programming-language entity and get the Wikipedia data for it. There we go. Okay, so now there's some text. Now, we're living dangerously here: let's try TextCases in that text, looking for programming languages, and see whether it can do natural language extraction from it. Oh, that's not what I wanted to hit. All right, let's try this. Okay, so now it's going to scan that piece of text and give us, I hope, all the instances of programming languages.
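The text-extraction pipeline, reconstructed. The `"ProgrammingLanguage"` form given to TextCases is an assumption about the entity type being requested:

```wolfram
text  = WikipediaData["Clojure"];                (* article text as a string *)
langs = TextCases[text, "ProgrammingLanguage"];  (* extracted language mentions *)
WordCloud[langs]   (* frequency of language names in the Clojure article *)
```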
Oh, it probably had to load a classifier from the cloud here; it's probably still loading. Detecting entities, okay, good sign. Interpreting results. Let's see what it does. Okay, there we go. We actually just told it to give us the strings for those things. So now, just for fun, we can make a word cloud of the appearances of different languages in the Wikipedia entry for Clojure. Then, okay, what are some other kinds of things we can do? We can do all kinds of deep mathy kinds of things. We can deal with things like geometry; that's an interesting thing to see. Let me see, what can we do there? Yeah, let's just do this: let's say a bunch of random real numbers, 60 random triples of real numbers. And then let's try ConcaveHullMesh; I don't know what this will do. There we go, okay. So there's a 3D object that we made from those points, and now, again, it's a symbolic thing, so we can just pick it up and feed it in to Volume and ask: what's the volume of that thing? Oh, undefined. Maybe I should ask for the surface area instead. Oh, there we go. Yes, it's because what we gave it is a mesh, a two-dimensional surface mesh, so it just has a surface area. So that's that kind of thing. Oh, another thing we could do; let's just look at one more thing here. We could take a video. Let's see what I can find here; let's take a sample video. Okay, I think I have one here. So now we've got a video, and we can play this video; it's some video of some little critter here.
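The geometry bit as code. Volume fails on a surface mesh, which is why asking for the area is the question that works here:

```wolfram
pts  = RandomReal[1, {60, 3}];   (* 60 random triples of reals *)
mesh = ConcaveHullMesh[pts]      (* a 3D surface mesh through the points *)
Volume[mesh]                     (* Undefined: it's a 2D surface, not a solid *)
Area[mesh]                       (* the surface area *)
```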
Actually, if we just say VideoFrameList of that, and say 10 frames, we'll just extract 10 frames from the video. But we can also use the video as something that we map a function over. So let's say VideoMapTimeSeries, and we have to say what we want out. It's a little bit tricky because it can also work on the audio and things; let's just get the mean color out of each frame, as image measurements. So that's a function, and we want to map that function over the original video, which was on line 78. Actually, if I really wanted to show off, I would copy that video and stick it in there, but it's easier to just type that. Okay, let's see what this does. So that's now going to go and process the frames in the video, doing all the out-of-core stuff it should do to do this efficiently. And what it should bring back here is a time series with the three color channels. Okay, so that's the number of points. So now we can say something like DateListPlot of that, and because we know it's red, green, blue, we can just style it that way. By the way, it's worth seeing that Red, again, is a symbolic thing; we can just evaluate it in place, and it's just that swatch of color: red, green, blue. Okay, let's look at that. What did I do here? Oh, that's because I evaluated this, silly me. Okay, there we go. So that is now giving us the amount of red, green, and blue in the successive frames of that video. And we can do the same kind of thing with audio. If I just stand here and say AudioCapture, and I say "hello, can my computer understand me?", then I should get a piece of audio.
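The video pipeline sketched in code. The `"Mean"` image measurement and the plot styling are my best reading of "get the mean color out"; `video` is the sample video loaded earlier:

```wolfram
VideoFrameList[video, 10]   (* pull 10 frames out as images *)

(* mean RGB of each frame, returned as a three-channel time series *)
ts = VideoMapTimeSeries[ImageMeasurements[#Image, "Mean"] &, video];
DateListPlot[ts, PlotStyle -> {Red, Green, Blue}]
```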
And again, that's another symbolic object: if we wanted to, we could make a Spectrogram of it, and it will show me the spectrogram of my voice there. But I could also just say SpeechRecognize of that original capture, which was on line 84, and now hopefully it will go and see whether it can understand what I was saying. Okay. So there's the result. All right, so anyway, this is a little bit of a sketch of what Wolfram Language can do. As I say, the objective is to represent sort of everything in the world in a kind of symbolic, computational way, whatever the data is; we could pick up all kinds of other data, and there's all sorts of fun data we can get. All right, let's just try one other thing. Let's say RandomEntity, of type movie; we'll pick up, I don't know, 20 random movies, okay. And again, these are symbolic objects, so we can say EntityValue of them, with "Image", and this should give us movie posters for the ones that are available. We could say DeleteMissing here, and then, if we wanted to, FeatureSpacePlot (or we could do image processing on these, but I showed you FeatureSpacePlot before, so let's try that). And that's not very exciting for these ones, but it gives us some sense of where these images lie in feature space. Okay, so anyway, lots of things that Wolfram Language can do, including this whole interaction with the cloud and so on. Now, the kernel of Wolfram Language is just an engine, and you can perfectly well access it, in principle, on the command line. What was happening there? Can I still use a command line? Yes, okay, there we go.
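The movie-poster example written out:

```wolfram
movies  = RandomEntity["Movie", 20];                   (* 20 random movie entities *)
posters = DeleteMissing[EntityValue[movies, "Image"]]; (* posters, where available *)
FeatureSpacePlot[posters]   (* lay the posters out by visual similarity *)
```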
So here I can just as well type something like this in, or I can start typing in those entity objects and so on, and all of this will just work. Or I can get a date object; that would look more beautiful if you were using the notebook interface, but you can still get everything in this traditional kind of command-line interface. What I'm running here is actually the Free Wolfram Engine for Developers. That's a thing you can just download and launch. Okay, so what kinds of things can you do with the Wolfram Engine? Well, you can interact with it from other languages. Maybe I should show something else in a notebook, by the way: I can actually interact with a bunch of external languages. Clojure isn't yet one of these; it should be, and we would love to get help in building that. But, for example, I can interact with Python here. I don't really know Python, so I'll just type something. The main point is that there's a translation of data structures. I don't actually know, is there a function like that in Python? Oh yes, there is. Okay, so notice that that came back (oh, it's a zero-origin language, okay) as an actual Wolfram Language list, where I can go and raise it to the power six or whatever else. So that's calling an external language from within Wolfram Language. We can also call from external languages into Wolfram Language; there's a well-developed Python client library, for example. And one of the things that I really wanted to see happen as part of doing this keynote was to use this to stimulate the actual creation of a link from Clojure to Wolfram Language, so that you can call Wolfram Language from within Clojure.
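The external-evaluation round trip, roughly as typed. The exact Python snippet wasn't captured; `range` is a stand-in consistent with the zero-origin remark:

```wolfram
ExternalEvaluate["Python", "list(range(10))"]
(* {0, 1, 2, ..., 9}: Python data comes back as a Wolfram Language list *)
% ^ 6   (* ...which you can then compute with directly *)
```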
And I'm happy to say that thanks to a bunch of work that was done, initially many years ago by Garth Sheldon-Coulson and his contributors to a project called Clojuratica, but more recently by Christopher Small and Pavel Soranka and other people involved in re:Clojure, they've been able to produce something which is a link from Clojure to Wolfram Language. So let's try and see what we can do with it. And remember, I'm operating in an alien world here, so things may go horribly wrong. But let's start off with something like this. We can say evaluate, and then... this is a familiar world for me... I can type in some Wolfram Language code in a string here, and I can get... oh, what happened here? Help, help. Did I do the wrong thing? Oh, no, I see... I told you it's an alien world. There we go, is that gonna work? No, that's certainly not gonna work, because it auto-completed that bracket. Okay, there we go, more like it. Okay, so here we just evaluate a Wolfram Language string, and let's say we have some kind of list... let's make an array of primes, let's make 20 primes there. Okay, there we go. So now we've got a Clojure list of those 20 primes. I bet that if we make an association... let's see what happens here. Let's live extremely dangerously and see whether we can do this. Let's try... oh, I don't know how to escape a... well, let's just see, this could be super dangerous. I'm gonna say WordTranslation. Now I'm gonna guess that I can escape a quote like that. Let's hope. Let's say "fish", and let's say I want that in all languages. And now what this is going to do... let's see what happens if I do this. This may or may not be a terrible idea. Oh... oh, well, that was exciting. Now the only question is, how do I display it?
Oh, there we go, there we go. Okay, so this, I think, will be what we call an association, what in Clojure is called a map, of... so this will be the translation of the word "fish" into Hebrew and Croatian and all these other different languages here. But okay, so this was just sending Wolfram Language code as a string. More interesting is that we've got this Wolfram namespace loaded in, and now all those 7,000 names in Wolfram Language appear as direct names inside Clojure. So for example, now... this is a date object; again, in a notebook this would be printed out a bit more beautifully, but this is the raw expression form of a date object. And for example, let's do one of our prime things, let's say Prime of that, and let's now evaluate that; that will give us a result. We could be more elaborate; we could try doing something which is Map, and then we could say... I wonder if this is gonna work... quote f, that would be our symbolic f, onto... and let's try this. I'm typing commas, I shouldn't be typing commas. Let's see what happens if I do this. Okay, there we go. So that's now the symbolic result of that Map in Wolfram Language. And we could just as well have used some other Wolfram Language thing here. Okay, so let's see, we could try also doing some graphics. I believe there's a function called... I think it's called quick show. Does this auto-complete? Yes, it does. Okay, quick show. And now we want to say something like... well, let's just be cheap here and just use the evaluate thing.
Let's say Graphics3D of a Dodecahedron. Actually, we could hook up the Language Server Protocol mechanism that we have for bringing our syntax into this. That would probably be a good thing to do. Okay, let's see what happens here. Now, if all goes according to plan, that will generate output. Okay, what is that doing? That will generate output which I think will come up in another window, if I can find that window. Hold on one second. Let's go on a window hunt here. Let me see. I wonder what it would be identified as in the task manager here. Well, it was coming up in a separate window. I'm sorry, I don't seem to be able to find that extra window. Oh, wait a minute, what's this? This might be it. Let's try running that again. No, I don't know what that window is. Anyway, it was working beautifully to have the graphics be generated into this window here. So anyway, that's a little bit of a sketch. And I'm kind of hoping that you can think of Wolfram Language and the Wolfram Engine as just being a giant library that you can immediately call from Clojure to get all of those capabilities that exist in Wolfram Language. And I think the fact that Clojure is a Lisp-based system that has symbolic aspects and functional aspects to it will mean that this connection is one that's really quite rich and quite useful. So anyway, I wanted to make sure to show that, and I really thank the folks who worked on getting this to go. And I should say that you can download the current version of this connection from GitHub, and you can also download Wolfram Engine from our website and play around with this. Before I forget, before I go on and talk about some other things, let me just take this notebook here and push it to the cloud.
Let me just say publish to cloud here for that notebook, and what should now happen, if I press publish, is this will push that notebook to the cloud... it's a bit of a big notebook, I think... come on, wake up, do your thing. Okay, it'll now be pushing that... and again, you can edit it and interact with it in the cloud and run all those interactive things in it too, at least if it manages to actually get around to pushing it successfully; maybe I'll let it do that for a moment and then give you the cloud link for it. Actually, while it's doing that, let me just show you... well, of course it finished just as I was saying that, okay. There's a QR code for it, but let's forget that because we've got Zoom here, so let's just send this... I'm afraid I'm not reading the things that are in the Zoom chat, but there's that notebook, with a rather bad name, shocking. But anyway, we could just see that notebook if we wanted to... there it is... oh, it's probably still caching itself... there it is in the cloud. And I could go and say make your own copy, and then you can go off and start editing it and doing your own computations here. Okay, well, I wanted to talk about the second thing. I've shown you a bunch of practical things about computational language. The aspiration is to describe everything in the world computationally, to have a language that it's possible for humans to read and work with as well as for computers to understand. And I think that's an important thing, because it's the important bridge between what is computationally possible and what we humans can understand and care about. We think of it as a kind of computational notation, a bit like mathematical notation. All right, let me move to a second segment here.
I want to talk about basic science, and about the relationship of basic science to thinking about computational language, and also perhaps quite directly to some questions about Clojure and issues about space and time and computation and so on. Okay, so many years ago, I was interested in the question of how systems in nature work. What is the essence of what's going on in systems in nature? Many systems in nature look very complicated, and one of the things that tended to happen is that the standard mathematical methods one uses to analyze things don't seem to work well for explaining, I don't know, the shape of a snowflake or something like this. So I got interested in whether, if one's going to make models of things, there's a generalization of the kinds of models that we're used to making from mathematics that we can use to study things in the natural world. And that got me interested in the question of what is the most general class of models one can make. And I realized that programs are a good kind of raw material for those models. So then I got interested in a very basic science question: what do simple programs actually typically do? If we make a program that's just half a line of code long, what will it typically do? I now like to call this field ruliology, the study of rules and what they do. A particular kind of program that I got interested in are things called cellular automata. So let me show you a typical cellular automaton. Its rule is very simple: it just has a row of black and white cells, and at every step you update the color of the center cell depending on its color on the previous step and the colors of its two neighbors. Okay, so given that, we can ask: what will that cellular automaton do?
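[Editor's note: as an illustrative aside, the update rule just described is easy to sketch in a few lines of Python. This is not the Wolfram Language code used in the talk; the rule number's binary digits simply tabulate the new cell value for each of the eight possible (left, center, right) neighborhoods.]

```python
# Minimal sketch of an elementary cellular automaton (not the talk's
# Wolfram Language code). The rule number's binary digits give the new
# cell value for each of the 8 (left, center, right) neighborhoods.

def ca_step(cells, rule):
    """One synchronous update of every cell, with wraparound at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def ca_run(rule, width, steps):
    """Start from a single black (1) cell in the middle and run `steps` updates."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        history.append(row)
    return history

for row in ca_run(90, 33, 8):          # rule 90 gives a nested pattern
    print("".join(".#"[c] for c in row))
```

Changing the `rule` argument to 30 or 110 reproduces the behaviors discussed below.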
And so here we can say: let's use that particular rule, let's start it off from just one black cell, let's run it for 40 steps, and let's show what the result is. Okay, so there's the result. We start off from just one black cell, we follow this particular rule for every cell at every step, and that's what we get. Very simple rule, very simple behavior. And I'm gonna save this into something called clojure-02. Okay, so now the question is: what about other possible rules? Here's another possible rule. This is just using the same idea but a different detailed pattern of bits. And you can think of this as a Boolean expression if you want to. I wonder if I can actually translate that directly into a Boolean expression... I might be able to. Actually, I could just say BooleanFunction of 90, 3, and then if I give it A, B, C... and let's say I apply BooleanMinimize to this... there we go. So for that particular rule, rule 90, that must be a DNF form of the function. And any one of these things, if we wanted to, we could represent as a little program like that. So the basic science question is: you just look in this computational universe of possible programs, and what kinds of behavior do you find? It's like turning a computational telescope out there and seeing what you see. So let's go try and take a look at that. Let's say n here, let's say 40 steps, let's do this. Let's get rid of that and let's make a table, with n going from, say, zero to 63. Okay, so this is a basic computer experiment, just seeing what these programs typically do. So some of them produce these nested patterns... there's an example of that. Some of them just do very simple things. But my all-time favorite is this one, rule 30, and that's what it does. So let's take a look at that in a bit more detail.
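[Editor's note: the minimized Boolean form for rule 90 comes out to be just the XOR of the two neighbor cells, independent of the center. A quick Python check of that fact (an illustrative sketch, not the Wolfram Language session above):]

```python
# Check (not Wolfram Language): rule 90's new cell value is the XOR of
# the two neighbors, regardless of the center cell's value.

def rule_output(rule, left, center, right):
    """Read the new cell value out of the rule number's binary digits."""
    return (rule >> (left * 4 + center * 2 + right)) & 1

assert all(
    rule_output(90, l, c, r) == (l ^ r)
    for l in (0, 1) for c in (0, 1) for r in (0, 1)
)
print("rule 90 == Xor[left, right]")
```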
Let's run rule 30 for, say, 400 steps... rule 30 there, 400 steps... and okay, this is what it does. So to me, this is a very remarkable phenomenon, because we have a very simple rule, starting off from just one black cell, and yet it makes all of this complexity. It's as if there's some mechanism here... and in fact I think it's the one nature uses, the secret nature uses to make all the complexity it seems to make: in the computational universe of possible programs, it's actually very easy, even with a very simple program, to make what seems to be very complicated behavior. So for example, here over on the left-hand side there's some regularity, but if you look at, say, the center column of cells, they seem for all practical purposes random... random enough that we've used them as a pseudorandom generator for many years. And you can draw many scientific conclusions from this; these kinds of systems, like cellular automata, turn out to be a great source of raw material for making models of things in the world. There are some deeper computational ideas that come out too. For example, one question would be: what if you want to figure out what this will do in a billion steps? The traditional story in science is that all you have to do is find a model for something; once you've found the model, you're pretty much done with the science. That would suggest that, given the model, which is very simple in this case... just the simple rule 30 rule... we should be able to immediately say what will happen after a billion steps. But in fact, we can't. Most likely, this is an example of a system that is computationally irreducible. What that means is that the only way to find out what the system will do after a billion steps is essentially to run it for a billion steps and see what happens. This is closely related to undecidability.
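[Editor's note: to make the center-column point concrete, here is an illustrative Python sketch, mine rather than the talk's code, that extracts those bits. The point of irreducibility is that each bit costs a full update of the row; there is no known shortcut that predicts bit n without running about n steps.]

```python
# Extract the center column of rule 30 (illustrative sketch, not
# Wolfram Language). Each bit requires one full row update; no known
# formula predicts bit n without running the rule ~n steps.

def rule30_center(steps):
    width = 2 * steps + 3            # wide enough that the edges never matter
    center = width // 2
    row = [0] * width
    row[center] = 1
    bits = [1]
    for _ in range(steps):
        row = [
            (30 >> (row[i - 1] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
            for i in range(width)
        ]
        bits.append(row[center])
    return bits

print(rule30_center(15))
```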
If we ask what it will do after an infinite time, that will be a question we can't answer in any finite amount of time, and so on. But this idea of computational irreducibility... the idea that there are computations you can't shortcut... is a very important idea that gives one a lot of different intuition about how things work in the world. Here's another example: this is rule 110. It happens to only grow on one side. Let me run it for a few more steps... let's run it for a couple of thousand steps. Oh, let's make it bigger. Okay, you might be able to see, if it isn't too anti-aliased out, that there are some little structures here. Here, let me do this: let me run rule 110 starting from a random initial condition, for 600 steps or so. Oops. Okay, so here I've run that with a random initial condition. You'll see it makes these little particle-like structures that are running around. And you might say: it kind of looks like that thing is doing a computation. Well, it turns out that it is, and you can show that, in fact, you can make a universal computer out of the rule 110 cellular automaton. It's an example of how you can get arbitrary computational sophistication from an extremely simple rule. But one thing you can also ask about rule 110 is: okay, it starts off from one black cell at the beginning, and you see it generates all this complicated structure. What will it do in the end? Well, if it were computationally reducible, you'd be able to just jump ahead and say: given that I know the rule, I can immediately say the result will be such-and-such. But because it's computationally irreducible, you can't do that. You basically just have to wait and see what it does. And eventually this pattern will die out. There are many things to say about all this; I've worked on these things for a long time.
But one of the principles that's rather important is this thing I call the Principle of Computational Equivalence. The issue is: when you look at simple programs or simple systems, you can ask how computationally capable they are. Something that just produces, say, a simple nested pattern... we can say it's not very computationally capable. But one of the things I concluded a long time ago is that above the threshold where the behavior stops looking obviously simple, the behavior the system generates will correspond to a computation that is as sophisticated as any computation that can be done. In other words, as soon as you get above some very low threshold, essentially every system you look at will be capable of universal computation. So, for example, if we could make a molecule that would execute the rule 110 cellular automaton... which is not a completely out-of-range kind of thing to imagine doing; maybe if I have a chance I'll talk a little bit about using combinators and things and actually making molecules from those... but if you could make a molecule that would execute rule 110, you would have something that is computation universal. Okay. So the fact that very simple rules can produce very complicated behavior, and that we can use things like cellular automata as raw material to make practical models of things in the world, leads one to a big question: could it be that this whole universe we live in is actually constructed from some very simple rule, and we are just seeing all the consequences of the running of that particular simple rule?
So I thought about this for a long time, and a couple of years ago we had a breakthrough in thinking about it. And the result of that breakthrough is that at this point, I think we have really nailed understanding, essentially, what the machine code of the universe is like. I'm not sure I have a chance to go through this in tremendous detail, but I'm happy to try and answer questions about it. Let me try and give you a sketch. This project has gone just remarkably and outrageously well. I have to say, I had not thought that understanding the foundations of physics would be anything like as easy as it's turned out to be. Now, let me mention something: as we understand the foundations of physics computationally, that will also allow us to understand the foundations of computation physically. And in fact, for actual practical programming languages and distributed computing and so on, it looks as if we can get a great deal of inspiration and ideas from leveraging the achievements of physics and applying them to computation, now that we understand that computation underlies physics. But let me try and give you a little bit of a sketch of how our model of how physics works is put together. I view this model as being the underlying machine code of the universe. There have been many other approaches, from causal set theory to spin networks to string theory (we're not so sure about that one yet) to a bunch of other kinds of approaches to physics. And one of the things that's been really remarkable is that it looks as if what we have built is a kind of machine code that underlies all of those different approaches. It's not that we're right and they're wrong; it's that we have the kind of underlying machine code from which you can see all these things operating.
I view it as a little bit like what happened in the early days of computation: once you'd seen a Turing machine, there was a pretty explicit way to understand what was going on in computation, and it turned out to be equivalent to a bunch of other ideas people had had about how computation might work. Okay, so how does this get put together? The first question is: what's the universe made of? Well, at the first step, one might say the universe is laid out in space. Ever since Euclid, we've thought about space as just a background kind of thing. We say there's space, and we can put things at one position in space or another; space is just this background. Well, one of the key ideas of our project is that space is actually made of something. It's a little bit like water: water flows around, and we might think we could pick any position in the water, just like we can pick any position in space. But in fact, we know that water is made of discrete molecules, and so we couldn't really pick any position... we might hit a molecule or we might not. So one of the foundational ideas is that space is made of something. Space is made of what we can call atoms of space: discrete elements. These are not actual atoms of any kind; they are pure abstract elements... you can think of them as things with a UUID. The only thing we know about these abstract elements is how they're related to each other. So for example, we might say that there's a sequence of three elements, and they are in a relation with each other. And when we do that, what we're building up is essentially a hypergraph.
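[Editor's note: a hypergraph of this kind is simple to encode: just a list of hyperedges, each an ordered tuple of opaque element IDs. The sketch below is my own illustrative encoding, not the project's actual code, and the rewrite rule in it is made up for illustration.]

```python
import itertools

# A hypergraph as a list of hyperedges: ordered tuples of opaque
# element IDs. The elements carry no data; only the relations matter.
hypergraph = [(1, 2, 3), (3, 4, 4), (2, 5, 1)]

fresh_ids = itertools.count(6)       # supply of brand-new element IDs

def rewrite_once(edges):
    """Apply a (made-up) rule {(x,y,z)} -> {(x,y,w), (w,y,z)}, introducing
    a fresh element w, to the first matching edge -- one 'event'."""
    (x, y, z), rest = edges[0], edges[1:]
    w = next(fresh_ids)
    return [(x, y, w), (w, y, z)] + rest

after = rewrite_once(hypergraph)
print(after)        # one ternary edge replaced by two, one new element
```

Repeated application of rules like this is what grows the hypergraph, as discussed below.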
We're saying there are all these elements, all these atoms of space... maybe in our universe right now there are about 10 to the 400 of them... and all these atoms of space are arranged in a giant hypergraph. Okay, so what happens then is we look at the limit of that hypergraph when it gets very large. And one of the important things is that that limit behaves like ordinary space. It's much the same kind of thing as happens in a fluid, where there are a bunch of discrete molecules bouncing around, but on a large scale it seems to behave like a continuum; it's the same with space. So if you were to drill down in space to a size of maybe 10 to the minus 100 meters, you would start seeing that space is actually made of discrete things. But to us, at our usual scale of a meter or so, space seems completely continuous, just as a fluid seems continuous. Okay, so that's what space is made of. And it's not even obvious what the dimension of space would be. Let me just show you how one can start understanding that. Let me see... I want that. Okay, so here's a typical network that might represent a very tiny piece of space. Here are maybe some other networks. Now, one of the things is that we're not even defining the dimension of these networks. Sometimes we might have a network that gets laid out like this, which looks like a sort of two-dimensional thing that's curved. But we can ask: how would we define the dimension of these networks? And we can do that just in terms of the discrete structure. We start at a point, and we build a ball by progressively going more and more steps away from that point in the network.
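[Editor's note: that ball-growth idea is easy to sketch illustratively in Python, here on an ordinary 2-D grid graph rather than a space hypergraph: count the nodes within r steps and read the dimension off the growth rate.]

```python
# Estimate graph dimension from ball growth (illustrative sketch).
# On a 2-D grid the number of nodes within r steps grows like r^2,
# so the log-log growth rate comes out near 2.

from collections import deque
import math

def ball_size(neighbors, start, r):
    """Count nodes within graph distance r of `start` (breadth-first search)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == r:
            continue
        for nb in neighbors(node):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return len(seen)

def grid_neighbors(node):
    x, y = node
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

s1 = ball_size(grid_neighbors, (0, 0), 10)
s2 = ball_size(grid_neighbors, (0, 0), 20)
dim = math.log(s2 / s1) / math.log(2)   # size ~ r^d  =>  d ~ log of the ratio
print(round(dim, 2))                    # close to 2 for a 2-D grid
```

The same counting procedure applies to any graph or hypergraph; only the `neighbors` function changes.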
And if the number of nodes that we reach is about r to the d, then that exponent d characterizes the dimension of the space. There's a correction term to that which characterizes the curvature of the space, and the dynamics of that correction term gives one the structure of space... gives one things like gravity. Now, let me explain something else. So we've got this network. The question is: what happens to this network? How does it progress in time, for example? And the idea is that there's just a rewrite rule. It's very much like the expression rewrite rules that I talked about in Wolfram Language, except now it's operating on elements of a hypergraph. So here we have this very simple rule, and we just apply that rule wherever we can in the hypergraph. And what we'll get, for example, in that case, is a sequence of configurations of the graph... this is the sequence of configurations. This might be the very beginning of the universe here. And then it's progressing, and by the time we've reached today, it's many, many steps forward, and it's a big thing that behaves like space as we currently see it. So essentially what happens is that time, in our models, is something rather different from space. Time is this kind of inexorable progress of computation... the progressive rewriting of this hypergraph. Space is the extent of the hypergraph. So then one question is: how do things like relativity arise? It turns out the important thing to understand is that, as entities existing in this universe, there are only certain aspects of what happens that we can be sensitive to. In particular, we can think of every rewrite here as being an event. And that event, that rewrite, is effectively like a function application: it's taking certain inputs.
It's taking certain hyperedges and rewriting them, producing other hyperedges... just like the application of a function. And so one question is: what are the causal relationships between different function applications? We can build up a causal graph that represents the causal relationships between updating events. And it turns out that, as entities embedded within this system, the only thing we can ever be sensitive to is that causal graph... not the actual arrangement of the hypergraph, but only the causal graph of what event affects what other event: what event in us, for example, is affected by what other event out there in the universe. So when you construct that causal graph, that's what you're sensitive to. There's one other thing: many of these rules have a property we call causal invariance, which is a generalization of the property of confluence in term rewriting systems. And the point of that... well, actually, I should explain some other things before I get to that. Let me just say that in this causal graph, you've got a bunch of events, and the causal graph defines a partially ordered set of those events. And then, as an observer trying to make sense of that causal graph, you want to say: what corresponds to the successive moments in time? Or in other words, which events could be happening simultaneously? You are then building up reference frames by foliating... by slicing... this causal graph. And those reference frames are essentially defining which computations, which events, you can think of as happening in parallel, and which events are forced to happen sequentially.
In the language of physics, what you're asking is which events occur in a timelike sequence... one comes after the other in time... and which events can be spacelike separated, in the sense that they can be at different places in space at the same time. So one of the things that comes out here is that you start thinking about these different reference frames: picking different choices of spacelike hypersurfaces that represent the things that are simultaneous at successive moments in time. And you can sort of think about programming the universe and ask: what reference frame am I going to pick to understand what's going on in the universe? That story of what reference frame you pick is the story of relativity, and the equivalence between different reference frames turns out to be what leads to relativistic invariance. Let me show you one thing that's a simplified version of that... see if I can find it... here we go. So this is a very simplified version. This is not a hypergraph; this is just a string rewriting system, and a very simple one: it rewrites every string BA to AB. So what it's going to do is progressively sort the string. And these are some events... this is one possible collection of events that could occur to sort that string. What we can do is look at different possible... let's see, where do I have an example here? Yeah... we can look at essentially different possible reference frames, in which we decide when these events happen. So here we're saying all events that can happen at the same time do happen at the same time, and we progressively sort the string quite rapidly. Now we can go into a different reference frame, which corresponds essentially to a moving reference frame, where we think of things as moving across space as well as progressing in time.
We will also succeed in sorting the string, but we will do so in more time steps. And this is the fundamental thing that leads to time dilation in relativity. You have a certain amount of computational resources: you can either use them to progressively evolve something at one place, or you can use some of them to rearrange yourself to a different place in space... to move in space. If you do that, you are using some of the computational resources for your motion, and so you have fewer computational resources with which to evolve in time. The result is that time appears to run more slowly, and it takes longer, in effect, to get to the same result. Now, by the way, I think there's something very similar to this with things like transducers in Clojure and so on: you can think about a kind of relativistic transformation on the way that you operate on data structures, with some of these same time-dilation effects happening there. All right, let's come back to physics for a minute, and let me see if I can give you a quick summary of some of the things we've figured out. So one of the big results is that it is a generic fact that when you have these rewrite rules operating on hypergraphs, with certain conditions, you end up getting the equations of general relativity... the Einstein equations... for the overall structure of the spacetime that is the continuum limit of these hypergraphs. And it's very much the same kind of thing as when you start from molecular dynamics in a fluid and ask what the continuum behavior of the fluid is: you can derive the equations of fluid mechanics. So here you can derive the equations of spacetime.
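[Editor's note: the little BA → AB sorting system described above makes a nice toy. Here is an illustrative Python sketch, mine rather than the talk's code, comparing the step count when all non-overlapping rewrites fire at once against the extreme "sequential" frame where only one event fires per step.]

```python
# Illustrative sketch of the BA -> AB sorting system: how many "time
# steps" does sorting take when all non-overlapping rewrites fire in
# parallel, versus only one rewrite per step (the extreme sequential
# case -- the analogue of spending resources on motion)?

def parallel_step(s):
    """Rewrite every non-overlapping 'BA' to 'AB' in one step, left to right."""
    out, i = [], 0
    while i < len(s):
        if s[i:i + 2] == "BA":
            out.append("AB")
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

def sequential_step(s):
    """Rewrite only the first 'BA' found -- one event per step."""
    i = s.find("BA")
    return s if i < 0 else s[:i] + "AB" + s[i + 2:]

def steps_to_sort(step, s):
    n = 0
    while "BA" in s:
        s = step(s)
        n += 1
    return n

s = "BABABABA"
print(steps_to_sort(parallel_step, s), steps_to_sort(sequential_step, s))
```

Both frames reach the same sorted string; the sequential frame just takes more steps, which is the toy version of time dilation.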
And there are all kinds of things you can understand about how gravity works, how black holes work, all sorts of things like that, directly from the structure of these systems. Just to mention one thing about black holes: black holes have event horizons. Event horizons are places where the causal graph disconnects, which means there is no causation from inside the event horizon to the outside... there's a disconnection in the causal graph. Now, one of the features of standard general relativity is that it says there's essentially a spacetime singularity at the center of the simplest kind of black hole. In the case of these systems, which can be thought of as kind of term-rewriting-type systems, that singularity at the center of the black hole is a place where time stops. What does it mean that time stops? Well, most of the time you've been merrily going along, rewriting your hypergraph according to certain update rules. But when you reach that singularity, you've reached a normal form: you can no longer do any updates. And when you can no longer do any updates, that means time has stopped. So that spacetime singularity is associated with reaching a fixed point... with getting the result of your computation. The result, the answer, is the thing that's at the center of the black hole. Now, most of what happens in the universe never reaches an answer. Most of what happens in the universe is just an ongoing computation that never stops. You can pick these reference frames to get a snapshot of what the structure of the universe is at a particular time, but the computation is going to keep going. I should mention, by the way, that one of the important features of this model is that the only thing in the universe is space.
And space is made of this hypergraph, and all the things that we care about, all the particles and all those kinds of things, are just features of that hypergraph. They're a little bit like a vortex in a fluid, which persists and moves across the fluid. So similarly here, you might have a particle like an electron that is some topological feature, effectively, of this hypergraph, which can move more or less unchanged across the hypergraph. So one of the things you can ask is: what fraction of the activity of the universe is involved in just knitting together the structure of space, and what fraction is all the stuff we care about, electrons and quarks and all those kinds of things? Well, a rough estimate is that maybe one part in 10^120 of the activity of the universe is involved in all the things we care about. The vast majority of the activity of the universe is merely concerned with knitting together the structure of space. Okay, let me mention one more thing here. We talked about applying these rewrites to these hypergraphs. One question is: where do you do that? We say we apply the rewrite wherever we can apply the rewrite. Well, there may be many places we could apply the rewrite. There may even be overlapping places where we could apply the rewrite. Which actual rewrite do we apply, and when? So this is a key idea: there isn't just one history for the universe. Instead, each one of the possible sequences of rewrite applications corresponds to a possible history for the universe. So we make a thing we call a multiway graph, which represents the possible sequences of rewrites that can be done on the universe. And so this is a version of that multiway graph, also showing its multiway causal graph. 
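The "apply the rewrite everywhere it can apply" idea is easy to sketch on strings rather than hypergraphs (a minimal illustration, not the project's code): from each state, apply the rule at every possible position, and collect all the resulting states level by level.

```python
def successors(state, lhs, rhs):
    """All states reachable by applying the rule lhs -> rhs at one position."""
    out = set()
    i = state.find(lhs)
    while i != -1:
        out.add(state[:i] + rhs + state[i + len(lhs):])
        i = state.find(lhs, i + 1)
    return out

def multiway(initial, lhs, rhs, steps):
    """Levels of the multiway evolution: each level is the set of states
    reachable in that many rewrite steps from the initial state."""
    levels = [{initial}]
    for _ in range(steps):
        nxt = set()
        for s in levels[-1]:
            nxt |= successors(s, lhs, rhs)
        levels.append(nxt)
    return levels
```

Each path down through the levels is one possible sequence of rewrite applications, i.e. one possible history; merging happens automatically because each level is a set.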
But this is basically showing how the universe might start in one state and then end up in three different possible states, and then maybe some of those states merge, and so on. This makes the multiway graph. Okay, what does this multiway graph correspond to in terms of known physics? Well, it corresponds to quantum mechanics. The key idea that distinguishes quantum mechanics from classical mechanics is this: in classical mechanics, you throw a ball and it follows a definite trajectory; in quantum mechanics, the ball follows many possible paths, and we only get to see the probabilities for different paths. Well, here these are the many paths, and what's happening is that we, as observers embedded within the system, are essentially looking at collections of these paths. We are essentially conflating them together; in the language of automated theorem proving, we're doing completions on this kind of term rewriting system to be able to make a view of what's actually happening in the universe. In any case, the end result is that the branching and merging of this multiway graph gives one quantum mechanics. And there's a nice way to think about it. When we talk about this hypergraph, the extent of the hypergraph corresponds to the extent of physical space. In this multiway graph, we can take slices at particular times and ask: what is the pattern of connection between branches? We can ask, for any given pair of states here, do they have a common ancestor one step back or not? We can make what we call a branchial graph, which is a map of the entanglements of those states associated with their common ancestry. And this branchial graph defines another kind of space we call branchial space. It's not ordinary space; it's a space of what turn out to be different quantum states. 
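The branchial-graph construction described above can be sketched directly (a toy version; the state names and parent sets here are hypothetical): given the states on one multiway "time slice" and their parents one step back, connect two states whenever they share a common ancestor.

```python
from itertools import combinations

def branchial_edges(parents_of):
    """Edges of a branchial graph for one multiway time slice.

    parents_of maps each state on the slice to the set of its parent
    states one step back. Two states are joined when their parent sets
    overlap, i.e. when they branched from a common ancestor.
    """
    edges = set()
    for a, b in combinations(sorted(parents_of), 2):
        if parents_of[a] & parents_of[b]:
            edges.add((a, b))
    return edges
```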
And it turns out that what in physical space corresponds to the effect of gravity is, in branchial space, precisely how quantum mechanics works. The effect of energy-momentum on the motion of particles, the fact that mass curves spacetime, is equivalent to an effect here where essentially the presence of energy curves the branchial space that emerges from this multiway graph. And so it turns out that general relativity and quantum mechanics are the same theory, except general relativity is in physical space and quantum mechanics is in branchial space. So anyway, there's a big story of how that works and how that gives one physics. Maybe I'll just mention a couple of things that go even further than this. So one question is: let's say we've got this picture, and from a particular rule we can reproduce something that is like physics. One question we might ask is, why this particular rule and not another one? Doesn't it seem weird that we would have got rule number 714 or something for our universe? What about all the other rules? Okay, this is where things get quite funky. The answer is that there is actually a way of generalizing this multiway system: instead of just looking at all the possible places we can apply a particular rule, we ask what are all the possible rules we could conceivably apply. And what we get then is the thing we call the rulial multiway system, in which at these branches we're not just applying the same rule in all possible places, but applying all possible rules in all possible places. And the limit of that is this thing that I call the Ruliad, which is the entangled limit of all possible computations. 
So what one effectively has is something where, and you can think about it in terms of Turing machines, maybe I'll show you some pictures of that, you're looking at all possible Turing machines running in all possible ways. You might say, well, how could you conclude anything if you had all possible Turing machines running in all possible ways? Let me just show you one picture. Oh, where is this? Ah, sorry about this. Here we go. Okay, so that's a picture that shows this multiway graph effectively for all possible, well, we're looking at Turing machines here. That's an ordinary Turing machine just running. That's a multiway Turing machine that has several possible outputs from any given state. And we can make a multiway graph from all possible Turing machines, all possible Turing machine rules. That's the beginning of the rulial multiway graph. We can keep going and look at the full version of the graph of all possible outcomes from the Turing machine. And there it is; this is related to the P versus NP problem. The red thing is the deterministic computations; the gray thing is all possible computations. But in any case, this is basically showing the very beginning of the structure of this Ruliad, the structure of the entangled collection of all possible computations. The reason it's non-trivial is that even two completely different Turing machines might end up evolving to the same state. And so this thing doesn't just go off in all possible directions; it makes this kind of complicated entangled structure. So in any case, the understanding of what happens in our universe is that what we're seeing is slices of that Ruliad. We're seeing something where we are essentially sampling some piece of that entangled structure of all possible computational rules. 
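A multiway Turing machine of the kind described above is easy to sketch (toy rules of my own, not the ones in the pictures): where an ordinary machine has one rule per (head state, symbol), here each pair may map to several options, so one configuration can branch into several successors.

```python
def tm_step(config, rules):
    """One multiway step of a Turing machine on a bounded tape.

    config is (head_state, tape_string, head_position). rules maps
    (head_state, symbol) to a list of (new_state, write_symbol, move)
    options; more than one option per key makes the machine multiway.
    Returns the list of successor configurations.
    """
    state, tape, pos = config
    out = []
    for new_state, write, move in rules.get((state, tape[pos]), []):
        new_tape = tape[:pos] + write + tape[pos + 1:]
        new_pos = max(0, min(len(tape) - 1, pos + move))  # clamp to tape
        out.append((new_state, new_tape, new_pos))
    return out
```

Iterating `tm_step` over sets of configurations, and doing it for all possible rule sets at once, is the (very) small-scale version of the rulial multiway graph.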
And essentially what's happening is that we, as observers of this system, are making certain choices about how we observe it. Just as in relativity we might pick a reference frame moving at a certain speed, or say we're at a particular position in physical space, we're similarly making these choices within this Ruliad, within this thing that represents all possible entangled computations. We are at a certain position in physical space; we're also at a certain position in rulial space, in this Ruliad object. And so what you end up realizing, and this gets a bit deep, is that it is a feature of the way that we observe the universe, the way that we observe this Ruliad, this collection of all possible entangled computations, that leads us to conclude that the laws of physics are the way they are. In particular, the fact that we are computationally bounded observers, and also that we are observers who believe we are persistent. That is, even though at successive moments in time we are made from different atoms of space, we still have the point of view that we persist through time. And similarly, when we walk around, every time we walk to a different place we're made of different atoms of space, and yet we have the point of view that we maintain our coherence as we walk around, so to speak. And those features are basically what feed back to end up giving one, and this is the surprising thing, the precise features of physics as we know them: general relativity and quantum mechanics and so on. Well, there are many conclusions from this, but okay, I'll just mention one last thing, which is that one consequence of this is that it finally gives one a way to understand why the universe exists. 
It also gives one a way to understand the relationship between physics and mathematics, because essentially what's happening is that we're describing this Ruliad as the entangled limit of all possible computations. That is the same description that we might give of all possible mathematics. If we think of mathematics as something built from axioms, we can say, well, we'll pick all possible axiomatic systems and look at their consequences. The thing we'll get from looking at all possible axiomatic systems and their consequences is this exact same Ruliad object. And so essentially what's happening is that mathematics is a particular view of the Ruliad made by a mathematical observer, and physics is a particular view of the Ruliad made by a physical observer. So for example, one of the things I've just been working on, and this is kind of hot off the press, so to speak, is trying to make a physicalized model of mathematics. That is, imagine the set of all possible statements that can be made in mathematics. There are maybe three million theorems that have been written down in the literature of mathematics. If we look at all possible statements that can be made in mathematics, we can ask: what is the continuum limit of mathematics? What will mathematics look like when there are trillions of statements or more? What is the overall structure of metamathematical space? It turns out that one can draw many conclusions about that from what we understand in physics. In fact, probably the biggest conclusion is that if you think about automated theorem proving, grinding mathematics down to this very symbolic, very low level, that's like looking at the molecular dynamics of a fluid. It's grinding it down to a point way below the point at which we humans usually look at it. 
And the basic result seems to be that there is in fact a kind of fluid-dynamics analog for metamathematics. There is a higher-level description that is persistent. The analog of vortices in mathematics might be concepts like the integers, for example, and those concepts are persistent as you apply the process of mathematical proof. So that's a correspondence between physics and mathematics. All right, that got really abstract. But let me maybe finish by saying a couple of things. Firstly, about the actual process of doing simulations in our physics project: I have rather a suspicion that some of Clojure's capabilities might allow us to do some rather efficient things in the actual process of evolving hypergraphs, where little pieces of the hypergraph are effectively left unchanged while other pieces are changing. It's a question of keeping track of what's changing and what's not, and what we've learned is that we have kind of a bigger theory of that based on physics. And so that suggests both that perhaps there are ways to use the things that might even exist in Clojure for distributed computing to do efficient simulations, and also that we can use the ideas we've got from physics, from this correspondence between computation and physics, to understand more about how to think about distributed computation. One of the things I've been actively working on is what I call multicomputation, which is kind of a different paradigm for making models of things, and a different paradigm for thinking about distributed computing, that essentially leverages the success physics has had in describing what we now know are large-scale computations. All right, I should stop there. I'm sorry I've gone way over time, but I'm happy to have whatever discussion people want at this point. So thanks very much.