Thank you. Thank you very much, Stephen. There's an amazing range of topics there for discussion: the incredible features of the Wolfram Language, the Clojure library that enabled us to use that power, all the way up to the model to explain the universe, black holes, relativity, a lot of stuff. It will take a lot to decompress. Jordan, are you here with us as well to help?

Hi, yeah, I'm here. That was a wonderful talk. Thank you so much.

Thank you.

So we accumulated a long list of questions, and we want to start with the first computer science question. Mathematica and the Wolfram Language pioneered some ideas that are only now being picked up by the programming mainstream; one prominent example is the use of the computational notebook format for interactive development. What would be the one idea from the Wolfram Language that has been overlooked thus far, but that you think would be really beneficial for users of more mainstream languages?

Well, you know, to me the fact that it took 25 years for people to understand the idea of notebooks is kind of mind-blowing, because that to me was the simplest of hundreds of ideas that we had in the creation of the Wolfram Language originally. I would say the biggest idea is symbolic programming: the idea that everything is a symbolic expression and you can manipulate things that way. That's the biggest programming-structure idea. The biggest meta idea is making a computational language, not a programming language. In a sense we've been building this tower now for basically 40 years, and it's interesting because it's not like anybody else is building another tower that's like it. We are the unique such tower, and that has the good feature that we can livestream our design reviews and not worry about anybody stealing our ideas and so on.
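As a minimal illustration of the symbolic-programming idea mentioned above, here is what "everything is a symbolic expression" looks like in practice (the expressions are hypothetical examples, not taken from the talk):

```wolfram
(* Every value in Wolfram Language is a symbolic expression: head[arg1, arg2, ...] *)
expr = a + b^2;
FullForm[expr]            (* Plus[a, Power[b, 2]] *)

(* The same is true of non-mathematical objects: *)
FullForm[{1, 2}]          (* List[1, 2] *)

(* Transformation rules manipulate any expression structurally: *)
expr /. b -> 3            (* a + 9 *)

(* Even graphics are expressions with a head you can inspect: *)
Head[Plot[x, {x, 0, 1}]]  (* Graphics *)
```

Because everything shares this one representation, the same rule-application machinery works uniformly on formulas, data, graphics, and programs.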
But on the other hand, it means that it's not trivial to explain what it is that we have. You know, Mathematica just had its one-third-century anniversary, so to speak, and I realized that Mathematica and the Wolfram Language have existed for half the time that production electronic computers have existed. It's a long time, and at the rate we're going it will be 50 or 100 years before people understand the next level of concepts there. I think it is absolutely inexorable and inevitable that computational language is the way people will think about interacting with computers. It's surprising we're not there yet. Now, having said that, you describe us as not a mainstream programming language, but we're not really a programming language at all. I think we're a mainstream system for people computing things; we're not currently viewed as a mainstream language for people doing just programming. I would like to think that in the future, just programming isn't really a thing people will mostly think about doing: they'll think about achieving things computationally. My vision is, you know, when I started using computers, computers didn't have operating systems built in; it was just you and the raw computer. Gradually over time computers got operating systems, they got networking, they got user interfaces: a sequence of things you can take for granted when you walk up to a computer. This idea of having the knowledge of the world built into your computer, and a computational language for interfacing with that, is something that in time people will take for granted for any kind of computer. I might say there's some gradual progress in seeing that become more mainstream. For example, if you go to Excel today, you'll find a Data tab in Excel.
If you pull that down you'll find a bunch of Wolfram data types there, where you can start making use of at least our data, and soon hopefully also our computational capabilities, directly within any copy of Excel. So that's an example of a piece of fairly obvious mainstreaming that's happening. But yes, symbolic expressions, symbolic programming: that's probably the thing that has been least absorbed. It's worth understanding that combinators had their 100th anniversary at the end of last year, and I made a big study of combinators at that time. I think old Moses Schoenfinkel back in 1920 had already figured out a lot of ideas about symbolic programming. Unfortunately, 100 years later people still don't really understand a lot of those ideas. It's a slow process, but we're getting there.

So next we are going to go with a Wolfram Language and more business-oriented question, by Yakub on Discord. He says: I build customer-facing business systems, web shops, data management systems, and I find the Wolfram Language fascinating, thanks to the symbolic operation and making everything accessible in the language. So the natural question is: will I be able to build my apps in Wolfram Language?

There are lots of big systems that have been built in Wolfram Language. Wolfram|Alpha is one that we're very familiar with, because we built it ourselves, and it's the thing that powers the knowledge system in Siri and things like that. And you'll find a lot of large companies have customer-facing systems running that are Wolfram Language systems; I think a bunch of Fortune 50 companies have large systems running that have Wolfram Language back ends. The typical model there is, well, either it's running a raw Wolfram Engine. Okay, so there are many different deployment channels for Wolfram Language.
The thing I was showing you was just the desktop version. There's Wolfram Engine, which is a standalone thing. There's a thing called Wolfram Application Server, which supports APIs running against a containerized system that you can run on your own infrastructure. There's also the Wolfram Cloud: we have a public version of that, and we also have a private version called Enterprise Private Cloud, which supports both APIs and notebooks. Those are the deployment methods; I would say Enterprise Private Cloud and Wolfram Application Server are probably the two most popular for deploying enterprise kinds of applications. And by the way, I should say that with the Clojure link we were just showing, that will all just work with Wolfram Application Server, Enterprise Private Cloud, or Wolfram Engine, so you can build those things together.

Awesome. It looks like we have a hand raised here, Sebastian Crane. Thank you so much for raising your hand. Are you here with us?

Yes, I am. Thank you very much. So, if I recall your explanation of the visual summary earlier, that's the Wolfram model: there's that multiway causal graph on the right, the red one, that shows, if I understand correctly, the areas of the spatial hypergraph that can update independently. Roughly. Am I right?

Not quite. Okay, keep going though.

And I was thinking that in classical physics, if you have a certain point and want to calculate the gravity that applies to it, that's a function of every other thing in the universe. Well, not every other thing in the universe, only the things in the past light cone of that point. In other words, only those things from which a light signal could have reached that point in the lifetime of the universe or whatever.

I see.
Okay, so in that case there's still, in the perceivable universe, everything that has applied its gravity to that particle. And that makes me think the multiway causal graph presumably doesn't map onto physical space, because I can't say that a single particle's gravity is updated independently from the one next to it.

Hold on, hold on, there are many, many layers here. Okay, first point: what is gravity? In the absence of gravity, if you shoot a laser in some direction it will go in a straight line, and the line will be genuinely straight. The presence of gravity is represented by a curvature in spacetime, which means the shortest path is no longer a genuine straight line; the shortest path is curved, because the spacetime is curved. It's just like if you were on the surface of a sphere: the shortest distance between two points is not an ordinary straight line, it's a great-circle path on the sphere. So what's happening in our models is, okay, there are quite a few levels to this, but the first thing is that this notion of geodesics, shortest paths, is rather straightforward to understand in hypergraphs. You literally take two nodes and ask: what is the shortest path in the hypergraph between those two nodes? That defines a geodesic. Then the question is, for example, what are mass and energy in these hypergraphs? It turns out that, more or less, energy is the amount of activity in the hypergraph. You have to be a little more careful: it's the flux of causal edges through spacelike hypersurfaces. It's sort of the density of activity, the density of updates, in a particular place in the hypergraph. And what's happening is that the effect of gravity is that the presence of those updates inexorably causes the geodesic paths, the straight lines, to be curved. This is not obvious; it takes a bunch of mathematical derivation to show that that's how it works.
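The geodesic idea is easy to play with concretely. A minimal sketch, using an ordinary grid graph as a stand-in for a spatial hypergraph (the graph here is purely illustrative, not part of the physics project's actual setup):

```wolfram
(* A 10 x 10 grid graph as toy "space"; vertices are numbered 1..100 *)
g = GridGraph[{10, 10}];

(* The geodesic between two nodes is just the shortest path through the connections *)
path = FindShortestPath[g, 1, 100];

Length[path]              (* 19 nodes along the geodesic from corner to corner *)
GraphDistance[g, 1, 100]  (* 18 edges: the geodesic distance *)
```

In the actual models the graph is being rewritten as you measure, and it is the distribution of update events that bends these shortest paths.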
But that's essentially how gravity arises in these models: what would otherwise be straight lines, the shortest paths, are curved because of the presence of these update events, which change the structure of the hypergraph. Now, how that relates to, I mean, this is all kind of complicated, and it's about a solid hundred pages of mathy stuff to go through the full story. But roughly, the relationship, and again it's complicated because the causal graph is a spacetime causal graph, those events you can eventually think of as being at particular positions in space and time, although remember the whole thing is defined just by graphs, so there is no intrinsic set of coordinates: the coordinates are merely defined by the connections between things. And then, for example, space ends up being a slice through this causal graph. Yeah, this is kind of complicated; I don't think I can do justice to it in a few moments here. But roughly, even when you talk about particles, the notion of a particle is a complicated thing. In fact, something we are hoping to be able to do in the next year or two is to actually understand how particles work in our system. Amazingly, and somewhat surprisingly to me, we've been able to understand things about energy and quantum mechanics and quantum field theory and so on without being able to identify: this is the particular topological defect that corresponds to this kind of particle or that kind of particle. So that's a yet different level of stuff, discussing the effect of gravity on particles; it's a consequence of this general story of geodesics in the hypergraph, but it's a more complicated issue.
So, let's see, you can find tons of details on the web. There's a big book that I put out which collects at least the early documents about the theory of physics, and there's a technical introduction there that I hope is quite readable, that tries to go into a bunch of these things.

Yeah, thank you very much. I think that idea of the density of changes in a particular area being energy makes a lot of sense. So, I suppose my critical question was that it seems as if there are parts of that graph that update independently, and do those have physical representations that are independent? If they're truly independent, that means there's an event horizon, right? I see, so that's a completely different space in which, well, a completely different area in which gravity applies differently.

Right, so for example, inside the event horizon of a black hole, you can have, yeah, it's a quite separate thing. Okay, so in ordinary general relativity, you start off with a four-dimensional manifold that represents spacetime. And there is a limit to what you can do with a continuous manifold: you can't, for example, change its topology in any continuous way; without separately tearing the manifold from the outside, you can't change its topology. In our models, because there's this underlying structure that is discrete, you can have changes in topology, and you can have much more exotic kinds of structures than just black holes.
For example, one of the ones we're really interested to go looking for is dimension fluctuations in the universe. We think the universe is three-dimensional, space is three-dimensional, but our models suggest that it wasn't originally three-dimensional: probably in the very early universe space was infinite-dimensional, and it gradually cooled down to be roughly three-dimensional. There's a decent chance that there are dimension fluctuations left over from the Big Bang, and it may be possible to detect those dimension fluctuations in cosmology experiments. That's a thing of great interest to try to nail down. I mean, there are a bunch of totally weird effects, which one wouldn't expect from standard continuum general relativity, that our models suggest.

Thank you so much for your answer. I'll have to read more into this, because it's quite fascinating to see the universe that way; that would be the nice punchline, I suppose.

Well, yes, and if you really want to get sort of computational about it, here's a bizarre thing. These functions that are being applied, that are the events: what are the atoms of space? Those atoms of space are essentially free variables. They're like bound variables inside lambdas that are escaping. So in a sense, the whole structure of our universe is escaped bound variables. That's very bizarre, because what's happening is that it's creating new atoms of space, which correspond to new variables, each with its own, in effect, UUID.

So, yes. Sebastian, I suppose we would need to make room for other questions. If you're really interested in this stuff, I recommend: (a) we do a bunch of livestreams about these things,
and I do a bunch of Q&As in those livestreams; (b) we have a Summer School about our physics project, and actually a Winter School also about our physics project. If you really want to dive in deep, I recommend that.

I think it's in the Zoom chat if you look to the right there. So, that was a really awesome technical physics question. We are going to back it up, though, and do a lighter personal question. You probably have very many, but what is your favorite and strongest contrarian opinion at the moment: something that many people may believe to be true, but that you know, or think, is certainly wrong?

Oh boy, this is a bad time to ask that question. You know, as a science person watching a pandemic take place, I have many. I've been interested to see the relationship between the science that I know and the things that have happened, and I've been a little disappointed: I make science predictions, at least I think of them that way, about what's going to happen, and it turns out that's not what happens, either because the science goes another way, or because the politics and general societal pressures go in a different way. But I suppose the thing I am curious about right now, it's not really a contrarian opinion, but it's a thing. There's this whole model of physics, this whole idea of multiway graphs and this multicomputational, distributed updating and so on. It turns out that's probably applicable to the immune system. One of the things that's happened is that biology is an area that doesn't tend to have much in the way of theory: people don't believe in theories, they just say, let's do the experiment, let's do a clinical trial, let's see what happens. And, you know, there's a theory, we don't believe it. Now, sometimes they're right not to believe it, because biology tends to be just really complicated.
You know, it's like a big program that's been built up over the last three billion years, and it's a big mess, it's full of gunk, and it's hard to understand what's going to happen. Sometimes there are principles that are useful. One from the past is: when genetics was being developed, there were all these different effects in genetics, and then people realized there's this molecule, DNA, that just stores digital data. Once you understood that idea, it became very clear what was going on with a lot of questions in genetics. In the immune system, there's a lot that's just not known about how it works. There's actually an old model of the immune system that kind of got abandoned, and no new model arose, none of any sophistication at least. And I kind of suspect that this whole multicomputation process, all these update events and so on, is basically what's happening in the immune system, and that what lays out in physics as physical space is, in the immune system, shape space. You have these antigens and antibodies and so on, each one defined by a certain shape. And when we talk about branchial space in physics, you can think about that as laying out the space of possible shapes in the immune system. I suspect that, for example, immune memory is very much associated with this dynamic network of interactions between the different kinds of entities in the immune system. And this is something where, once one understands it, a bunch of things that go on in immunology will probably be really obvious, but they're not at all obvious right now; they just seem completely mysterious. And it's like, we don't know what's going on, we just have to do another experiment. So I suppose that's my, well, I don't know whether that counts.
But my main contrarian view is that it's worth doing theory in that area, because one might actually be able to come out with conclusions one can figure out, rather than just doing experiments and hoping for the best, so to speak.

All right, thank you. So I have another raised hand. James, do you want to take the microphone?

Thank you, Stephen, thanks so much, just for the next time I'm up at 3am thinking about all of these things. You mentioned a couple of points here that I'd just like to walk away with some clarification on. Specifically, you talked a lot about metamathematical theories, topology, lambda calculus, and multiway Turing machines, and these bring up questions ranging from chaos theory to Gödel's incompleteness and the halting problem. Do I have to worry about a sudden universal collapse as soon as the universe realizes that it's operating in a finite space and can no longer continue? Or are these just analogies to help us understand what's actually going on? Or are these actual concerns that affect physics?

That's the real thing. I mean, in the sense that, you know, the good news is, we're almost certainly not in a halting universe. If we were near the center of a black hole, we would have to worry about halting, because that's what a spacelike singularity is: the end of time, the ending of a computation. But it looks as if we are lucky enough, and in fact this is the generic case given this rule structure, to be in a universe that doesn't halt. So we don't have to worry. And actually, one of the more bizarre things: the universe is expanding in physical space; we don't know whether it will expand forever or whether it will eventually recollapse in physical space, but it almost certainly will continue to expand in branchial space and in rulial space. So even if physical space is compressed,
it does not mean that there aren't degrees of freedom in the universe that are continuing to expand. So no, you shouldn't be worrying that the universe is going to end in a global sense. Now, in terms of, well, let's see. There's a question: will mathematics ever end? The answer is no to that as well. What happens in mathematics is the formation of essentially mathematical black holes. What is the mathematical analog of a black hole? It's a decidable theory. So for example, propositional logic, Boolean algebra, is a decidable theory: any question you can ask in Boolean algebra, you can just go crunch, crunch, crunch and get to the answer in a known amount of time. That's not true in much fancier mathematical theories like Peano arithmetic, the axiomatic theory of arithmetic. That's what Gödel showed had undecidability in it: there are things where there can be proofs that are arbitrarily long, sort of proofs that don't terminate, in Peano arithmetic. That's not the case in Boolean algebra. So what I think happens in mathematics is that in different areas of mathematics, essentially, you can have too many proofs. Okay, so the density of proofs turns out to be the analog of energy in metamathematics: just as the density of updates in physical space is like energy, the density of proofs is like the density of those update processes. You can think about this multiway graph as being a sequence of applications of laws of inference, basically, in mathematics, and a proof is a path in that multiway graph which goes from one statement to another statement. Proving one statement from another corresponds to a path in this multiway graph.
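The multiway-graph picture just described, where a proof is a path between statements, can be sketched in a toy string-rewriting system (the rules and strings here are made up purely for illustration; this is not the actual metamathematics setup):

```wolfram
(* Toy multiway system: apply string-rewrite rules in every possible way.
   A path from one string to another then plays the role of a "proof". *)
rules = {"A" -> "AB", "B" -> "A"};

(* All one-step successors of a string s *)
step[s_] := Union @ Flatten @ Table[
    StringReplacePart[s, Last[r], pos],
    {r, rules}, {pos, StringPosition[s, First[r]]}];

step["AAB"]   (* {"AAA", "AABB", "ABAB"} *)

(* Build the multiway graph a few generations deep *)
edges[ss_] := Flatten @ Table[s -> t, {s, ss}, {t, step[s]}];
g = Graph @ Flatten @ NestList[edges @ Union @ #[[All, 2]] &, edges[{"A"}], 2];

(* A "proof" that "A" leads to "ABA": a path in the multiway graph *)
FindShortestPath[g, "A", "ABA"]   (* a path such as {"A", "AB", "AA", "ABA"} *)
```

The density of such paths through a region of the graph is what plays the role of energy in the metamathematical analogy.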
And then it turns out that when the density of proofs is very high, and I think this is new stuff, this is a few weeks old, so it's not yet fully settled, but it looks as if when the density of proofs is too high, there is an inevitable collapse, like the singularity theorems of general relativity, that leads to inevitable decidability. And once there's decidability, it means that an area of mathematics is finished. It's over, you've got everything; you no longer have sort of these infinite paths available. So in a sense the picture of the future of mathematics is very much like the picture of the future of physics: the future of our universe will have a bunch of black holes, where time has ended, and then other things will be happening. Similarly, the future of mathematics will have a bunch of burnt-out theories that have become decidable, but there will be other areas of mathematics that continue to expand. It's very weird that you can make those analogies, but my recent excitement has been realizing that there are these close analogies between mathematics and physics.

Thank you for this question. So I'm going with the next question, about the Wolfram Language; we go back from physics to the Wolfram Language. Can you expand on transformation rules on symbolic expressions and how you use them in the language, building on the example of function definitions, and why it is such a powerful concept?

I think the thing one has to understand about computational language, or programming languages for that matter, is that there are all these sorts of things that computers can in principle do, and then there are things that we humans think about, and the goal of language design is to make a bridge between the way we humans can think about things and the kinds of things computers can in principle do.
So one of the things that's important is to try to capture how we think about stuff. And this idea of: you've got a thing that looks like this, and you want to transform it into something that looks like that, turns out to be a very convenient way to think about things. Now, what can a symbolic expression represent? At the beginning we thought of it as representing programs, we thought of it as representing mathematical expressions. Then we realized it also represents graphics, it also represents user interfaces, it also represents running programs, it also represents, oh, just all kinds of different things. And so this one idea of symbolic expressions gets expanded to represent all these different kinds of constructs. And then it turns out that this notion of: I've got an expression that looks like this, I want to transform it into one that looks like that, is just a very powerful thing that maps very well onto something we humans are good at thinking about. Now, in terms of what a typical piece of Wolfram Language code looks like: take object-oriented programming, for example. What does it look like in Wolfram Language? Well, there isn't such a thing, because all you're doing is saying: I'm going to make an object, it's going to be called g or something, it's going to be a g-like object, where you just have the head g, and then you have its arguments, the payload, whatever that thing is. And then if you want to make a method for doing something with it, you just say f of g of x blank, or whatever the innards of the g are, colon-equals whatever. So you're taking this thing that is symbolically tagged, in a sense, with the head g, and now you're saying what to do with it. And that's just something you can do directly in terms of a transformation rule on a symbolic expression.
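A minimal sketch of the object-like style just described, using hypothetical circle and rectangle "objects" whose "methods" are ordinary transformation-rule definitions:

```wolfram
(* A "g-like object" is just an expression with a head carrying its payload;
   "methods" are ordinary definitions that pattern-match on that head. *)
area[circle[r_]] := Pi r^2;
area[rectangle[w_, h_]] := w h;

area[circle[2]]          (* 4 Pi *)
area[rectangle[3, 5]]    (* 15 *)
```

There is no class declaration anywhere: dispatch on the head `circle` or `rectangle` happens through the same rule matching that drives everything else in the language.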
You don't have to introduce a meta level of talking about objects and so on. Again, in terms of types, the language doesn't have any types, or rather it has one type, which is the symbolic expression. Now, that doesn't mean that internally, you know, the actual hardware of computers is very much based on things that have definite types. So there's lots of effort internally to convert what is specified in terms of symbolic expressions at the top level into things that run optimally on an actual computer as computers exist today. And actually right now we're in the middle of a giant project to make a much fuller compiler for our language; that's a giant exercise in theoretical type theory and so on. But the main point is that, when you look at the design of the Wolfram Language, one of the things I've worked hard on is to maintain a coherent design across the building of 7,000 different functions, lots of different domains, and so on. And it just turns out that once you have this idea of symbolic expressions and transformation rules on symbolic expressions, a huge number of things fall into place, whatever you're doing: whether it's computational geometry, geo computation, things with cloud processes, and so on. It's just a very convenient thing. I'm not sure I have a great other meta thing to say about it. I will say one thing about this whole idea that it's possible to do computation by having symbolic expressions that you keep iterating until you reach a fixed point. I'm in a sense glad that I didn't know the things that I know now about physics and term rewriting and so on.
That is, I'm glad I didn't know those things 40 years ago when I started inventing this stuff, because had I known them I might have been very scared. The fact is, it's not obvious. For example, the universe, as it runs in physics, is a consequence of a non-terminating term rewriting system, yet our language is based on the idea that you're going to get answers, that things are going to terminate. So that's a strange correspondence, and the fact that it is practical to just do term rewriting and go to fixed points is a non-trivial empirical fact about our language. Realize that if you type x = x + 1 into the Wolfram Language, where x has not been given a value, what is it going to do? It's going to go into an infinite loop. Well, it won't actually be an infinite loop, because the language has guard rails and so on. But that is something where you might have thought that x = x + 1 would just blow up the whole language, and it doesn't. In a sense these are empirical facts about the way humans think about computation: things like that end up not getting in the way. There's probably a lot more to say about this, but that's about as much as I can come up with immediately.

Thank you, that was a wonderful response. So next we're going to go back into the more computer science realm, with a question from Thomas, aka Bobby Towers, on Discord. A really fascinating thing I heard you say about computer scientists was that they tend to have an aversion to heuristics, and that a big surprise you experienced with Wolfram|Alpha was that you found heuristics to play such a large part in interpreting natural language. Let me know precisely how badly I've misquoted you. And also, he's curious how, from your perspective, the rapid shift to machine learning has changed the landscape of computing: are classical algorithms going to become obsolete soon?
Okay, so a couple of things to say. No, your quote is actually fairly accurate about heuristics. For many years, in designing the Wolfram Language, I always wanted to make sure everything is very precise: it's all very well defined, you kind of know what to expect. Then we started building Wolfram|Alpha, where we wanted to have just pure natural language: whatever somebody says, we've got to do the sensible thing with it. And what we realized is that you can't do that in a precise way. Human natural language doesn't work in a precise way; it's full of hacks, it's full of weird historical coincidences. And the thing I learned, which surprised me, was that heuristics kind of have a logic of their own: once you have a giant boatload of heuristics, you start to understand how heuristics interact with each other. And it's a different kind of thing from what you expect with precise, axiomatic programming language construction. It's a very scary thing. When you're doing, say, unit tests for natural language understanding, you might have a test that says: you've got "49 cents", what does that mean? It's some amount of money. "50 cents"? Oops, no, that's the name of some rapper somewhere. You think it's a modular thing and you're going to test "25 cents" as a separate test. Well, that's fine until somebody comes along and says, you know, now I'm a famous rapper and I've got that name. It's just bizarrely non-modular and messy in that sense. Now, in terms of machine learning.
It's sort of interesting. I think what we've seen is an evolution of machine learning that's very similar to the evolution of things like linear algebra. There was a time when computational linear algebra came into existence and allowed computer graphics to develop, et cetera. There was a period, the 1970s or so, when it looked like everything was going to be solved with linear algebra. Machine learning is, again, a very useful methodology, very convenient for many things, but it will not be the full story. Let me give you an example. You might say: why do we need programmers? Why not just tell the computer what you want in natural language and have it do everything? I had an interesting experience of understanding that process. I was writing a book about Wolfram Language, and the exercises in the book consist of saying: here's a statement in English, now write a Wolfram Language program that does this. At the beginning of the book, when the programs are really simple, that worked just fine: I could write an English sentence and people could tell what it meant. By the end of the book, the sentences I'd have to write to describe a particular program were bizarre. They sounded like pieces of legalese from patent applications; they were full of complicated hair just to describe the operations that corresponded to a program. And I realized: that's why we built this computational language, that's why one would have a programming language at all, because it is a succinct way to describe these kinds of computational operations, much better than the kind of vague thing natural language gives us beyond short utterances.
You can do that with Wolfram|Alpha; you can even do it inside Wolfram Language: you type a short piece of natural language and it will convert it into actual Wolfram Language code. But for anything longer than a short utterance, the tower just doesn't have strong enough foundations, and it will topple over; there's too much uncertainty in how the language is interpreted. Now, about machine learning and algorithms: a lot of what is in Wolfram Language is meta-algorithms. Let's say you're solving some partial differential equation; there might be many different methods for solving that equation. A big part of what we end up doing is trying to automate the solving of that equation by having, essentially, a meta-algorithm which picks between those methods. For these kinds of meta-algorithms we've long used essentially machine learning methods, and that's a very useful thing: if you take the wrong branch in the meta-algorithm, it's unfortunate, but it's not disastrous. When it comes to the underlying algorithm, though, that's not something for which you're likely to be able to use the fuzzy machine learning kind of approach. So what we find is that there are particular applications where we've long used, and increasingly use, machine learning, and there are other places that are just hard algorithms, where I don't expect machine learning will be particularly important or relevant. It's like simplifying mathematical expressions: that's a place where there are hard, precise transformations you can make, and if you fuzz those out you'll just get the wrong answer; but deciding which transformation to make is something you can potentially do in a machine learning kind of way.
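The meta-algorithm idea can be sketched like this: a few exact methods plus a heuristic selection rule, so a wrong selection costs only performance, never correctness. A minimal Python sketch with an invented feature rule (a real system might learn the rule from timing data):

```python
# A toy "meta-algorithm": two exact sorting methods plus a heuristic
# rule for choosing between them. The methods are precise; only the
# choice is heuristic, so a mis-prediction costs time, not correctness.
# The size threshold below is invented for illustration.

def sort_insertion(xs):
    # Exact method A: cheap on very small inputs.
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

def sort_builtin(xs):
    # Exact method B: the general-purpose workhorse.
    return sorted(xs)

def meta_sort(xs):
    """Pick a method from a feature of the input. Every branch returns
    the same correct answer, which is what makes heuristic selection
    safe at this level."""
    if len(xs) < 16:
        return sort_insertion(xs)
    return sort_builtin(xs)
```

The same shape scales up: swap sorting for PDE solvers and the size check for a learned classifier over problem features, and you have the structure being described.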
The thing for us is putting functions in our language like Classify, Predict, FeatureSpacePlot and so on, which underneath are using machine learning, but which are elements of the language and can be called in other places. That seems to be a pretty powerful thing to do.

We have somebody with their hand raised here: Jacobo Cordova of Flexiana has a question. I just want to remind the audience that we are prioritizing people with their hands raised, because we want to hear from you all and have you ask your questions directly. So let's hear from Jacobo.

Thank you so much, Jordan. And nice to meet you, Stephen, and congratulations; this is very exciting. I have a question about the atoms of space, which are fixed in some place we don't know, and which you are recreating through a complexity rule. Are you trying to find the rule of complexity of the universe? And are you using constants like Planck's constant, the mass of the electron, and the speed of light, trying to figure out, say, one atom of hydrogen? Is that the way you're trying to figure it out, and how can I play with this on a computer?

Okay, so the first point is that there are two basic approaches one might take to finding a fundamental theory of physics. One is to take physics as we know it and try to reverse-engineer what could be underneath it. The other approach is to say: let's start from a very simple model, see what consequences models of this type have, and build up from those very simple models to see whether one gets something one can recognize as being like physics. We're doing the second of those more than the first.
When you ask about the speed of light, Planck's constant, the mass of the electron, and things like that: the speed of light is just a scaling factor; it's just the definition of the meter. The only fundamental thing in our theory is the elementary time. There's an elementary time, and the translation between the elementary time and distance in space is basically just the definition of the meter, which is defined via the speed of light. Similarly, the definition of energy comes from Planck's constant, so Planck's constant is again just a scaling factor associated with our human way of parsing the universe, so to speak. Now, something like the mass of the electron is, in principle, derivable from our models. We haven't derived it yet, but it is in principle derivable. And what I think is going to happen is pretty tricky, because you might think the electron has a definite mass, 0.511 MeV or whatever it is, but even in existing particle physics we know that isn't true. In existing particle physics the mass of the electron depends on, essentially, the energy scale at which you look at it; the electron has what's called a running mass. The mass usually quoted corresponds to essentially zero energy: looking at an electron with zero energy, in a zero-energy kind of way. As you change the energy scale, the effective mass of the electron changes. So similarly, in our models it's a rather complicated thing: what the mass of the electron will be depends on one's model of the observer of the universe. And that's a tricky, complicated thing, in which one realizes that in addition to modeling the universe, one has to have at least an approximate model of the observer to be able to draw conclusions like that.
Okay, so our models of physics depend greatly on a bunch of intuition that has come, in my experience, from doing lots of computer experiments. One might have assumed that a simple program would only do simple things. That is profoundly not true, and the experience of realizing that is connected to these whole ideas about how physics might work. Now, when it comes to finding a mechanical identification, saying you have this thing and can think about it in a very mechanical way, that can be quite dangerous, because the things we're familiar with at our size scale are really different from the things that might exist at 10 to the minus 100 meters. So it's a little complicated. In my efforts to explain what's going on, I try to use analogies with things at size scales we know, and those analogies are decently accurate, but the true story has to come from the underlying computational processes, and from a bunch of mathematical physics connecting those to the things we know about physics. So it's a slightly more complicated chain of reasoning than I'm probably doing justice to here.

Thank you so much. Thank you, Jacobo. I think we have a queue going, and Robert has his hand raised.

Thank you very much; this is really stimulating. You mentioned that, based on the models, there should be anomalies in the universe that are observable, and I'm wondering if there are any proposed experiments, like all the experiments that confirmed Einstein's theories. Is there, say, an XPRIZE for proving this?

Not yet.
I mean, the thing is, there are a lot of experimental physicists who come to us and say, we'd really like to do an experiment on this. And we say, we'd really like to tell you exactly what to look for, because we don't want you to fly a spacecraft and go look for something when the calculation hasn't been done correctly yet. The difficulty with these kinds of things is that we know there will be dimension fluctuations, but what exactly will their effects be? What happens to a photon propagating through a dimension fluctuation? We don't really know; that's a piece of essentially difficult classical electrodynamics that hasn't been done yet. In other words, I think we're still a few years away from knowing exactly what experiments are worth doing. There are several different classes of experiments. One is essentially making gravitational microscopes: what we want is a gravitational microscope powerful enough to see the underlying structure of space, in other words to see below the continuum structure of space. The best candidate for where that could happen is a black hole spinning at close to its critical rate. For black holes as they're observed, there's a limit to the rotation rate, and that limiting rotation rate has observable consequences. Right at that limit, we think that essentially the structure of space is held together by a small number of causal edges, and that if the black hole were spinning any faster, a piece of space would break off. That's kind of why the black hole doesn't spin any faster. But right where the black hole is spinning at the maximum rate, we expect there to be a small number of causal edges holding that piece of the universe together.
And it is possible that there will be measurements from gravitational waves and other things in which you would see those effects, but actually calculating what exactly you would see, and how exactly to detect it, is a lot of work that hasn't been done yet. There's also a bunch of experiments we think might be possible around people's attempts to make quantum computers. In our models, the ultimate quantum advantage of quantum computers probably isn't really there, but there's still a lot of value in making computers out of physical things that aren't just semiconductors. It's my suspicion, at least, that there are effects in essentially many-body quantum mechanics, in the kinds of systems people use to make quantum computers, where we will see the effect of what we call the maximum entanglement speed. In physical space there's the speed of light, the maximum speed at which influences can propagate; in branchial space there is also a maximum speed, which we call the maximum entanglement speed. We don't know its value; we just know there has to be one. It's essentially the maximum rate at which quantum states can affect each other. And it might be possible in one of these quantum computing setups to observe that maximum speed. The rough estimate we have of it comes in weird units: it's about 10 to the 5 solar masses per second, which sounds very big. But the good news is that these systems operate on short time scales and have a large number of atoms in them, so it might be possible to reach that scale. That scale would also be reached in mergers of very large black holes, say if black holes the size of the one at the center of our galaxy merged.
Then, if we're right about that scale, there would be an effect: the way the black hole merger happens would be limited not only by the speed of light but by this maximum entanglement speed. The bad news is that, by one estimate, black holes the size of the one at the center of our galaxy have merged maybe half a dozen times in the history of the universe. It's not an experiment that's easy to do. So there are issues like that that come up, but yes, we'd love to have a more complete picture of what actual experiments can be done. I'd say there's quite a lot of enthusiasm from people who could go and do the experiments; we just need to know precisely what experiments they should do.

Good question. Well, we have a lot of enthusiasm. We are getting close to time; we could talk to you all night, Stephen, but in about 20 more minutes I think we're going to wind it down.

Right. Okay, that will work for me too.

Perfect. With that said, I know that Edward Hughes had asked a question in the Discord in regard to a speaker we have tomorrow, and I see that Edward has his hand raised now, so I will let him ask the question directly.

Thank you. Stephen, thanks for your time today. I was wondering if you had any familiarity with the work that Gerald Sussman has done with Structure and Interpretation of Classical Mechanics, which seems similar in its approach of using a computer to feel our way around the possible worlds we might be in, and what rules actually run them.

That wouldn't be quite my interpretation of Jerry Sussman's work there. I think his primary interest has been in things like predicting the behavior of the solar system over long periods of time.
That's a problem of solving gravitational n-body problems, and what's interesting about gravitational n-body problems is that they're hard to solve. When Newton was originally working on these things, he tried to solve the problem of the motion of the moon, a famous episode in the history of science. Newton had this whole theory of gravity, and he has this big chapter where he tries to predict the motion of the moon, and he gets the answer wrong by a factor of two. He ends the chapter by just saying it's wrong by a factor of two. Now, some people might have said, oh, that means the theory must be wrong. In fact, what happened is that the calculation was really hard; he didn't manage to do it at the time, and it took another 150 years for it to be done decently accurately. Newton in fact already understood this. He said something like: to consider the motion of many planets orbiting according to mutual gravity exceeds, if I'm not mistaken, the force of any human mind. So he was glimpsing the beginnings of this phenomenon of computational irreducibility that I've studied a lot: the idea that even though you know the rules by which a system operates, actually knowing what will happen can be very hard. What Jerry Sussman, in his Digital Orrery efforts with Jack Wisdom and so on, ran into was exactly this question: even though you know the rules, working out the consequences for the solar system, say how many planets might have been ejected in its history, is very hard. The solar system as it exists today is presumably the result of a certain degree of natural selection among planets; we don't know how many there were originally, and it's hard to tell. All of that is a sign of this phenomenon of computational irreducibility.
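The irreducibility Newton ran into can be felt even in a toy n-body integrator: the rule (Newtonian gravity) is known exactly, yet the only way to find out what happens is to step the system forward, and nearby starting states drift apart. A sketch with arbitrary toy units; a small softening term is added to avoid the point-mass singularity, which is a standard numerical trick rather than part of the physics being discussed:

```python
# Toy planar three-body integrator (velocity Verlet), G = 1.

def accelerations(pos, masses, soft=1e-4):
    """Pairwise Newtonian gravity in the plane, with softening."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + soft) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=1e-3, steps=3000):
    """Step the system forward; there is no shortcut formula to jump
    straight to the final state, which is the irreducibility point."""
    pos = [list(p) for p in pos]
    vel = [list(v) for v in vel]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(pos)):
            pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
            pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
        new_acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
            vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt
        acc = new_acc
    return pos
```

Running two copies whose initial positions differ by one part in a million and comparing the final states shows the trajectories parting company: knowing the rule exactly still doesn't tell you the outcome without running it.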
So, for example, even for the three-body problem, the earth-moon-sun mutual gravity problem, it's likely that you can make a universal computer out of just that three-body gravitational system. Nobody has figured out quite how to do it, but it's likely to be the case. In terms of the relationship to what we're doing: there is one nasty extra complication in the Jerry Sussman kind of way of doing celestial mechanics, which is that the traditional approach involves calculus and continuous numbers. It involves saying the position of the earth is exactly this, to a thousand decimal places or a million decimal places or whatever. There's no computationally finite description of what the position of the earth is; it's an infinite sequence of digits. And that idea, that you can have this infinite precision, takes it out of the realm of what one can think about in the computational way I've been describing. People in physics had thought in the continuous way for a very long time, and I think it was probably in the 1980s that I finally started convincing people it might be worth thinking about not having physics be based on calculus and real numbers. Because when physics is based on calculus and real numbers, computation, the theory of computation, Turing machines, all these things about computability, don't really apply: those are really creatures of discrete, integer-like operations. There's this split between the continuous mathematics version and the discrete mathematics kind of approach.

So we have another couple of questions. The next one is from Christopher Small, who is up in the queue.

Hi, Stephen. Thanks for your talk; it's really cool to see.
Yeah, the whole presentation; thanks for making that stuff work.

Yeah, it was definitely a pleasure getting to chat with you and work on the project, and I'm looking forward to carrying it on. The question I have is actually a physics question, though. First, a point of clarification: when you were describing entanglement in these multigraphs or hypergraphs, are we talking about quantum mechanical entanglement there?

Yes.

And as a follow-up to that, then: I'm trying to wrap my head around how this helps us think about quantum collapse, in particular as it relates to quantum entanglement. Does this give you a way of making a bit more sense of that? Could you describe it?

Yeah, so basically the point is this: in this multiway graph there are many different histories. The universe is following many histories; those histories are branching, they're merging; there are lots of different histories. Now, the surprise is that you and I think that definite things happen in the world. How can that be, if the universe is following all these different histories? Well, here's where it gets really funky: we are embedded in this universe, so our brains are also doing that kind of branching and merging of histories. At any moment, our brains are part of that branching and merging process. So in a sense, the story of quantum mechanics is the story of how a branching brain perceives a branching universe. You'd think: there's this branching universe, how come we're not seeing all these different possibilities? Well, it's because our brains are branching as well. And the key thing is that we have the idea that definite things happen.
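The branching and merging of histories can be sketched as a toy multiway system over strings. The rewrite rules here are arbitrary examples, far simpler than the hypergraph rewrites of the actual physics models, but they show states branching and then merging again when different update orders reach the same state:

```python
# A toy multiway system: string rewrite rules applied at every possible
# position, so a state branches into several successors, and different
# branches can merge when they reach the same string.

def successors(state, rules):
    """All states reachable by one rewrite event anywhere in the string."""
    out = set()
    for lhs, rhs in rules:
        start = state.find(lhs)
        while start != -1:
            out.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return out

def multiway(initial, rules, steps):
    """Successive generations of states in the multiway evolution."""
    generations = [{initial}]
    for _ in range(steps):
        nxt = set()
        for s in generations[-1]:
            nxt |= successors(s, rules)
        generations.append(nxt)
    return generations
```

With the rules A -> AB and B -> A starting from "AB", the first generation is {"ABB", "AA"}, and in the next generation both branches produce "ABA" and "AAB". That re-merging is what makes the evolution a graph rather than a tree.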
And the next question is: is it consistent to think that definite things happen? We might say: in our brains there are these multiple branches going on, and we conflate those branches. In a technical sense, we add a critical pair lemma to our theorem prover; we're saying there are actually two different branches, but we're going to conflate them. And we might say, great, we can do that, but it might have been the case that that just doesn't work, that having conflated those branches there's then an inconsistency later on. It turns out the phenomenon of causal invariance guarantees that will not be the case. In other words, it guarantees that eventually the universe will arrange itself so that it will have been consistent for us to assume those different branches of history were conflated.

A way to think about it a little more: in physical space, we are very big compared to the atoms of space. Let's say the elementary length is 10 to the minus 100 meters; we are huge compared to that. On the scale between the size of the universe, around 10 to the 26 meters, and the elementary length, we are definitely at the big end of things in the universe. Now, that's physical space. The next question is: how big are we in branchial space? In other words, how many separate quantum histories are we encompassing at a given time? I think what's happening is that, just as we are effectively sampling very large numbers of atoms of space because we're so big compared to them, we're also sampling many different threads of history, so to speak. And essentially the reason we perceive classical mechanics to be the way it is, is that we are averaging over many threads of history. It's actually a rare thing, just like if you're looking at a gas.
It's a rare thing for a single molecule to really matter. You have to go to something like hypersonic flow, or Brownian motion, to find a place where a gas is not just a continuous fluid. That's because we're pretty big compared to the molecules in a gas. Quantum mechanics is the same type of story: because we're big, because we encompass many threads of history, it is difficult for us to see the quantum effects associated with individual threads of history.

Now, if you really want something funky, a very new thing, about a week or two old, is the analog of quantum mechanics in mathematics. One feature of mathematics is that there may be many proofs of the same theorem. Those proofs correspond to different paths in what is essentially a multiway graph. So, essentially, quantum effects in mathematics are the existence of different proofs of the same thing. Most of the time those proofs are probably continuously deformable into each other, so there's no inconsistency between two proofs that both lead to the same thing. But it may be that there's some nontrivial homotopy in the structure of mathematical proof space, which means you can get two genuinely different proofs; you'll essentially get quantum effects in proof space in mathematics. I don't yet know what the interpretation of that is in mathematics; that's a coming attraction that still has to be figured out.

Let me know when you figure it out, because I'll be ready to work on it. Thanks so much.

Thanks so much; appreciate it. Okay, and we have Ella here, who was a speaker today and also has her hand up.

Hi, Stephen. Earlier in your talk you mentioned this idea of implementing something like rule 30 using molecules, as a new kind of substrate for computation, and I was hoping to hear you expand on that.
In particular, I'm interested in knowing: would you use one of the elementary cellular automata, like rule 30 or rule 110, or is there some other class of cellular automata that would be more suitable?

Okay, so the first question: the answer is that it's unlikely to be simple cellular automata that are the best things to implement as chemical computation. The way I'm thinking about chemical computation is this. When you do chemical synthesis, you're also thinking about things like multiway graphs. What does that mean? You have some chemical; you can do oxidation, you can do reduction, you can do this, you can do that. Those are essentially events that can happen to a molecule; you're taking different chemical actions on a molecule. So when you think about how to get from one molecule to another, you say: there's a series of actions you can take on the molecule to get from one thing to another. In general, you'll build a multiway graph of all possible chemical processes that can happen to a molecule, and chemical synthesis is like logic programming: you're basically trying to find a particular path through that multiway graph of possible moves, down to a molecule. In typical chemical synthesis you're finding the best path, the shortest path, whatever, through this multiway graph. So the traditional idea of chemistry can be thought of computationally as this question of finding a path through a multiway graph.

Okay, we can go a little more extreme than that. When we talk about chemical synthesis like that, we're saying: I can change a particular chemical species into another chemical species by doing some reaction. But what if you look at the individual molecules, and you say: this particular molecule can interact in this particular way with this particular other molecule?
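The synthesis-as-pathfinding picture above can be sketched directly: species are nodes, reactions are edges, and planning a synthesis is a shortest-path search through the multiway graph. The reaction network below is entirely invented for illustration; it is not real synthesis data:

```python
# Chemical synthesis as pathfinding through a multiway graph of
# (hypothetical) species and reactions.
from collections import deque

REACTIONS = {  # species -> [(reaction name, product), ...]
    "ethanol":      [("oxidation", "acetaldehyde")],
    "acetaldehyde": [("oxidation", "acetic acid"), ("reduction", "ethanol")],
    "acetic acid":  [("esterification", "ethyl acetate")],
}

def synthesis_route(start, target):
    """Breadth-first search: the shortest reaction sequence from start
    to target, or None if the target is unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        species, route = queue.popleft()
        if species == target:
            return route
        for reaction, product in REACTIONS.get(species, []):
            if product not in seen:
                seen.add(product)
                queue.append((product, route + [reaction]))
    return None
```

This is the "traditional" level of the picture: each node is a whole species. The more extreme view described next tracks individual molecules, so the graph's nodes multiply enormously.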
You can also build up a multiway graph of the fates of individual molecules. If you take that multiway graph and turn it into just a chemical synthesis thing, what you're doing is like probabilistic programming: you've got all these branches of all these things happening to all these molecules, and all you care about is the overall probability of getting a particular kind of molecule. But in fact there's an underlying structure involving the specific actions that happen between one molecule and another, and you can get complicated dynamic processes happening at the level of individual molecules. So my current guess is that the best way to think about chemical computation is a bit like thinking about all the paths in a nondeterministic Turing machine, or about this multiway graph of all possible paths of interactions, with those paths actually actualized by molecules. When you just say, I only want the chemical concentration, you're squashing out a lot of the potential computational information that's there. So the challenge, which I don't yet know how to meet, is: can you take this dynamic network of how the molecules are behaving and detect features of it, not just how much of this molecule there is, but details of the dynamic network? That's where I think it will be best to encode computations. Now, in terms of what the right raw materials are to do that with: I was actually playing around with combinators, which are essentially tree-like expressions where you have moves that change the structure of the tree, as at least a conceptual model for how you might make transformations on molecules.
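The combinator idea can be made concrete: terms are binary trees (here, nested tuples), and the classic S and K rules rewrite the tree. This is just the standard SK calculus in Python, as a stand-in for the molecule-as-tree picture; it is not a chemical model:

```python
# The classic S and K combinator rules as tree rewriting.
# Terms are strings (leaves) or 2-tuples (applications):
#   ((K, x), y)      -> x
#   (((S, x), y), z) -> ((x, z), (y, z))

def step(t):
    """One leftmost-outermost reduction; returns (new term, changed?)."""
    if not isinstance(t, tuple):
        return t, False
    f, a = t
    if isinstance(f, tuple) and f[0] == "K":          # K rule
        return f[1], True
    if (isinstance(f, tuple) and isinstance(f[0], tuple)
            and f[0][0] == "S"):                      # S rule
        x, y, z = f[0][1], f[1], a
        return ((x, z), (y, z)), True
    nf, changed = step(f)                             # otherwise recurse
    if changed:
        return (nf, a), True
    na, changed = step(a)
    return (f, na), changed

def reduce_term(t, max_steps=100):
    """Apply step until a fixed point, with a guard against
    non-terminating combinator expressions."""
    for _ in range(max_steps):
        t, changed = step(t)
        if not changed:
            return t
    return t
```

For example, `(((S, K), K), a)` reduces to `a`, the SKK-as-identity classic. A molecular implementation would need physical moves with this kind of local, tree-shaped rewriting.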
One of the bizarre things I was thinking about around the centenary of combinators last year was what would have happened if people had really understood combinators back in 1920 and had decided to build everything in computing on top of combinators rather than on top of Boolean algebra and so on. What might have happened is that people might have made molecular computers, with these multiway graphs of combinators and all of that, and we would have a very different intuition about how computation works. But the practicalities of how to implement these things in molecules: I'm really interested in figuring that out, and I don't know how to do it yet. One thing that might help a lot: somebody was asking earlier about enterprise deployments of Wolfram Language, and one interesting one is a company called Emerald Cloud Lab, which is an automated biology and chemistry lab in the cloud, all based on Wolfram Language. You basically feed in a Wolfram Language program to specify how various chemistry experiments should get done, and you get a result sent back through the cloud. So my hope, my test case, is: can I write a piece of Wolfram Language code, which we'd run in the Emerald Cloud Lab on actual molecules in actual test tubes, that will compute the primes with molecules? That's my test case for whether I know how to do molecular computing: can I get something where you get stripes on some electrophoretic gel, and they're in the positions of the primes? There are things one has to figure out to get to that point, but I don't think it's completely out of range.
And I think what's needed is a different understanding of how molecular computing works than the idea we have right now of a single thread of computation; I think it's this multiway graph idea, with all these different reactions happening. Fundamentally it's distributed computation, distributed in the sense that there are many different molecules all reacting in parallel, so to speak. That's my guess. To fill it out a little more: the next challenge is, what is the chemical observer like? One way of observing what happens with molecules is just to find the concentration of a particular species. Another possibility, which might happen in biology, is that you've got some membrane, and a bunch of molecules are collecting on the membrane in some pattern, and then something opens a pore in the membrane, or something like that. That's not asking the generic question of how many of this molecule there are; it's asking a much more detailed question about the pattern of molecules and what they do. So the challenge is to figure out how you can make a measuring device that measures these detailed, correlated properties of molecules, different from just measuring the overall concentration. That's at least the beginning of thinking about this. I suspect that this multiway computation idea is a key idea for thinking about chemical computation, and that if one can untangle how to think about doing computations that way, it will give one the right intuition to figure out how to do molecular-scale computing.
But one of the things is that we humans are used to definite threads of time; we're used to definite things happening progressively in time, whereas this multicomputation, multiway thing is all about multiple threads of history, and that's just not something we humans are very good at. We tend to want to make a reference frame, make a foliation, where we can break things down to say: this happens, then this happens, then this happens. And I think this is a place where Clojure and programming languages and thinking about distributed computing are important, because as we use these things more and more, we will gradually get more intuition about how to think about things that don't just happen sequentially. But I feel I'm not there yet. It really helps to see the correspondence with physics, because physics has essentially addressed these questions in a somewhat different way, and we're now able to import the ideas from physics into distributed computing to understand it better.

What an amazing way of finishing this discussion, and what an amazing keynote, bringing back memories of when we'd finish a conference day and head to the pub to continue the discussion. We hope you'll be with us physically maybe next time; the UK has that kind of pub and that kind of atmosphere. So with this last answer from Stephen, we are going to thank him for his availability, the energy he puts into everything he does, and the inspiration he brought to the conference. Thanks, Stephen.

Thanks very much. Thank you so much. Thanks. Okay. Bye-bye.