Okay. It's the One O'Clock Block. I'm Jay Fidel. It's Think Tech. And it's Research in Manoa. We like research. We like Manoa. And we like Bjørn Kjos-Hanssen. Hanssen? Hanssen. Yeah, Hanssen. Kjos-Hanssen, who is a professor of mathematics in the mathematics department at UH Manoa. And we caught him. And we got him into the studio. We're going to ask him all about mathematics. Welcome to the show, Bjørn. Thanks for having me on the show today. Great. So, how do you get to be a mathematician? Is it something, you know, am I better at being a mathematician if I'm left-brain or right-brain? If I know music and I have a, you know, a propensity for numbers and all that, am I a better mathematician then? Well, music does have a mathematical structure to it. So I think, I'm not sure why, but it does seem like there is some connection. Like, it makes your brain think a little bit mathematically. But even just listening to music, you know, your brain is processing that mathematical structure that's already there. Yeah. Well, you know, we hear so much about, you know, pushing the envelope on artificial intelligence, computer programs we never heard of before. And, you know, so many really fabulous tech things happen. And I went to a conference last year. We actually took video of it at UH Manoa, where they brought in mathematicians. You must have been there or know about it, yeah? Brought in mathematicians. And it was all about predictive analysis. And that was one of the things. Oh, yeah. That was my conference. That was your conference. Well, you know plenty about that one. That was very interesting. And when each speaker got up, I could understand that speaker for the first 30 seconds. Oh, sorry to hear that. And after that, I was lost. But, you know, that's what I wanted to explore with you today. What is it about mathematics that makes it, you know, so difficult for the ordinary human being to understand? So are you good at music? I do like to play music.
Like, I like to, you know, listen to songs on the radio and then try to play them. Yeah. When I get home. Yeah, can you? To some extent. Yeah, okay. So tell us where you went to school to learn mathematics. Yeah, so I'm from Norway. And I got a bachelor's degree there, and a master's. And I went to Berkeley in California for a PhD. So that was a big, you know, a big move. That's a commitment. Yeah. I think you did postgraduate work in Germany, at Heidelberg. Yeah, that's right. And I was also in Connecticut and at Cornell University, and finally ended up here in Hawaii. Well, I feel you're a star. Yeah. Well, I'm trying my best. Yeah. So what is, how can I tell, I mean, from the production end of things, how can I tell that somebody is a star in mathematics? What do you have to do to demonstrate to the world that you're in the upper crust of mathematicians? Well, among mathematicians, you know, we have our research papers. And then we try to solve questions that other mathematicians have posed. And then if we think we have a solution, we submit it to a journal. And then there are the prestigious journals and the more middle-tier journals. And then it gets, you know, evaluated by other mathematicians. They try to understand what you've done. And if they agree, you know, then... You're a star. Yeah. Then you made one more step in the right direction. So, I mean, you know, in my occupational history, whenever I've taken a job, you know, I find that my way of thinking changes. If I, for example, if I'm operating a cash register in a food store, I find my mind goes graphical on me. I see the numbers in rows and columns, like on the cash register. So does studying mathematics change the way you think? That's a good question. I haven't taken that many breaks from studying mathematics. So, in a way, maybe to really answer this question, you would want somebody who, you know, does something else for a while and then comes back to it. Right.
But it is, yeah, it can be an intense experience to try to figure out some question. You know, sometimes it's like you get more and more frustrated with yourself until the moment when you figure it out. So it can be... Very gratifying. Yeah, then it's gratifying, right. But then sometimes you think you've figured it out and then the next day you find a mistake and you're back to square one. A mistake in what, though? Is it just the kind of thing where you have a blackboard that's as big as the whole classroom and you take a piece of chalk out and you work the chalk down to a stub, writing a very long mathematical formula? Is that what it's like? That can be part... That can definitely be part of it. It's not only, you know, about formulas, because there can be mathematical structure in all kinds of things. You know, in the operation of computers, or, you know, you have all these applications in physics to the motion of different bodies. You know, Bjørn, we need an example. Okay, we need an example. Maybe your PhD or your postgraduate work is a good example. But here I am thinking that my calculator, which adds and subtracts and divides. Maybe it does square roots too. That's mathematics. But you're not talking about that. You're talking about something else. You're talking about what? Theoretical mathematics. Mathematics that solves, you know, very interesting complex problems. What kind of problem did you solve in your PhD? Yeah, so for this, actually, it goes back to this whole idea of a computer, because particular computers, in a way, can be seen as examples of an abstract mathematical concept. And historically, I think this is actually how computers came into being: there was this mathematical concept of what was called a Turing machine, which is some... Turing. Turing. It was just named after the guy who came up with it, Alan Turing. Okay. Which is a mathematical model of how a mechanical machine, or perhaps electronic, could solve problems.
And what Turing showed is that you don't have to make one machine for each task: one machine that will wash your clothes and one machine that will be a telephone. You can have a machine that accepts tasks as its input. And then the machine reads the input. It reads about the task and then starts doing the task. And this was really a breakthrough. This was around 1936. And then it led people to think that, you know, maybe we can build these things, because Turing's mathematical model seemed like the kind of thing you could build. And that's what we have now, right? Every time you install another app on your phone, you're giving another input to this universal machine that sort of can do anything, as long as it just involves processing the... processing bits of zeros and ones, and maybe using whatever devices are hooked up to the machine, like a screen or a mouse. I think you're saying that whatever you do on the app or the machine that seems to be in language is actually in bits and bytes. And it's all, at the end of the day, the machine is working at the assembly language level. It's working on numbers only. That's all it really does, yeah? Right. At the level of just bits. But historically, this is very interesting. I actually wrote a book review of a book about Alan Turing recently, so I got a chance to go back and look at the history of this. So I think actually what happened was that Alan Turing was a young guy and his advisor asked him, you know, maybe you can make something like a model of computation, a mathematical definition of what that would mean. So it's sort of like a function. Maybe, you know, if you think back to algebra or calculus, you have functions, you know, like x square, you know, square root of x. Now you could have a function. We've got a little pad for you, so why don't you show us on the screen what that means. Okay, this is working. Okay. Yeah, there we go. All right, there it is. What are you drawing now for us?
Yeah, so let's say we have a function. A mathematician would write it like this, so f of x. So here's the input, x. And then f of x is what comes out, so maybe it's x square. Okay. That's algebra, isn't it? Yeah, that's algebra. And now you could imagine instead that x is just your app, your app on your phone. Okay. And then what comes out is basically the behavior. So it's, I guess, a little more tricky when you have interactive computation, because what Turing described originally was just computation that will calculate something for you. Okay, like, find the number of ones in a sequence of zeros and ones. So it's a mathematical, at the end of the day, the app part of what you wrote on the screen, it's a calculation or a series of calculations. And at the end of the day, you're going to get a series of results back, and those results are interpreted into an answer to the question you posed. Right. So, yes, a lot of it is computing functions. You know, if you look at how computer programs work, you have these subroutines, and a subroutine is like a function, like this, you know, like this x square here. This could be a subroutine, like if I want to do, you know, the square root of one minus x square, sorry for the handwriting. That was good. Then you could say, well, a subroutine here is I'm going to call the x square function. So maybe the whole thing could be a subroutine also within a bigger program. This could be g of x, the square root of one minus x square. But then it's not just about, you know, computing things; you also have instructions. You say, let's put the red pixel on the screen. Let's check if the mouse has been moved. Right. But just the idea that, you know, a computer doesn't have to just do one thing. It can do whatever the programmer and the people behind it, whatever you can imagine, the computer can be made to do it, as long as you describe very precisely what it is that you want.
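The function-and-subroutine picture sketched on the pad can be written out in a few lines of Python. This is a sketch added for illustration; the names f and g follow the blackboard example, and nothing here is from the broadcast itself:

```python
import math

def f(x):
    # The inner subroutine from the example: f(x) = x squared
    return x * x

def g(x):
    # A bigger routine that calls f as a subroutine:
    # g(x) = square root of (1 - x squared)
    return math.sqrt(1 - f(x))

print(f(3))    # 9
print(g(0.6))  # about 0.8
```

Just as in the blackboard example, g never computes the square itself; it calls f, the way a larger program calls a subroutine.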
So this was in around 1936 with Alan Turing. And I think people very quickly started trying to build such things. And it was not just mathematics. There was a lot of engineering. Like, how should we do it? Should we have some kind of tubes? Or should we have, you know, electronics? Processing. Yeah. Should it be electronic or electromechanical? But it was important to start with that mathematical basis. Yeah. So it sounds like since 1936 and Turing, computers and computer science have gotten into using the math but not thinking much about the math. Because the mathematicians are off this way, right? You don't spend all your time on computer science. You're ahead of computer science, aren't you? Right. That's one way to say it. Yeah. Because now we get other mathematical questions about the model that Turing had. So with this model it turned out that certain functions, you know, like this x square here, certain functions can be computed. But then there are other questions, or other functions, that could be defined but not actually computed. And the most famous one is: if you give one of these instructions to the Turing machine, will it ever give an output, or will it sort of keep running forever? Or will you get the spinning beach ball that goes on forever? And one thing that Turing proved is that there is no way for the Turing machine itself to figure out whether it will go into an infinite loop. So that was the first example of something that was definable, in the sense that you could say exactly what the question was. But it wasn't computable: the computer couldn't find the answer. So you go further than the computer. You're looking at the ability of the computer to go to the next level. You can be ahead of the computer that way. And speaking of ahead, we're going to take a short break. And ahead I would like to talk about that program that you put together last year at UH Manoa about predictive analysis.
And about what that is and how you can use that to predict all kinds of things that can help us. We'll be right back. This is Bjørn Kjos-Hanssen, Mathematics Professor at UH Manoa. And he lives in a different set of symbols. We'll be right back. Hi, I'm Bill Sharp, host of Asian Review here on Think Tech Hawaii. Join me every Monday afternoon from 5 to 5:30 Hawaii Standard Time for an insightful discussion of contemporary Asian affairs. There's so much to discuss. And the guests that we have are very, very well informed. Just think, we have the upcoming negotiation between President Trump and Kim Jong-un. The possibility of Xi Jinping, the leader of China, remaining in power forever. We'll see you then. Okay, we're back. We're live with Bjørn Kjos-Hanssen. He's a Mathematics Professor at UH Manoa. And he's trying to help us understand what mathematics is about and sort of bridge the gap between our ordinary, humdrum, you know, work-a-day lives and this kind of theoretical mathematics, which actually governs so much of what we do. But I was so amazed at the predictive analysis aspect of this program. They came from everywhere, and they were a very exclusive group, and they got up and they had blackboards too, and they showed you how to do predictive analysis. But what is predictive analysis, Bjørn? And how do you do it? Yes, actually, the keynote talk by Lance Fortnow, I think, is the one you went to. There was a lot about this idea of bounded rationality. In economics, you could have this idea that people are rational actors: they do what's in their best interest. But maybe a missing piece there is that, with a sufficient amount of time to think it over, people could maybe sort of calculate the prudent thing to do. In practice, life is happening all the time; you've got to make decisions all the time. You don't have that much time to figure out what to do.
So you have bounded rationality, not in the sense of bounded wisdom, but in the sense that you have only so much time and you've got to try to figure out the right thing to do. And this fits in with the Turing machines that we talked about before the break, because in that mathematical model the machine is operating in discrete steps, just like a real computer: it does one little thing and then another little thing. And sometimes it could be that the task at hand just cannot be done quickly enough. In that case, can't you just get a bigger machine, a faster machine? Because that's usually available. That's not going to be practical if you're trying to, let's say, factor a really big number. How big is really big? Well, maybe a thousand digits. Let's say something like a thousand digits; it's very easy for a computer to print out a thousand digits. But then finding the factors, it's actually an open question whether that can be done reasonably fast. And it's related to the main open question in the intersection of mathematics and computer science, which is this P equals NP problem, which says something like: if you can verify a solution quickly, could you have found the solution quickly? Maybe when you hear that you think, obviously not. If somebody makes a delicious meal, I can quickly check that it was delicious. That doesn't mean I could have quickly made it. But you can make a very precise mathematical question out of it, almost like an equation, totally precise: whether, in this model, any problem where you can check the solution pretty easily is one you could also solve pretty easily. Like if you factor a number into two factors, it's not that hard to check that it worked. If I have a one-thousand-digit number and I have two numbers that supposedly are the factors, then for a human it would be too hard. But for a computer it's pretty easy to just multiply them and see if you got the right thing.
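The asymmetry the professor describes here, that checking a claimed factorization is a single multiplication while finding the factors is a search, can be sketched in a few lines of Python. The numbers are purely illustrative and vastly smaller than the thousand-digit case; none of this code is from the broadcast:

```python
def verify_factorization(n, p, q):
    # Checking is cheap: multiply the claimed factors back together
    # (and rule out the trivial factorization 1 * n).
    return p * q == n and p > 1 and q > 1

def trial_division(n):
    # Finding is a brute-force search: try every divisor, smallest first.
    # Stopping at sqrt(n) is safe, since factors come in pairs,
    # but the search is still hopeless for thousand-digit numbers.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d  # found a factorization
        d += 1
    return None  # no divisor found: n is prime

n = 1000036000099  # 13 digits; n = 1000003 * 1000033
print(verify_factorization(n, 1000003, 1000033))  # True
print(trial_division(91))                         # (7, 13)
```

On a thousand-digit n, verify_factorization would still run in an instant (Python integers have arbitrary precision), while trial_division would need astronomically many divisions. That gap between checking and finding is the kind of thing the P versus NP question makes precise.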
But now to actually find them in the first place, you sort of have to start all the way from the smallest numbers and try to see, can I divide the big number by this? It's trial and error. Or even like brute force, as we call it. You start with the smallest number, you just do all of them, and after you've done ten to the one trillion divisions, you'll know the answer. But that's just too much, no matter how much computer hardware, unless you have some unreasonable budget. We don't have enough time and resources. So I think really, and I remember this from the program, is you use trial and error, and you use it within the limits of your resources. And that's not what people grow up in the fifth grade thinking about mathematics. They don't see mathematics as trial and error. And the other thing that comes out of what you said is that if you're doing mathematics with big numbers like that, you absolutely have to have a computer. You can't sit there and write this out yourself. You're going to get nowhere. You have to have a computer to do the operations that are necessary to prove your theory right or wrong, or do the trial and error. And all of this feeds into predictive analysis, right? Yes. So what are you thinking of when you say predictive analysis? Well, you said before about the economy or the stock market, for example, or the election today. If I have data, and big data is available as never before, big data about the earth, about our environment, big data about our social processes, about the billions of people that live on the planet and all this, you can take mathematics and you could predict, couldn't you? I know you have to make some assumptions in there. You have to reduce those assumptions to sort of the rational thing you described.
But with those assumptions, and with your ability to do the math and do the trial-and-error kind of analysis, you can actually make a pretty good guess about who's going to win the election, about how long it's going to take for climate change to inundate us, and how business is going to be on the stock market. Am I right? You can take this kind of analysis and put it on anything. Yeah. I mean, I don't think that was specifically for my conference, maybe, but mathematics in general. And I think that's one thing that Nate Silver did: he said, let's take some of these mathematical methods that maybe are well known to statisticians and mathematicians and just apply them to something people really care about, like the elections. And that seemed to be very successful. So do we do that? I mean, let me put it this way. Are we doing it enough to really take advantage of the work you're doing? Well, my work is in a way still pretty theoretical, looking at many theoretical questions about, let's see, the Turing machines. And the way I try to excuse myself and my colleagues, and which I think is valid, is to say that you might be doing something very theoretical, but then that impacts another question, another topic that's a little bit less theoretical, and then that in turn has an impact on something yet less theoretical. And maybe this wave from your work toward the applications dies out before it reaches the applications. Maybe not. So I think there's, even if a question is purely theoretical, for example, about different problems that can be defined but not solved, and hypothetical, like if I could do this one impossible thing, would I be able to do this other impossible thing?
This might be, on its face, not connected to any real application, but mathematics is like a vast field of topics and questions, and there are a lot of interrelations, so that somebody solves something in one area of mathematics, and then somebody in another area, who wouldn't know how to solve this first thing, they know that it helps them in their area, and so things spread. Meaning I grab this approach over here, I grab that approach over there, I put them together to work on my approach, and now I have an approach that uses all three of them, and it's presumably more powerful than any one of the three, in terms of solving that problem. Yeah, it could even be sort of like a black box. Like, you say, I want to know if this one claim is correct or not, I have no idea how to prove it, but then I hear about these people in a different area of mathematics, they've solved it, they certify that it's true, and now I can start using that as an assumption, as an axiom. Okay, so when you use something like that, assuming you're satisfied, somebody working, a collaborator of yours, maybe in Europe, maybe in Asia, wherever it might be, you're reasonably satisfied, based on credentials, based on your own test of how this worked. You're satisfied with it, and you build it into your work, or you connect it to your work. How do you do that? Do you connect it on the blackboard with the chalk, or do you connect it in a computer with a thousand-digit number? How do you build the product you're looking to build? Yeah, in a way, for a mathematician in a department of mathematics, the main product is a research paper that gets published in a journal. What's the best journal? I think, well, I don't want to offend any of the other journals, but the typical answer somebody would give is the Annals of Mathematics. The Annals of Mathematics. Have you published in it? I do not have any papers there. Okay, you're working toward that. Yes, that would be nice. Okay, anyway, go ahead. So you publish the paper?
Yeah, the paper is the main product, and what the paper contains, in pure mathematics, is an argument that's supposed to convince you that a certain formal proof of a statement exists, or of several statements. And these formal proofs are actually kind of like numbers themselves. A formal proof is something that a computer could actually check in principle. If you're very careful, you could actually make a computer verify your mathematical research paper. But the hard thing that a computer cannot do yet would be to come up with the proof. So that's actually kind of like factoring numbers. Once I have the two factors, it's not too hard to multiply them and see if I got the number I wanted. Similarly, if you have a mathematical proof, you know, you're proving something like the Pythagorean theorem. 3.1416. Yeah, or, you know, claiming that the value of pi or the square root of 10 or whatever is somewhere. Once I have the proof, then in principle you can write it very carefully and ask the computer to check it. It's similar to how you would write a program for a computer to run. So the mathematician cooks the food, as you said earlier, and the computer verifies that it tastes good. Yeah, exactly. There's actually an interesting question now, which is whether one generation from now the computers will also find the proofs. That would put a lot of research mathematicians out of business. Well, that's so in so many things. But let me ask you that question. Here you are, you have a mathematics department, you spend your professional time creating these products and writing them up so that other mathematicians elsewhere can see them and verify them. Where is it all going? Because, you know, I suggest to you in my question, I suggest to you that, like computer science... computer science is being swept into other areas. So if I'm, for example, a marine biologist, I have to learn a lot of computer science to analyze my data.
And likewise, if I'm a computer scientist, I have to learn a certain amount of mathematics to do, you know, push-the-envelope kind of computer science. So where is the pure mathematics field going? Give me a view of how it will be for your students in five or ten years. Well, the pure mathematics, you know, the purest of all mathematics, in a way, is mathematical logic and the theory of sets, which is the sort of foundation for mathematics. But that has been stable, actually, since around the same time as Turing's work in 1936. I mean, it in a way settled the idea of what mathematics is, because actually what we do in the purest part of mathematics is we take all of mathematics and we treat it as itself a mathematical object and then prove things about it. It's like the laws of nature in physics, isn't it? In other words, if I say to you, you know, like, they make jokes about the end of the Internet, you know, where you get to the end. You've seen every website. You've seen every function, every piece of information the Internet ever could provide you. And then you say, well, I'm at the end. Is this going to happen in mathematics? You wake up one morning and say, well, there are no new theorems. There are no new products that I need to work on. There's no more research necessary. We have found out everything we need to know about mathematics, so we can, you know, go make an egg. This is a very good question, and actually this goes back a long time, because back in around the year 1900, there was a great mathematician, Hilbert, who had this idea that, like you're saying, we should be able to come to the end of mathematics. So he said, let's find an algorithm, let's find some method that will answer all math questions for us. And he said, you know, we must find this algorithm. And he was one of the leading mathematicians of the time, so people worked on that.
Then there was another mathematician, Gödel, who actually said, I have a proof that this is impossible. And that was really shocking, and it was a big deal. So Gödel for this was, I think, named in Time magazine's, you know, 100 people of the 20th century. They included Gödel for this, saying that we can never come to the end of mathematics. That was his result. And Turing with his Turing machine; those were actually the two mathematicians that were included there. So he actually, it's very strange, it came from an analogy with this, you know, the sentence that says: this sentence is not true. This sentence is false. If I say that, is that true or not? Because if it's true, well, it says that it's false, so if it's true, it's false. It's logic. Right, and if it's not true, then it's false, but it says that it's false, so then it must be true. It's all those puzzles we did in school. Right. But he was able to use that to make something analogous, something like, well, suppose that, you know, you can prove anything you want to prove. Any mathematical question that you want to have answered, you have a method for finding answers. He made a sentence, something like: this sentence is not provable. But it wasn't just a sentence, it's not just in English, like I'm saying it. It was encoded very precisely into a mathematical system. And then he proved that, you know, this sentence will end up being true but not provable. So it says it's not provable. It's true; it's not provable. You could spend a lot of time ruminating about that. It was actually very important. I mean, it was sort of almost silly, almost like a joke solution, but it was very important because it means that, you know, in a way, mathematics will never end. There is no way, if I have a very difficult question, yeah, actually a computer will in general not be able to find the answer.
Now, how do I reconcile that with what I said earlier, that maybe computers will take over the work of professional mathematicians in finding proofs? Well, what Gödel proved is just that, you know, there's no perfect method. But it could still be that, in practice, you know, computers could do a lot. Did you ever see A Beautiful Mind with Russell Crowe? I think it was Princeton. Mathematics at Princeton, wasn't it? Yeah, I think I saw that. Yeah. He was slightly off, actually. I mean, the character. They portrayed a real mathematician in the movie. And I guess my question is, are mathematicians different from other mortal people? Well, you can form your own opinion. Maybe it's hard for me to make an objective opinion, right? There are some jokes, you know, there's a joke that says there are two kinds of mathematicians. There is the one kind that, you know, when they're talking to you, they just look down at their own shoes. And the other kind, they're very different: when they talk to you, they look at your shoes. So, Bjørn, which kind are you? Yeah, hopefully I'm the one that looks at your shoes. All right. Okay, Bjørn Kjos-Hanssen, Mathematics Professor at UH Manoa. We've only begun this discussion, and he's looking at my shoes. Thank you so much. Thank you for having me on the show, Jay. It was a great discussion.