Welcome to a new episode of the ITU Journal webinar series, where you can find insights and forward-looking research on future and evolving technologies. The ITU Journal is an international journal providing complete coverage of all communications and networking paradigms, free of charge for both readers and authors. This publication considers yet-to-be-published papers addressing fundamental and applied research, building bridges between disciplines, connecting theory with application, and stimulating international dialogue. Its interdisciplinary approach reflects ITU's comprehensive field of interest and explores the convergence of ICT with other disciplines. We count on your support to make this webinar an interesting experience. Please submit your questions via the Q&A channel at the bottom of your screen. All questions from the audience will be taken during the Q&A session after the talk. The meeting is being recorded and the recording will be made available on the webinar website. Closed captioning is also available for this event; you can enable it by clicking on the closed-caption icon at the bottom of your screen. We hope that you will enjoy the talk and we encourage you to stay connected until the end for the wisdom corner. I will now give the floor to our master of ceremonies. Hello and welcome to the new series of webinars with academics of the ITU Journal on Future and Evolving Technologies. My name is Alessia Magliarditi from ITU, the International Telecommunication Union, the United Nations specialized agency for information and communication technologies. It is my pleasure to open today's webinar with Professor Andrea Goldsmith from Princeton University. We count on your support to make this webinar an exciting experience, so please submit your questions via the Q&A channel. We will address them to our speaker during the Q&A session. And after the Q&A, please stay online: I will moderate the wisdom corner, live life lessons.
Professor Goldsmith agreed to a personal chat, so she will share with us some lessons learned over the years that might perhaps be useful for some of you. I'm very pleased now to introduce our moderator, Professor Ian Akyildiz, editor-in-chief of the ITU Journal and founder and president of Truva. Three years ago, with Professor Akyildiz, we launched this new scientific journal and we are now moving towards an impact factor. Professor Akyildiz is Ken Byers Chair Professor in Telecommunications, Emeritus, at the Georgia Institute of Technology. In the last two decades, he established many research centers around the world, including in South Africa, Spain, Saudi Arabia, and Finland. He is editor-in-chief emeritus of impact-factor journals, highly cited, and at the top of the most prestigious international rankings. Ian is a distinguished visiting professor at several universities around the world. So Professor Akyildiz, the floor is yours for your opening remarks, to introduce our speaker, and to moderate the Q&A session. Thank you. Thank you, Alessia. Good morning, good afternoon and good evening worldwide. From Atlanta with love, I again welcome you all to the fourth season of our ITU Journal on Future and Evolving Technologies webinar series. In the first two seasons, we had research leaders from academia, while in the third season we had leaders from industry. We are fortunate to have top scientists lined up this season. Before I present our speaker, the Honorable Dean Andrea Goldsmith, I would like to briefly talk about our journal, which was mentioned a couple of times today: the ITU Journal on Future and Evolving Technologies, in short form, J-FET. The objective of our journal is to bring the academic and industrial worlds together in order to establish a strong bridge between academia and industry. Our journal's ideas were incubated back in December 2019 and the inaugural issue came out in December 2020.
It is an open-access journal: no fees for the readers, no fees for the authors. The papers go through a review process and we try to cover all forefront research activities in the world, in both academia and industry. I encourage you all to submit your papers, and also, if you have ideas for special issues, please do not hesitate to contact us. Today I have the great pleasure and honor to present to you today's speaker, the Honorable Dean Andrea Goldsmith. First of all, I must personally thank her for taking time from her very busy schedule and accepting our invitation to deliver this distinguished seminar. Before going through her bio in detail, I must mention some preamble opinions. Andrea is maybe the top scientist currently in our field. She has made many, many research contributions across all wireless communication areas over the last 25-plus years. In my opinion, she is a role model for many women engineers and, overall, for all researchers worldwide. A stellar career, and we expect that she will continue her outstanding contributions. She has contributed so many pioneering results in wireless communications that the list is too long to cover here. Her total number of citations is 80,000 and her h-index is 100. Andrea is currently Dean of Engineering and Applied Science and the Arthur LeGrand Doty Professor of Electrical and Computer Engineering at Princeton University. She was previously the Stephen Harris Professor of Engineering and Professor of Electrical Engineering at Stanford University, where she is now Harris Professor Emerita. She founded and served as Chief Technical Officer of Plume WiFi (formerly Accelera, Inc.) and of Quantenna (QTNA), Inc., and she currently serves on the Board of Directors for Medtronic and Crown Castle, Inc. So she has not only had an academic career; she has also had excellent success in entrepreneurial activities.
Her research interests are information theory, communication theory and signal processing, and their application to wireless communications, interconnected systems and neuroscience. Andrea has received numerous highly prestigious awards in her career. She is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, a fellow of the IEEE and of Stanford, and has received several awards for her work, including the IEEE Sumner Technical Field Award, the ACM Athena Lecturer Award, the IEEE ComSoc Armstrong Technical Achievement Award, the Kirchmayer Graduate Teaching Award, the WICE Mentoring Award, the Silicon Valley/San Jose Business Journal's Women of Influence Award, and also the ACM SIGMOBILE Outstanding Contribution Award. I was on the committee for that award, and we were very happy when she was nominated; I remember it like yesterday. She is the author of the book Wireless Communications and co-author of the books MIMO Wireless Communications and Principles of Cognitive Radio, all published by Cambridge University Press. She is also an inventor on 29 patents. And she received her BS, MS and PhD degrees in electrical engineering from the University of California, Berkeley. Also, in terms of service, Andrea has served the IEEE in many capacities. She is currently the founding chair of the IEEE Board of Directors' Committee on Diversity, Inclusion and Ethics. She served as president of the IEEE Information Theory Society in 2009, as founding chair of its Student Committee, and as founding editor-in-chief of the IEEE Journal on Selected Areas in Information Theory. I am getting tired just listing all of this. She has also served on the Board of Governors for both the IEEE Information Theory and Communications Societies. At Stanford, she served as chair of Stanford's Faculty Senate, for multiple terms as a senator, and on its Academic Council Advisory Board, budget group,
Committee on Research, the Planning and Policy Board, commissions on graduate and undergraduate education, the Faculty Women's Forum Steering Committee, and the Task Force on Women and Leadership. It is also very important to mention, although she does not mention it in her biography, that Andrea is serving on the President's Council of Advisors on Science and Technology under President Joe Biden. I personally expect she will be a senator soon, in my opinion. To talk about all of her achievements, we would need maybe several hours. So without further ado, again, let us express our sincere thanks to Andrea for accepting our invitation and giving this webinar. Now I wish you all an enjoyable and productive time with Dean Andrea Goldsmith. OK, can everybody see my slides? Wonderful to be here. Thank you so much, Ian, for that very kind introduction. And I'm really looking forward to speaking to all of you about the ways that we're going to disrupt the next generation of communication systems. As Ian alluded to, I've been working in this field for quite a long time, in fact since the mid-80s. And I actually think this is one of the most exciting times for wireless communication over the last many decades. And that's because we're really moving into a new paradigm. In previous generations, we were really focused on people talking to people: that was the first generation of cellular communications, then getting text messages, and then accessing information from the internet. What's changing with this new paradigm, and what we envision, is device-to-device communication, where basically everything with an on-off switch is going to be able to communicate with other devices, as well as send information to the cloud for processing. And that's going to enable a whole range of very exciting applications, such as in-body communication, where you have your medical information, your heart rate and your blood pressure, monitored continuously.
And if there's some reason that you need to inform the doctor, it goes out to the doctor without you ever having to set foot in a hospital or a clinic. Obviously, we've been looking at automated highways for quite a long time, and we're getting closer and closer to that scenario where we will no longer sit in traffic. We'll have much more efficient roadways and much more energy-efficient use of cars and mobile vehicles, even in the sky. We will have sensors embedded throughout infrastructure that are going to enable us to deal with climate change much more effectively, so that we know when a particular piece of infrastructure, whether it's a bridge or a highway or a building, is compromised or in danger of some kind of collapse. And also, when we see wildfires or floods or other kinds of natural disasters, we're going to be able to monitor them a lot more effectively and get people out of harm's way automatically. So these are just some of the very exciting scenarios for the future of wireless communication and why I think this is one of the most exciting times for the field. Now, many of the enabling technologies for the applications that I just described were supposed to be part of 5G. In particular, we talked about getting data rates on the order of tens of gigabits per second, particularly using millimeter-wave communication, having low-power radios that could last for a decade on a single AAA battery, and extremely high reliability. For those of us that worked at Bell Labs in the old days, we talked about the five nines, which meant that the probability of failure was 10 to the minus 5. Well, that's certainly not true in our wireless systems, which makes it very difficult to use them for things like automated cars or even for sending important medical information. So the promise of 5G, which we have already today, was to enable all of these different requirements with a single network.
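The "five nines" figure mentioned above can be made concrete with a quick calculation; the numbers below follow directly from the definition of 99.999% availability (a minimal sketch, not a claim about any specific deployed system):

```python
# "Five nines" = 99.999% availability, i.e. failure probability 1e-5.
availability = 0.99999
p_failure = 1 - availability  # ~1e-5

seconds_per_year = 365 * 24 * 3600
downtime_seconds = p_failure * seconds_per_year

print(f"failure probability: {p_failure:.0e}")
print(f"allowed downtime per year: {downtime_seconds / 60:.1f} minutes")
```

That is roughly five minutes of outage per year, which gives a feel for how far today's consumer wireless links are from that target.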
But in fact, as Jonathan Swift liked to say, promises and pie-crusts are made to be broken. And that's particularly true in wireless standards, because if 5G is not able to support the kinds of requirements that we need for this future vision of wireless communication, then that means all of us researchers and commercial companies get to start working on 6G to enable the things that were promised for 5G. And, you know, when people ask me, what is 5G and how is 6G different? These terms are really marketing terms. There's no such thing as 5G or 6G; it's just the evolving standards for the cellular systems, as well as the other wireless systems like Wi-Fi, that will enable the kinds of wireless connectivity that are needed for the applications that I just described. So what are the enabling technologies that we're going to need for the next generation of wireless networks to support very high data rates, very high reliability, and very low energy consumption? To start with, we really need to rethink cellular system design. If this were a live audience, I would ask the question, which I often do: when was the concept for cellular communication originally developed? The notion that you would have a city and you would break it up into these cells, in this case hexagonal cells that would blanket the city without overlap, and then you put a base station in the middle to talk to the cars driving around. Well, that concept was actually written up in a paper by D.H. Ring in 1948, the same year that Shannon came out with his landmark capacity paper. So the notion of cellular systems has been around for many, many decades. But we haven't really taken a step back to say, with all the technology evolution since the late 40s, should we really rethink the way that we design cellular systems? I'll talk about that a little bit later in my talk.
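Shannon's capacity paper, mentioned above, gives the ceiling that every cellular generation works against: C = B·log2(1 + SNR) bits per second on an additive white Gaussian noise channel. A small illustration of why wider spectrum (such as millimeter wave) matters for the multi-gigabit targets discussed earlier; the bandwidth and SNR values here are hypothetical, chosen only for illustration:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of an AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 100 MHz channel at 20 dB SNR.
bandwidth = 100e6
snr_db = 20
snr = 10 ** (snr_db / 10)  # 20 dB -> a factor of 100

print(f"{shannon_capacity(bandwidth, snr) / 1e9:.2f} Gbit/s")
# Capacity is linear in bandwidth, so a 1 GHz millimeter-wave
# allocation at the same SNR supports 10x the rate:
print(f"{shannon_capacity(1e9, snr) / 1e9:.2f} Gbit/s")
```

The point of the sketch: at fixed SNR, the only way to the "tens of gigabits per second" regime is more bandwidth, which is what pushes the standards toward millimeter-wave and terahertz spectrum.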
We've certainly evolved technologies for cellular in terms of having multiple antennas, small cells, distributed antennas, and base station cooperation. But the notion of a cellular system, that you've got these cells that don't overlap with base stations in the middle, has not really evolved. We haven't gone back to rethink how we design cellular systems with all these new technologies, so I'll speak to that a little bit later in my talk. We also need to use more spectrum. Millimeter wave is already part of 5G communications, but as we've seen, millimeter wave is very challenging in practice. It's wonderful in theory: there are tens of gigahertz of bandwidth. But in practice, in order to close the link at millimeter wave, let alone in terahertz spectrum, we need to use massive antenna arrays. In theory, if you've got multiple antennas, you can point the beam directly at your receiver and close the link very well. But in practice, you get buildings that are in the way of that beautiful pencil-thin beam; you have cars that are moving around, and you need to estimate the channel in order to point the beam. So we've had a challenge in 5G being able to utilize millimeter-wave spectrum ubiquitously, and that's a challenge for the next generation of cellular. Now, coming to the network itself: when I started my second company, Plume WiFi, the idea was to use software in the cloud to manage small-cell base stations, because they're very dynamic, and if you do optimization in the cloud, you can use the resources far more efficiently. We still haven't seen a huge increase in small cells, and we also haven't seen these kinds of optimization algorithms. They're starting to emerge, not necessarily in the research arena, but we see companies that are developing techniques in the cloud to optimize the use of small cells and large cells. And machine learning is an interesting technique to use there as well.
In addition to the optimization of resources, there are other aspects of the network design that really haven't been taken into account by researchers in wireless: things like security, privacy and resilience. These are usually handled at the application layer, which is a mistake, because if we can build these things into the networks themselves, they work a lot better. So I touched on machine learning for optimization, but in fact machine learning can be used across all layers of the network design. And that begs the question: well, can't machine learning enable everything? Why do we even focus on any of these problems? We could just throw the design problem at some big, massive machine learning algorithm, and that would solve all the challenges that we face in next-generation networks. So why don't we use machine learning to solve these problems? Of course, machine learning is a bit of a bandwagon today. We've seen machine learning have tremendous impact on certain types of problems, including image recognition, voice recognition, and game playing. These are scenarios where we really didn't have very good models for voice or images or game playing, and machine learning works very well when you don't have good models for things, as I'll talk about a little bit later. The question is, if we think about machine learning as a panacea for all of our wireless network design, should we just jump onto this bandwagon and say we should only work on machine learning? Or should we run away screaming, saying this is a lot of hype, and really it's not going to solve a lot of the challenges that we face in next-generation network design? So when I started looking at machine learning for wireless communication, this was probably five, six years ago, my initial thinking was to run away screaming.
But I happened to have a brilliant post-doc, Nariman Farsad, who was working on molecular communication, and he said, we should take a look at using machine learning for equalization in molecular communication, because there are no good channel models. So that kind of started me on the journey of looking at machine learning at the physical layer. And what we found in our work there was that machine learning can actually beat the theoretically optimal equalizer, which is a Viterbi equalizer, when you have imperfect channel knowledge or you have complexity constraints. And we also looked at using machine learning for encoding and decoding, with some interesting results, particularly in joint source and channel coding. So my results there proved, to me anyway, that machine learning is a technique that we should be taking a look at, that we need to investigate as one of the tools in our toolbox in designing wireless networks. I alluded to the network resource allocation problem. This is a problem I've worked on from the very beginning of my academic career, going back to the 90s. Frequency allocation in second-generation cellular systems is a graph coloring problem. It's NP-hard; we've never really been able to solve it well. And now we have much more challenging resource allocation problems. So machine learning may be a way to get around the intractability of those kinds of analytical problems through a different approach, which is using data. And obviously, when you're looking at network resource allocation, you're going to have a lot of data if you're a cellular operator, because you know exactly what the traffic has looked like and how you've allocated resources in the past. So this is another great area where I believe machine learning can make a big difference. Machine learning also has a role to play in security, privacy and resilience. We're already seeing machine learning at the application layer play a role in both security and privacy.
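The frequency-allocation problem described above can be illustrated with a greedy graph-coloring sketch: cells are vertices, an edge means two cells interfere, and a "color" is a frequency channel. Greedy coloring is a heuristic, not an optimal solution (the exact problem is NP-hard, as noted), but it shows the structure of the problem; the interference graph below is a hypothetical toy example:

```python
def greedy_frequency_assignment(interference):
    """Assign each cell the lowest-numbered frequency not already
    used by any interfering neighbor (greedy graph coloring)."""
    assignment = {}
    for cell in interference:
        used = {assignment[n] for n in interference[cell] if n in assignment}
        freq = 0
        while freq in used:
            freq += 1
        assignment[cell] = freq
    return assignment

# Toy interference graph: adjacent cells interfere with each other.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}
print(greedy_frequency_assignment(graph))
# No two interfering cells share a frequency; cells A and D can
# reuse the same one because they do not interfere.
```

A data-driven approach would instead learn an assignment policy from the operator's historical traffic and allocation records, which is exactly the opening for machine learning described above.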
And I'll talk about resilience a little bit later, because it's something that we don't usually think about in our design of wireless networks. For example, what do we do if the entire electric grid goes down? We don't have a way to design for that in our networks today. And finally, there is the notion of cross-layer design: how do we take applications and design them for the vagaries of the wireless network below? This has been a challenge vexing communication theorists, information theorists and wireless communication engineers for a long time, one that we haven't really resolved with analytical techniques. Shannon said you can separate out the source coding and the channel coding without loss of optimality. And even though that isn't true in practice, it's still the way that we design networks; but maybe machine learning will teach us something else. So all of this led me to actually jump on the bandwagon, to the point where I co-authored an edited book on machine learning and wireless communication, which came out recently, talking about how we use machine learning in the design of wireless networks and also how we use wireless networks to enable distributed machine learning, which I'll talk about a little bit later in the talk. Okay, so when we think about next-generation wireless, it's interesting that physical-layer and MAC-layer techniques have been declared mature or dead for many decades. If you look back at 1971, that was the time when the information theory community had the "coding is dead" workshop, where it was declared that coding was dead. And Irwin Jacobs got up in the back of the room with a small eight-bit microprocessor and said, it's not dead, because of processing. So coding obviously wasn't dead in 1971.
In the eighties, before cellular networks were deployed in the mid-eighties, it was assumed that wireless communication was mature. When I was at Bell Labs in the nineties, all of the researchers at Bell Labs who had worked in wireless in the eighties had been moved to fiber optics, and they came back to wireless after the cellular networks were deployed. So obviously the eighties saw the rise of cellular networks, and wireless was far from dead at that point. In the 2000s, when we got to 3G networks, again it was viewed that the technology was mature, and there wasn't a lot of research going on, not a lot of investment by the government, in the 2000s for third-generation wireless or Wi-Fi. And then the iPhone came out and brought the AT&T network to its knees within a day, and all of a sudden there was a big rush to develop 4G wireless. So it wasn't dead in the 2000s. And I would say even in the mid-to-late 2010s, so over the last five, six years, it was also viewed as a very mature technology, but then the CHIPS and Science Act came out with $30 billion of investment for research in chips and science and technology, including wireless communication. So we're back in an era where there's very exciting research to be done at the physical and MAC layers, as well as at the higher layers of the wireless networks, to enable the vision that I painted at the very beginning of my talk. So it's not dead. Let me talk a little bit about waveform disruption, because I think this is a really interesting topic in terms of analyzing why we thought we were done with OFDM. If you go back to the second generation of wireless, and this is showing my age, because when I was a young professor in the 1990s we were just rolling out the second generation of cellular systems, which was the first digital system; 1G was analog communications. There were competing standards for second-generation cellular: TDMA was one of the standards, and CDMA. And then in Europe we had GSM for second-generation cellular.
By 3G, CDMA had become the global standard, and that was doing primarily voice and low-rate data. That was the network that the iPhone brought to its knees. And so in the middle of the 2000s there was a huge push to create the fourth generation of cellular. The basic physical and MAC layer technique for 4G and 5G is OFDM, or OFDMA, where we're able to enable high-speed data plus voice; data is the primary driver. And the reason OFDM is so effective there, more effective than CDMA, is that you're able to adapt in very small time-frequency blocks to the channel variation. So if you have a good channel in a small time-frequency block, OFDMA allows you to use that block and blast out lots of data, and in other blocks, where you have weak channels, you don't send as much data. That's the notion of adaptive modulation, across users and over time. Now, why is that not the right thing to look for in the next generation of wireless? It is because it requires adaptation. It requires you to model the channel, estimate the channel at the receiver, and feed that estimate back to the transmitter fast enough to be able to adapt. And as we go to very high-speed data, or very low-complexity wireless devices that won't necessarily be able to close this adaptive loop, we might need a new modulation that doesn't require adaptation, or at least not fast adaptation or high-complexity adaptation. So one of the waveforms that we're thinking about is maybe not TDMA or OFDM or CDMA, but something in the middle. I've done a little bit of work on OTFS, or Orthogonal Time Frequency Space modulation, which is at the intersection of these different techniques. It actually designs the modulation in the delay-Doppler domain, which changes much more slowly than the time-frequency domain. So if you're working in the delay-Doppler domain, you do not need to estimate the channel as fast and you don't need to adapt your modulation as fast. So where does the delay-Doppler domain come from?
It's called the Zak domain in transform language. And there's a Zak transform that allows you to go from the time-frequency domain to the delay-Doppler domain and back again. So you can reuse the design of your system in the time-frequency domain, including an OFDM system, and then just apply this transform to get to the delay-Doppler domain, where things are changing a lot more slowly. And the other benefit of the delay-Doppler domain is that your symbols are spread over the entire time-frequency channel, so you obtain the full diversity of the channel without adapting. When you look at OFDM, it's actually better from a Shannon capacity perspective, but it's only capacity-achieving if you can optimize by adapting to the very quick variations in the channel. If you can't adapt fast enough to keep up, then you actually do much worse than other forms of modulation, including OTFS. With OTFS you're not trying to adapt; you're just taking advantage of the full diversity over the entire channel, which, by the way, is similar to the argument that CDMA made back in third-generation systems: that by spreading the signal over the entire frequency bandwidth you would get frequency diversity. Which is true, but they weren't taking advantage of the time diversity as well; OTFS takes advantage of both. And you can see here these performance results, which show that for short packets you get a significant benefit in terms of the bit error rate using OTFS versus an OFDM system that isn't able to adapt to a 30 kilometer-per-hour Doppler. And for long packets, where the channel is changing quite a bit over the duration of the packet, OFDM in fact goes to a flat bit error rate, so it's basically an unusable channel if you are trying to send too high a data rate over a channel that you cannot adapt to.
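In the common discrete formulation of OTFS, the mapping from the delay-Doppler grid to the time-frequency grid is the inverse symplectic finite Fourier transform (ISFFT), which amounts to an inverse FFT along the Doppler axis and an FFT along the delay axis. A minimal numpy sketch of the round trip; the grid sizes and QPSK symbols are arbitrary illustration values:

```python
import numpy as np

def isfft(x_dd):
    """Delay-Doppler -> time-frequency: IFFT over the Doppler axis,
    FFT over the delay axis (inverse symplectic finite Fourier transform)."""
    return np.fft.fft(np.fft.ifft(x_dd, axis=0, norm="ortho"),
                      axis=1, norm="ortho")

def sfft(x_tf):
    """Time-frequency -> delay-Doppler: the inverse of isfft."""
    return np.fft.fft(np.fft.ifft(x_tf, axis=1, norm="ortho"),
                      axis=0, norm="ortho")

# Place QPSK symbols on an (illustrative) 16x8 delay-Doppler grid.
rng = np.random.default_rng(0)
x_dd = (rng.choice([-1, 1], (16, 8))
        + 1j * rng.choice([-1, 1], (16, 8))) / np.sqrt(2)

x_tf = isfft(x_dd)  # each symbol is now spread over the whole grid
assert np.allclose(sfft(x_tf), x_dd)  # the transform pair is lossless
```

Because each delay-Doppler symbol is spread across every time-frequency resource by the transform, it sees the full time and frequency diversity of the channel, which is the "diversity without adaptation" point made above.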
So it's not necessarily the case that OTFS is going to be the physical-layer standard for 6G, but I think what these results indicate is that there is room to be thinking about different modulation techniques at the physical layer that will perform well on channels that we can't necessarily estimate quickly and adapt to, as we are doing now in 5G communication. Okay, I want to touch a little bit on machine learning for receiver design. So why would we even think about machine learning? I'm going to go through a couple of slides on the work that we did here. The picture at the top of the slide here is what you would see if you opened any book on communication going back to the 60s. We have an analog channel with additive white Gaussian noise; in the receiver you convert to digital and then you do receiver design, and at the transmitter you're basically taking your input symbols and converting them, through modulation, to a format that can be sent over the analog channel. Now, why should machine learning come in here? Well, we may not know the channel H(f) at all; that was the case when we were doing the millimeter-wave communication. We also may not be able to estimate its parameters, even if we know that the channel is parameterized by dynamic channel parameters. And even if we knew the channel perfectly, it may be that doing equalization, for example, or doing decoding is too complex for a low-energy receiver. Viterbi decoding is optimal, but if you have a complexity constraint, Viterbi decoding actually works worse than other kinds of techniques for equalization, like zero forcing. So zero forcing is suboptimal, unless you have a complexity constraint, and then it works better than Viterbi decoding. So how does a machine-learning-based receiver solve this problem? Well, first of all, you don't need to know the channel or its parameters.
You learn the receiver design directly from the data, and the solution, it turns out, is robust to estimation error, because you're not trying to invert the channel; you're just trying to equalize the channel based on a machine learning approach using a lot of past data. So that makes the technique, as we'll see in a moment, more robust to estimation error. The challenge, though, is that this requires a large amount of training, which means you need to transmit a lot of training data in order to develop the machine-learning-based receiver, and if the channel changes, you need to train all over again. So this has a lot of overhead, particularly on a channel that's changing quickly. The way that we solved this problem was with something called ViterbiNet, where rather than learning the entire system end to end, we just put the machine learning into a standard Viterbi algorithm, where the machine learning was only trying to learn the unknown conditional probabilities P(y|x). So we use machine learning and training to learn those probabilities, and that requires a lot less training than training the end-to-end system. And you'll see that it was very robust to estimation error. If you look at the numerical results here, you see that Viterbi decoding, which is the very top curve with the black X's, basically doesn't work at all when you have uncertainty in the channel, because the Viterbi decoder is trying to invert the channel, and if you don't know the channel, that works very poorly. If you know the channel perfectly, then Viterbi decoding is optimal; that's the red line. ViterbiNet was able to match the performance of the Viterbi decoder with perfect CSI, so you're not getting any loss in performance using this machine learning approach to learn the P(y|x) if you can learn those probabilities perfectly.
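The structure described above, keeping the Viterbi recursion intact and replacing only the channel-dependent branch metric, can be sketched as follows. This is not the authors' ViterbiNet implementation: here `log_likelihood` is a stand-in for whatever supplies -log P(y|state), which would be an analytical Gaussian metric when the channel is known, or a small trained network in the ViterbiNet case. The two-tap channel and the transmitted sequence are hypothetical illustration values:

```python
from itertools import product

SYMBOLS = [-1.0, 1.0]  # BPSK

def viterbi(ys, log_likelihood, memory=2):
    """Viterbi sequence detection over an ISI channel with the given
    memory. A state is the tuple of the last `memory` transmitted
    symbols, newest first; log_likelihood(y, state) returns
    -log P(y | state). ViterbiNet swaps only this function for a
    learned estimate, leaving the recursion untouched."""
    states = list(product(SYMBOLS, repeat=memory))
    cost = {s: 0.0 for s in states}
    paths = {s: [] for s in states}
    for y in ys:
        new_cost, new_paths = {}, {}
        for s in states:
            # Valid predecessors share the overlapping symbol suffix.
            preds = [p for p in states if p[:-1] == s[1:]]
            best = min(preds, key=lambda p: cost[p])
            new_cost[s] = cost[best] + log_likelihood(y, s)
            new_paths[s] = paths[best] + [s[0]]
        cost, paths = new_cost, new_paths
    return paths[min(states, key=lambda s: cost[s])]

# Analytical metric for a known two-tap channel y = x[t] + 0.5*x[t-1]:
def gaussian_metric(y, state):
    mean = state[0] + 0.5 * state[1]
    return (y - mean) ** 2  # -log P(y|state) up to a constant (Gaussian)

tx = [1, -1, -1, 1, 1, -1]
# Noiseless received samples for that channel (x[-1] taken as 1):
rx = [tx[0] + 0.5 * 1] + [tx[i] + 0.5 * tx[i - 1] for i in range(1, len(tx))]
print(viterbi(rx, gaussian_metric))  # recovers the transmitted sequence
```

The design point is that the trellis search, which encodes the known ISI structure, is domain knowledge kept intact, while only the part that depends on the unknown channel statistics is learned from data.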
So there's no loss in performance of machine learning relative to Viterbi decoding under perfect channel estimates, but when there is estimation error, ViterbiNet, the green curve, performs almost as well as with optimal CSI, and much better than the Viterbi decoder with channel uncertainty. So the takeaway lesson, I think, from this work is that applying machine learning can be very powerful, particularly if you use domain knowledge of what the optimal design is based on an analytical approach, in this case the knowledge that Viterbi decoding is optimal, coupled with knowledge of how to use machine learning in the case where the channel model or its parameters are not perfectly known. And that's what we did here. Now, taking a step back from using machine learning directly to solve particular aspects of the physical-layer design, we can actually think of neural networks, or machine learning, as a communication system. So this is again the classic communication system block diagram, as in Shannon's original paper. If we think of machine learning as a communication system, it's a little bit different, because we're using the training dataset to try to learn the parameters of our neural network. The way machine learning works is that you have the input X that you're putting into this multi-layered network, and the output X-hat is the output of the machine learning algorithm. So you can think of the neural network as a communication channel with input X and output X-hat. What the training is doing is training up the parameters of the neural network: you're taking this massive amount of data and compressing it down into the weights and the biases of the neural network. So you can think of training a neural network as a compression problem, which hasn't really been done, and presumably some of the insights about compression that we've learned in communications and information theory might apply to better ways to train up neural networks.
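The compression view sketched above can be made concrete by comparing the size of a training set with the size of the network that has to absorb it. The layer sizes, dataset size, and bit widths below are hypothetical illustration values, not figures from the talk:

```python
# Treat training as compression: the dataset's information must be
# squeezed into the network's weights and biases.
layer_sizes = [784, 128, 64, 10]  # hypothetical fully-connected net
params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

n_examples = 60_000               # hypothetical training set
bits_per_example = 784 * 8        # e.g. 8-bit pixel intensities
dataset_bits = n_examples * bits_per_example
param_bits = params * 32          # 32-bit float weights and biases

print(f"parameters: {params:,}")
print(f"dataset:  {dataset_bits / 8e6:.1f} MB")
print(f"network:  {param_bits / 8e6:.2f} MB")
print(f"compression ratio: {dataset_bits / param_bits:.0f}x")
```

Even this toy network has to represent two orders of magnitude more raw data than it has parameter bits, which is one way of motivating why rate-distortion and source-coding intuition might say something about training.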
And once you've trained the neural network, then you just run it as a communication system. So this perspective on machine learning as a communication system may help us not only design better neural network algorithms but, perhaps more importantly, come up with performance bounds and a better understanding of why neural networks work as well as they do in certain applications. Okay, coming back to wireless communication: I've talked about the fact that if we want to communicate particularly at high frequencies, we need to use massive MIMO, because that basically gets around the problem of attenuation, that the power of a transmitted signal falls off as one over f squared, but that's only when you have omnidirectional antennas. When you're able to point your beam directly at the receiver, you no longer have that one over f squared falloff. And so not only does it solve the problem of attenuation at high frequencies, but it also solves the problem of fading, because now I'm pointing my beam directly at my receiver, so I don't have any reflections that are coming in delayed in time and shifted in frequency, which cause fast fading or multipath fading. And I also don't have any interference, because any transmitter that is also using these very thin pencil beams is not interfering with any other receiver; it's only transmitting directly to the desired receiver. So all of the challenges, or many of the challenges, that we faced in designing these physical layer techniques go away with massive MIMO, in principle. But the problem is that, first of all, there are practical bottlenecks: we have to be able to estimate the channel accurately and quickly, and this is a much harder estimation problem because we have hundreds of transmit antennas, and maybe multiple receive antennas as well, and we need to estimate the channel for every transmit antenna-receive antenna pair. The complexity of doing massive MIMO is very high.
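The link-budget point above can be checked with the Friis free-space equation: with unit-gain antennas, received power falls as 1/f² at a fixed distance, and directional gain at both ends can cancel that penalty. The distance, frequencies and array gains in the sketch below are illustrative assumptions, not numbers from the talk.

```python
import math

C = 3e8  # speed of light, m/s

def friis_rx_power(pt_w, f_hz, d_m, gt=1.0, gr=1.0):
    """Friis free-space received power: Pr = Pt*Gt*Gr*(lambda/(4*pi*d))^2."""
    lam = C / f_hz
    return pt_w * gt * gr * (lam / (4 * math.pi * d_m)) ** 2

d = 100.0  # link distance in metres (illustrative)
p_lo = friis_rx_power(1.0, 3e9, d)                       # 3 GHz, omni antennas
p_hi_omni = friis_rx_power(1.0, 60e9, d)                 # 60 GHz, omni antennas
p_hi_beam = friis_rx_power(1.0, 60e9, d, gt=20, gr=20)   # 20x beam gain per side

print(p_lo / p_hi_omni)   # about 400: the (60/3)^2 penalty of going up in frequency
print(p_hi_beam / p_lo)   # about 1: the combined 400x beamforming gain cancels it
```

The arithmetic is the whole point: going from 3 GHz to 60 GHz costs a factor of (60/3)² = 400 with omnidirectional antennas, and a 20x antenna gain at each end restores exactly that factor.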
So you can do it in the base station, but even now we don't really have processors that can handle more than about eight or 16 antennas in a large-scale system. And so complexity and channel estimation are bottlenecks, but the other bottleneck is the laws of physics. If you get a building or a car or a person that happens to block the line of sight between this massive MIMO array and its receiver, that will scatter the signal everywhere and you no longer close the link. And this is the reason why we have not seen massive deployment of millimeter wave systems in 5G: even though it's part of the standard, the massive MIMO, or any MIMO antennas, are not able to solve this propagation challenge in practice. So that might call out for multi-hop or mesh networking, so that you can always close the link, maybe not directly line of sight because there's something blocking the way, but through multi-hop networking. But of course multi-hop networking has been a big challenge in practice, because every hop is half duplex, and therefore you lose half of the data rate every time you hop, and it's challenging to design dynamic resource allocation of frequencies to different users in a mesh network. So one of the things that we worked on with the multi-antenna challenge was the channel estimation issue. If you're trying to do channel estimation at millimeter wave frequencies with any kind of Doppler, it's very difficult to estimate the channel, especially for large arrays. So what we did is we looked at blind MIMO decoding: is there a way to decode the symbol that was transmitted without actually estimating the channel?
So the received symbol is Y, the transmitted symbol is X, and A is the matrix of massive MIMO antenna array channel gains, which is very large. Is there some way to estimate X from Y without knowing A? We were actually able to do this using a technique called vertex hopping, for certain types of constellations: not for M-QAM, but for M-PAM or BPSK, where you only have one degree of freedom, because the source has to be a hypercube. We assumed antenna arrays from two to 12 elements, and so the decoding problem statement is: can you recover X with a small number of samples of just the received symbol Y, without any specific estimate of the matrix A? And the way to think about this is that we have this received vector Y of symbols, and we want to rotate Y using some rotation U to get back the original symbols X. We formulated this as an optimization problem where we're maximizing, over the different rotations U, the log of the determinant of U, subject to the constraints on the symbols that were transmitted. So which symbol set are we looking at? It turns out this is a non-convex optimization problem, and we know how to solve non-convex problems: we can use gradient descent, but that's very slow, so instead we propose this vertex hopping algorithm, which uses concepts based on solving mixed integer linear programming problems. In the runtime performance you see vertex hopping is very fast; gradient descent is very slow and doesn't always converge. In fact, at some point, if you have too many antennas, gradient descent is so slow that it doesn't converge.
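The formulation just described can be illustrated on a tiny noiseless example. The sketch below is not the vertex hopping algorithm itself: it brute-forces the vertices of the feasible set {U : |(UY)ij| <= 1} for an assumed 2-antenna BPSK system (each vertex maps two independent received columns to a +/-1 pattern) and keeps the one maximizing log |det U|, which recovers X up to row permutation and sign without ever estimating A.

```python
import itertools, math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if abs(det) < 1e-12:
        return None
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def blind_decode(Y):
    """Search vertices of {U : |(UY)_ij| <= 1} for the max-|det U| point.

    Vertices occur where U maps two independent received columns to a +/-1
    pattern S, i.e. U = S * inv(Y[:, (i, j)]).  This is brute force; the
    actual vertex hopping algorithm walks between such vertices instead."""
    T = len(Y[0])
    best, best_obj = None, -math.inf
    for i, j in itertools.combinations(range(T), 2):
        Yc_inv = inv2([[Y[0][i], Y[0][j]], [Y[1][i], Y[1][j]]])
        if Yc_inv is None:
            continue
        for s in itertools.product((-1.0, 1.0), repeat=4):
            U = matmul([[s[0], s[1]], [s[2], s[3]]], Yc_inv)
            Z = matmul(U, Y)
            if max(abs(z) for row in Z for z in row) > 1 + 1e-9:
                continue  # infeasible vertex
            det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
            if abs(det) > 1e-12 and math.log(abs(det)) > best_obj:
                best_obj, best = math.log(abs(det)), Z
    return [[1.0 if z > 0 else -1.0 for z in row] for row in best]

# Unknown channel A, BPSK symbols X (2 streams x 4 uses), noiseless Y = A X.
A = [[1.0, 0.3], [-0.2, 0.8]]
X = [[1.0, -1.0, 1.0, 1.0], [1.0, 1.0, -1.0, 1.0]]
Y = matmul(A, X)
print(blind_decode(Y))  # rows of X, up to row permutation and sign
```

The permutation/sign ambiguity is inherent to any blind method and is resolved in practice with a few known pilot symbols.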
Vertex hopping always converges and it has a pretty high probability of success, and you see that in the next set of performance curves, where maximum likelihood doesn't converge because the complexity is too high, gradient descent has good performance but it's very slow, and vertex hopping works almost as well as gradient descent but is much faster. So again, this is another meta concept, saying that we can actually use different techniques at the physical layer which don't require channel estimation to get very good performance. Okay, so now I'm going to take a step back and talk about rethinking cellular system design. I already mentioned the fact that cellular system design was done in the 1940s, and so we should think about whether there are new ways to do cellular system design based on the recent technology advances of having base stations talk to each other, which raises the question of what is a cell if you have base stations that are cooperating. By the way, base station cooperation was part of 3G as well as WiMAX, which was the predecessor to 4G, but it never really gained traction because the gains were small: it was less than a factor of two in gain for a very high complexity of having the base stations talk to each other. So it was kind of abandoned and has not been revisited, even though I believe that the base station cooperation that was looked at in 3G and 4G really didn't rethink cellular system design; it just said, well, keep the cellular system design as is, but we'll have the base stations talk to each other, rather than completely rethinking what a cell is when you have base stations that are talking to each other, and also thinking about distributed antennas or small cell communication. These are all things for which we haven't really rethought our design of cellular systems in the face of these new kinds of techniques, and so I think there's a lot of room in thinking
about cellular system design, to say that maybe, if we're looking at distributed antennas or small cells or base station cooperation, we should completely rethink the way we do our system design, and part of that is dynamic self-organization. So what do I mean by dynamic self-organization? Well, small cells are really the solution to increasing cellular system capacity. That goes back to research I did in the early 90s, where we showed that you can get exponential capacity gains by shrinking the size of the cells. How come, then, we haven't seen a proliferation of small cells? There's still a relatively small number, and I think part of the reason is Wi-Fi: Wi-Fi really did become the small cell, and we haven't needed outdoor systems to have a large number of small cells. But that's going to change with this next wave of wireless, because not only do we need significantly increased capacity for things like virtual reality and high performance systems, but also these small cells are going to be closer to the end user and therefore use less energy in transmission. Once we have these small cells, we want to optimize both the small cells and the large cells. We'll always have large cells, because when you want to roll out a new system you blanket the entire geographic region with coverage, and you do that through large cells first. So we're always going to have hierarchical networks of both large cells for coverage and small cells for capacity and power efficiency. But if we roll out these small cells, we're no longer going to be able to configure them in the way that large cells are configured, which is very old-fashioned. If you want to put up a base station, and I know this because I'm on the board of Crown Castle, basically you have workers go out and mount the antennas on the tower, and then they configure the parameters: they drive around and take measurements and reconfigure the parameters, and once they're done they go away and they don't really go back and reconfigure the
parameters again. Now, if we're going to have this proliferation of small cells, we can't do that; it's too costly. But we shouldn't even be doing that for large cells: we should really take advantage of optimization in the cloud to do the resource allocation of configuring the parameters of the large cells and the small cells to work in harmony in the most effective way. So we can think about coming up with these cloud optimization algorithms, but it's a very hard optimization problem. Frequency allocation alone is NP-hard; I alluded to that back in the second generation of systems, but those were second generation systems where we had single antennas and frequency division multiplexing. Now we have multiple antennas, we have power control on different devices, we have very heterogeneous needs of the different users, and we have large and small cells. So there are many challenges to doing this kind of optimization; it's not just NP-hard, it's NP really hard. But fortunately we have advanced optimization tools that can deal with very challenging optimization problems, whether they're convex or not convex. One of the things that I think has not been looked at that carefully is this notion of fog optimization, which really applies directly to cellular systems where you have isolation based on geography. If you're looking at a set of small cells and large cells in one geographical region, they're really not going to interfere a lot with cells in another region. I'm here sitting in Princeton; the configuration of macro cells and small cells in Princeton is really not going to interfere with the ones in New York, or even in Edison on the way to New York. So why are we doing cloud optimization across all of the cells in the region? We can split off the small groups of cells that interfere with each other and optimize those, in what's called fog optimization. So it's not fully distributed, it's not fully centralized, it's something in between, and ML can also play a role in these optimization
techniques, as I alluded to earlier. In addition to the optimization of the resources, we also have to think about optimizing caching of the data at the edge, and edge computing. I'm not going to talk about that here, but that's a very interesting set of challenges as we look towards automation, or doing AR/VR, or doing biomedical applications where we want to send data to the doctors only in the case where it's highly relevant, and so you want to do edge processing before you forward it along. Okay, so talking about fog optimization versus centralized optimization: this is some research that we did showing that if you're doing the optimization in a centralized manner, that's always optimal, but if you split it up into virtual cells you can actually get almost as good performance with much lower complexity. And so you see that there's a 10x loss in fully decentralized performance versus fully centralized, but having a small number of virtual cells actually allows you to get almost as good as the fully centralized result, but with a lot lower complexity. And the same is true when you're doing single-user decoding; both of these results show that this fog optimization is very powerful. I'm going to spend the last few minutes talking about some high-level techniques. If you think about doing resource optimization in the cloud, there are other things that we can do in the cloud for all kinds of different networks, and this is going to be essential for the next generation of wireless, where we're bringing together cloud and wireless and backbone networks. We really need to take advantage of the cloud to optimize these networks. And why is that essential?
Well, if you're thinking about future applications like autonomous driving, you're going to need to use past data processed in the cloud, but also real-time estimates and real-time control at the edge, or in the devices themselves. That's going to be true for many IoT devices as well, and telemedicine, which I think is one of the most important applications, requires not only collecting massive amounts of data from the sensors that are inside people, but using past data from a particular user, as well as medical data that crosses countries and hospitals and diseases, to understand how to use the personalized data that you're getting from a particular patient and what you would actually send to your doctor. I also just wanted to briefly mention this notion of security, privacy and resilience. We've always been worried about link failures in wireless communication; that's what happens with fading, or when you go into deep shadowing. But in commercial systems we haven't really worried much about jammers or malicious agents that are trying, say, to derail team robotics activity; and eavesdropping, in terms of security, is something that we've always handled at the application layer, let alone the notion that the entire grid could go down. So how do we think about all of these different challenges as a wireless communication system designer? I'm just going to go through a couple of quick ideas. One of the areas that we've looked at is doing centralized detection with flaky links, and the idea is that we have the sensors collaborate within predefined clusters, and then whichever sensor in the cluster has a good connection to the fusion center is able to send the collective data. So this is one way that we can get around compromised sensors or compromised links: using group dynamics and group collaboration to get much more robust performance.
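A toy version of this cluster idea, with the readings, cluster layout and link states all as illustrative assumptions: sensors average their readings within predefined clusters, and a single surviving link per cluster suffices to deliver that cluster's data to the fusion center.

```python
def cluster_fuse(readings, link_up, clusters):
    """Within each predefined cluster, sensors share readings locally; any
    member with a live link to the fusion center forwards the cluster average.
    Returns the fusion-center estimate, or None if a cluster is fully cut off."""
    forwarded = []
    for cluster in clusters:
        avg = sum(readings[i] for i in cluster) / len(cluster)
        if any(link_up[i] for i in cluster):   # one good link is enough
            forwarded.append(avg)
        else:
            return None
    return sum(forwarded) / len(forwarded)

readings = [20.0, 21.0, 19.0, 22.0, 20.5, 19.5]   # e.g. temperature sensors
clusters = [(0, 1, 2), (3, 4, 5)]
# Four of six direct links are down -- one live link per cluster still works.
link_up = [False, True, False, False, False, True]
print(cluster_fuse(readings, link_up, clusters))   # about 20.33
```

Without the cluster collaboration, only the two sensors with live links would be heard; with it, all six readings reach the fusion center through the two surviving links.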
So these are the predefined sensor clusters, and you can see that through this we get much better performance. We're also looking at something similar when we do federated learning. Here we are trying to learn a particular aspect of the network that we're sensing, but we also want to impose privacy on the different devices, so that you don't necessarily give away your personal data in this federated learning technique. So we've been looking at ways to do federated learning when the links are flaky and you're also trying to preserve privacy constraints, and we have some results on that. And the last area that I think is really interesting is when you're doing some kind of team effort: suppose you have a set of drones that are trying to identify a target, or to identify some dangerous situation where you want to mitigate the danger. If you have malicious agents that are trying to derail the activity of the robots, then how do you know which agents should be trusted and which ones should not be trusted and should be ignored in this group activity of doing a search and rescue, for example? We've looked at consensus algorithms where we can converge, almost surely, to the performance of having all of the nodes trusted, by identifying through the performance of the algorithm which nodes are actually diverging from the collective behavior, and therefore which are likely untrusted agents that should be ignored. So this is, I think, an interesting technique where you take advantage of the location of the users and the kind of data that they're sending to determine whether they're trusted or untrusted, and then you have an algorithm that automatically boots them out of participation when you deem them untrusted. And again, I don't have time to talk about these, but they are interesting new areas for technology disruption.
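A heavily simplified sketch of that kind of trust-aware consensus (the median test, the threshold and all values are illustrative assumptions, not the actual algorithm): agents repeatedly average the reports of currently trusted peers, and any agent whose report strays too far from the group median is flagged untrusted and excluded from then on.

```python
import statistics

def trusted_consensus(values, malicious, rounds=30, tau=5.0):
    """Average consensus that drops agents whose reports stray from the median.

    `values` are initial readings; agents in `malicious` always report 100.0.
    The threshold tau and the median test are illustrative assumptions."""
    x = list(values)
    trusted = set(range(len(values)))
    for _ in range(rounds):
        reports = [100.0 if i in malicious else x[i] for i in range(len(x))]
        med = statistics.median(reports[i] for i in trusted)
        trusted = {i for i in trusted if abs(reports[i] - med) <= tau}
        avg = sum(reports[i] for i in trusted) / len(trusted)
        for i in range(len(x)):
            if i not in malicious:
                x[i] = avg        # honest agents move to the trusted average
    return trusted, x

honest = [10.0, 12.0, 11.0, 9.0]
trusted, x = trusted_consensus(honest + [0.0], malicious={4})
print(sorted(trusted))    # [0, 1, 2, 3]: the malicious agent 4 is excluded
print(x[0])               # 10.5: honest agents agree on the honest average
```

The honest agents converge to a common value while the injected outlier is detected and ignored, which is the "boot them out of participation" behavior described above, in miniature.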
So some of the challenges to NextG disruption, beyond the technical challenges: current networks are very complex; as governments, we have not invested in wireless technology for a decade, because it was viewed as mature; standards processes can stifle innovation; and the hardware and software are generally proprietary and closed, which makes it difficult for researchers to actually test out their ideas. And finally, the people that work in wireless communication don't talk to the backbone network designers or the people that are building the electronics for the applications. So that creates these silos, which prevent big breakthroughs. It is a new era for U.S. technology innovation. I'm on President Biden's Council of Advisors on Science and Technology, so I was privileged to be able to attend the signing of the CHIPS and Science Act, where we have $30 billion in the United States going towards research in areas such as wireless. So it's a very exciting time to basically execute on the ideas I've presented, and others. I think another thing we should keep in mind is that we have many billions of people that are unconnected. So it's not just about bringing better performance to the people that are connected, but really ensuring that the next billion, who suffered greatly, particularly during the pandemic, by not having connectivity, are connected. And this is not a hard problem to solve in terms of connecting the unconnected: we have the technology to do it, in cellular, in satellite, in low-cost hardware and devices. So we have the technology, but what we really need is governments to commit to connecting the unconnected, investing the money and deriving the policies that will ensure that we can bridge this digital divide. So let me wrap up here and just say it's been a pleasure presenting this to this group. It's a very exciting time for wireless technology. These technologies will enable new applications that will change people's lives.
Future wireless networks must support higher data rates, extreme energy efficiency and low latency. We also have to keep in mind security, privacy and resilience. There are many challenges ahead, which makes it a very exciting time for wireless technologists, and we should keep in mind the killer app of connecting the next billion. So with that, let me wrap up, and I'm happy to take questions. Thanks a lot, Andrea. Fantastic talk. And we have one question, and I encourage other participants to use the Q&A panel; please put your questions there. Edwin is asking: do you see reconfigurable intelligent surfaces, that's another topic, right, playing a role in the next generation of wireless systems? So the reconfigurable surfaces, I think, are related to this massive MIMO challenge: how do we create these surfaces that can channel the energy, or emit the energy, directly to the devices that we want to talk to? So it absolutely is an important component of NextG, in the same way that massive MIMO is. I think the real challenge for massive MIMO and reconfigurable surfaces is how do we get around the physical propagation challenges. It may very well be that the reconfigurable surfaces are much lower complexity than doing massive MIMO, analog or digital massive MIMO. But there's still the challenge of how do we close the link in an environment at millimeter wave frequencies that's changing, that has all these shadowing effects and blocking objects. And that isn't a problem that can be solved with the surfaces themselves, unless there's a way to get around those objects. So I think that's an interesting topic for people working in that area. Okay. Since there's no other question, I have a question in the meantime; maybe others will state their questions also. As you know, you also mentioned that OFDM will somehow fade out, and maybe the next generation, especially 6G, will need some waveform like orthogonal time frequency space (OTFS) modulation.
As you remember, in 4G we were heavily using OFDM. And then I remember like yesterday that for 5G we said we need a new waveform. And then a lot of people worked on it, and at the end we ended up again with derivatives of OFDM, right? Cyclic prefix OFDM and also DFT-spread OFDM. So what do you think about that? Also, some people are pushing for NOMA, as you know; that could be another waveform design, right? And you mentioned OTFS, but do you really think that it has some promise, that it can be used for the next generation systems? So I think that OFDM is effectively capacity achieving when you can estimate the channel, feed it back and adapt to it. And there are going to be many use cases in 6G and 7G and 10G where that's going to be the case. So I don't think we should necessarily look at replacing OFDM; I think we should look at adapting the waveform that we use to the particular application. And so OFDM is going to be around for a long time. The question is how do we decide which other waveforms should be looked at, and how do we actually implement them into a system that is also using OFDM? OTFS, I think, is very promising. I mean, I was on the technical advisory board of Cohere, so a conflict of interest, I guess, a little bit there, but I have to say I went in very skeptical: oh, here we go, another waveform, we've heard this before. But they actually demonstrated some very impressive performance, and they have continued to demonstrate impressive performance. So I think that it is certainly a candidate to supplement OFDM in 6G. The other advantage of OTFS is that there's this Zak transform that allows you to go from OFDM to OTFS and back again. So you don't need to build a completely separate physical layer for OTFS; you can just have it as an add-on with this Zak transform. So those are two big advantages of OTFS.
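As a concrete aside on the Zak transform mentioned here: a minimal discrete version is just a DFT taken across one axis of the reshaped signal, and it is perfectly invertible, which is what makes the OFDM-to-OTFS bridge cheap. This is only a transform sketch under assumed block sizes, not an OTFS modem.

```python
import cmath

def zak(x, M, K):
    """Discrete Zak transform of x (length M*K):
    Z[l][k] = (1/sqrt(K)) * sum_m x[l + m*M] * exp(-2j*pi*k*m/K),
    i.e. a K-point DFT across the m axis for each delay index l."""
    return [[sum(x[l + m * M] * cmath.exp(-2j * cmath.pi * k * m / K)
                 for m in range(K)) / K ** 0.5
             for k in range(K)] for l in range(M)]

def izak(Z, M, K):
    """Inverse Zak transform: x[l + m*M] = (1/sqrt(K)) * sum_k Z[l][k] * exp(+2j*pi*k*m/K)."""
    x = [0j] * (M * K)
    for l in range(M):
        for m in range(K):
            x[l + m * M] = sum(Z[l][k] * cmath.exp(2j * cmath.pi * k * m / K)
                               for k in range(K)) / K ** 0.5
    return x

M, K = 4, 8
x = [complex(n % 3 - 1, (n * 7) % 5 - 2) for n in range(M * K)]  # arbitrary signal
x_back = izak(zak(x, M, K), M, K)
print(max(abs(a - b) for a, b in zip(x, x_back)) < 1e-9)  # True: perfect roundtrip
```

Because the transform is unitary, a system can move between the time-frequency (OFDM) and delay-Doppler (OTFS) views of the same samples without loss, which is the "add-on" property described above.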
Its performance is very good, and it's kind of seamless to integrate into existing OFDM systems. Is that the only waveform? Is that the best waveform? I don't know the answer to that, because I haven't looked at all the different ones that are being developed, but I think OTFS, and variants on OTFS, or completely different waveforms, should be seriously considered for 6G, for applications where you cannot adapt fast enough to the channel. Because OFDM works terribly when you cannot adapt to the channel, since you've got some fixed modulation in each time block, and unless you're using the lowest modulation, designed for the worst-case channel, you're going to have huge errors when you're trying to send more information through that time-frequency block than the channel can support. And we've seen that in the performance of 4G and 5G. So I think it's an area where the standards committees should really be investigating the different options for a waveform that doesn't require fast adaptation, for certain applications. You know, one idea could be, you mentioned a lot of this, but I'm just repeating: it would also be good to do AI or machine learning boosted adaptive waveform design, right? So you can do all this learning offline, and then accordingly you can build an adaptive waveform; you mentioned the time and space and frequency domains, and then you can go back and forth and try to design a waveform. That would be really good. That's a very intriguing notion, although I have to say we did a little bit of work looking at machine learning for code design and we didn't get very far. And I think it gets back to the lesson that I learned when we used machine learning for Viterbi detection: machine learning works well when you don't know the channel, or the complexity is so high that you can't apply a standard technique. And that's really not the case for our good coding systems now.
I mean, we have capacity-achieving coding for many channels. We have low-complexity decoders for many channels. And so what are we going to get out of machine learning when applied to coding? Now, for modulation, I haven't actually looked at that problem, and I don't know whether it's true that our modulation is optimal for all the channels that we have. I mean, I already said it's not optimal for a channel that's changing too fast to adapt to, but does that mean we should look at, for example, OTFS or CDMA or other variants of modulations that we already know and have used in settings where we didn't adapt? Going back to CDMA, right: the whole premise of CDMA was that you get the full diversity of the entire bandwidth, and therefore you don't need to adapt, and that's why it was optimal for 3G. It's just that then the data rates got higher, and the channel could support them even if you weren't taking advantage of the frequency diversity. So I think that there's room for research and standardization to look at what are the modulations that we have now, and where might machine learning play a role, which is usually when complexity is too high or the models aren't known. I think we know our channel models, and we have good modulation and demodulation techniques. So I'm skeptical that machine learning would play a significant role in the design of modulation or demodulation schemes, but I'd love to be proven wrong. It might be an area. I think there are some more questions. There is a similar question; you already answered almost all of it, but I'm just reading it for the sake of the record. Francisco Monteiro from Portugal; remember, we were in Lisbon and Francisco was there too. Can you please elaborate on results and experience you have had applying machine learning to channel coding and decoding? So, you already mentioned that. So I talked about that; some of this was joint work with John Cioffi, my wonderful colleague.
He just won a national award. We worked on this with our joint student, and we did not get very far. That doesn't mean that there isn't some promise to be had there. I'm personally not a coding theorist, so I have less insight into how machine learning might play a role in coding than in other areas, but before I would have a student delve into looking at machine learning for coding or decoding, I would say: okay, well, the meta lessons I've learned on applying machine learning are that if you don't have a good model, or if the complexity is too high, machine learning can make a difference. And that's true not only in wireless but in image recognition and voice recognition, where we never had good models; we used hidden Markov models, which were terrible models. So for coding, I think, again, we understand the channels that we're using pretty well. I mean, one thing to think about: for example, in channels with deep fades, we use an interleaver and then treat the channel like an additive white Gaussian noise channel. So that's a place where we're really not modeling the channel the right way; that might be a place where there's room for machine learning to play a role. But I think that before jumping on the bandwagon, it's good to ask the question of why do I think machine learning will work better here than the theoretically optimal techniques that we've developed. Good, thank you. And then there is another question by Andersey Shachak; I hope I pronounced it right. Do you think link capacity as a KPI will be a significant factor for the next generation, if mobile user equipment (UE) cannot consume it, or they can consume it but we do not have it available? So anyone, in the decades that I've been working on wireless communication, who bets on the fact that users won't find a way to use as much bandwidth as we give them has been wrong. And I think that will continue to be the case.
So however much bandwidth we give to users, they're going to consume more; cat videos, I don't know what people do with all this bandwidth. But they're not willing to pay for it now; that's the interesting thing. People want higher data rates, but they're not paying their carriers more money for the higher data rates, and that's why the cellular industry is in some turmoil right now. So what we need to do is come up with new use cases for this higher bandwidth that people are willing to pay for. I don't have the answer to what those use cases are, but one potential one is machine learning on your device. If you actually wanted to run ChatGPT on your device, you're going to need a lot of data coming into the device, either because it's sending you the model or because you're actually doing training on your device. And we don't yet have good ways to do machine learning on our devices. So that's one potential killer app. I don't know if it will be AR/VR; maybe, but I'm not sure people are going to be willing to pay more for that. So what the carriers are looking for are ways to make people pay more money for a service, and that means it has to be something different from what they have right now. Many of the applications that I think could be very compelling for users, such as automated driving or telemedicine, and I think there's huge potential for telemedicine, aren't necessarily high data rate; they're low data rate. So we have to be creative in thinking about what applications users are going to be willing to pay for. On the second part of the question, of whether users can consume what we have not made available, I'm not completely clear on what that means, but if users want higher data rates that they can't get from the cellular systems, they're going to go to Wi-Fi, which is what they've already done. And I think if Wi-Fi didn't exist, first of all, the carriers would be in much better shape, because they'd be the only game in town.
I mean, they'd be the only way for people to get connectivity of any kind, whether high performance or low performance. But Wi-Fi is incredibly successful, and it has a lot more spectrum than cellular. The thing is that if we can get people hooked on applications that they can do on Wi-Fi now, but that they want to do from anywhere, including places where they don't have Wi-Fi, then small cells are going to grow: small cellular systems, small cells that work within the licensed bands. But that hasn't happened yet, right? People are generally willing to wait until they get Wi-Fi connectivity to watch baseball games or basketball games, or for game playing, this kind of thing. So I think this is a challenge for the carriers, and for the companies and startups and entrepreneurs that are thinking about how do we create killer applications for cellular systems that can't be done on Wi-Fi alone. Thank you. And before I state the last question, I just want to remind all of us: when you look back at the 80s, 40 years ago, when you asked people what are you doing in research, people said high-speed networking, the information superhighway, right? And then in the 90s everybody said we're doing ATM networks, and then later on we did MIMO and massive MIMO. And now, there was a question about RIS, reconfigurable intelligent surfaces. And another topic that everybody is jumping on, as you said, the bandwagon, is semantic communication. So the question here, Mehdi Rahmani is asking: how do you see semantic communications? Will it be effectively quantified or measured in the information theory domain, in terms of the semantic content in a message or communication channel? Yeah, I mean, I find semantic communication very interesting, and it alludes to one of the things I mentioned, which is joint source and channel coding, or cross-layer networking, right?
I mean, semantic communication is saying there's something that I wanna do with my communications, it's not just ones and zeros, but there's a particular purpose, and is there a way to design a communication system better when you're taking into account what the communication is actually trying to do? We did a little bit of work on this, and this is why I talked about it in the machine learning context. So we did some work on this for joint source and channel coding, an age-old problem that I've worked on for decades, and we actually found that machine learning worked better for joint source and channel coding than the optimal source code and the optimal channel code designed separately. But I believe that work, which unfortunately ended when the student graduated, so we didn't spend too much time on it, was mostly around the source coding, because we don't have good models for source coding; this was source and channel coding of text. And so what we found in terms of semantics is that if you do distance decoding and say, well, if what you're trying to say is "the car is arriving" and you say "the vehicle is here," those are similar in terms of semantics, and you can design a source and channel code where those coding blocks are grouped together, and so it's not declared a mistake if what you get is a completely different sentence than the one you started out with, as long as the meaning is the same. I think that's a really intriguing thing to look at, but I'm not convinced, first of all, that it's practical, because if you think about our communication networks, they have to support so many different applications. So if you're tailoring the communication network to the semantic context, how is that gonna scale for all the applications that you wanna do? It may be applicable to certain closed networks, like if you have a particular network or network virtualization that's more particular purpose, you may be able to do that in practice, but then the question is, what gains are you gonna get?
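The distance-decoding idea described here, declaring no error when the decoded sentence differs in wording but not in meaning, can be sketched with a toy example. This is purely illustrative and assumes a hand-made synonym table standing in for the learned embeddings a real system would use; none of the names below come from the actual work.

```python
# Toy sketch of semantic error checking: a decoded sentence counts as
# correct if its meaning is close enough to the transmitted one, even
# when the exact words differ. A real semantic decoder would use learned
# sentence embeddings, not this hand-made synonym table (an assumption
# for this sketch).

SYNONYMS = {
    # Each word maps to a canonical representative of its synonym group.
    "car": "vehicle", "automobile": "vehicle", "vehicle": "vehicle",
    "arriving": "here", "arrived": "here", "here": "here",
}

def canonical(sentence: str) -> set:
    """Lowercase, split into words, and map each word to its group."""
    words = sentence.lower().replace(".", "").split()
    return {SYNONYMS.get(w, w) for w in words}

def semantic_similarity(sent_a: str, sent_b: str) -> float:
    """Jaccard similarity between the canonicalized word sets."""
    a, b = canonical(sent_a), canonical(sent_b)
    return len(a & b) / len(a | b)

def is_semantic_error(transmitted: str, decoded: str,
                      threshold: float = 0.6) -> bool:
    """Declare an error only if the meanings diverge too much."""
    return semantic_similarity(transmitted, decoded) < threshold

print(is_semantic_error("the car is arriving", "the vehicle is here"))  # False
print(is_semantic_error("the car is arriving", "the meeting is over"))  # True
```

Here "the vehicle is here" canonicalizes to the same word set as "the car is arriving," so it is not declared a mistake, which is the grouping of semantically equivalent sentences into one coding block that the passage describes.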
And I've yet to see, I mean, again, going back to using machine learning, we beat traditional source coding and channel coding separately, but we didn't actually go back and look at just the source coding problem alone. I suspect that machine learning is gonna do a much better job of source coding on text than any of the techniques we have now, because we don't have good models for text, so it goes back to the meta theme that I talked about before. So I think semantic communications is a really interesting thing to look at, but if you are gonna delve into it, be cognizant of the fact that you may get negative results. It may show that, in fact, joint design of the thing that you wanna communicate and the communication system itself may in the end be stymied by Shannon's separation theorem, which is that source and channel coding should be separate, and even though that theorem was for a very, very specific point-to-point channel with, you know, additive white Gaussian noise, I don't think in practice we've yet found a scenario where that's not the right thing to do. Thank you, Andrea. I think we should be done with the Q&A session. I really thank you again for your time, great talk and great discussion so far. And now I ask Alessia to take over for life lessons from Andrea Goldsmith, and thank you again, Andrea. Sure, I'll be here. Thank you, thanks for inviting me. Really a pleasure to see you again. Thank you, thank you. Thank you very much, Professor Goldsmith, for this extremely interesting presentation. Thank you for moderating this session. So now it's time to move to the wisdom corner, live life lessons, which is based upon the idea of giving a unique, special angle to this series of webinars, adding a personal touch. So Professor Goldsmith, I would like to start with my first question, which is: what are your hard-earned life lessons, or failures if you can say that, that you would like to share with us, that may perhaps help some students or young researchers that are attending our webinar? Sure.
So one of the things I've learned as a life lesson is to not be afraid of failure. I like to tell people it's good to fail early and often, because then you're not afraid of it, and I've had many, many failures. On a personal note, I was the baby to save my parents' marriage, and they divorced when I was six months old, so that was my first failure. When I was an undergraduate freshman at Berkeley looking to study electrical engineering, I had a terrible first year. I mean, I was working my way through college, I wasn't prepared for the classes I was taking, I certainly wasn't prepared for the competitive environment, and I did terribly my first year. I mean, I got a couple of Cs, I failed some exams. Nobody thought I should be there as an engineering student, and that was partly because there were no women and there was a lot of implicit and explicit bias against women. And I persevered, and I said, well, I'm not going to worry about what anybody else thinks, I'm just going to decide if this is what I want to do. And I think that that lesson as a freshman, realizing that I may encounter challenges, and it may be that I fail, but I have to decide whether I want to continue or not, it's not up to anybody else, and I'm really glad that I continued. I mean, I love the profession of engineering, it's been an incredibly rewarding profession, so I'm glad that I learned at that young age to fail and keep going. And that same lesson came back in spades in my first startup. Quantenna was an incredibly challenging professional experience.
In many ways, the people dynamics, the technology dynamics, the startup dynamics, everything about Quantenna was challenging, and we almost closed the doors of the company multiple times because we were almost out of money, and yet we persevered, and we went public a decade after we got our first funding. And that was, you know, I won't say that was the best day in my professional career, because working with students and being a professor are so incredibly rewarding, but starting a company that went public was incredibly rewarding, and that never would have happened if I hadn't persevered. And that perseverance in those early days of my startup was because I wasn't afraid to fail. I, you know, left a very cushy academic job, at the time I was a full professor at Stanford, to start a company, and I had no idea what I was doing, and the chances of failure were high. But I think if you fear failure, you don't do bold things, and that makes for a much less fun and rewarding career, for me anyway. So my life lesson is to be courageous and to do bold things and to do what you love, and not worry about whether you're gonna be successful or not, because if you do what you love, you'll love it, so that's already success, and you're more likely to be successful. Wonderful, thank you so much. So never be afraid to fail. Yes. That's a great lesson, but can you now tell us actually the opposite: one of the most tangible contributions that you have made in your career, that you believe had an impact on your life and maybe on others' lives, that you're very proud of? Yeah, I think one of the things I'm most proud of, I'm gonna give two examples, and they may surprise people, because I'm not gonna talk about my technical contributions or my awards or any of that.
The two things that I am perhaps most proud of are my mentoring of young people, young engineers, particularly young diverse engineers, and coupled with that is my work in the IEEE on diversity, inclusion and equity. I'm actually, I'm very honored to receive an award from the IEEE next month, to be inducted into the Technical Activities Board Hall of Honor for the work I've done in the IEEE to foster diversity, inclusion and equity in the profession of engineering. I don't believe our profession can achieve its full potential unless we really welcome people with diverse ideas, diverse perspectives and diverse experiences, and we don't. I mean, I didn't have time in my talk to talk about diversity in engineering, but I'll just mention it here. If you look at the data, I mean, more than half of the women that join technical companies leave. The percentage of women undergraduates studying engineering is less than a quarter. The percentage of women faculty in electrical engineering is about 14%. The percentage of women patent holders is 13%, and the percentage of VC funding that goes to women-founded companies is less than 2%. You know, I mean, if you look at the data, we have not welcomed women into the profession of engineering. And that's a failure on the part of all of us, all of us engineers. And it's a failure for the profession, because we are excluding women and people of color and people from other geographic regions from this amazing profession. And so I think that the fact that I've actually made a difference in the IEEE around diversity matters a lot to me. And this award that I'm receiving is one of the most meaningful. And the other award that I've received that's incredibly meaningful is the mentoring award I received from Stanford. I was the inaugural postdoc mentoring awardee. And then I also received an IEEE award from Women in Communications Engineering for mentoring.
And to me, mentoring young people, particularly diverse engineers who don't have the same cheerleaders and support as others, has been incredibly rewarding. It's paying it forward. It's seeing the impact of your mentoring on these young people as they mature and thrive in the profession, and then take the mentoring lessons that they learned from me and apply them in their own mentoring. So those are the two things that I would say, the impact on the profession of engineering that I'm most proud of: mentoring, and helping to nurture and support diversity, equity and inclusion in the profession. Wonderful, wonderful examples. Thank you so much for sharing. Let's go more into the technical side: which fields and which topics would you recommend students to study today? Yeah, I get asked that question a lot. And what I tell students is: follow your passion. You might think that, okay, today there's $30 billion going into semiconductor technology with the CHIPS and Science Act, so I'm gonna go into the chip field. Well, who knows, four years from now when you get your undergraduate degree, or four or five years from now when you get your PhD, things might change. And so really think about: what are you passionate about? What do you love? What do you wanna spend your undergraduate years or your graduate years delving into deeply that you will find exciting? And I think if you do that, then when you come out the other end, first of all, you will have enjoyed that time. Secondly, you will have done good work, because it's something that you love spending time on. And even if you end up getting a job or switching areas in your research, you have the foundation of having done some really good work in an exciting area, and that will serve you very well.
So I think that when I went back to grad school in 1989, the reason I wanted to work on wireless communication was, I mean, in 1989 the first cellular systems had only been out for a few years; there was certainly no way to know that wireless was gonna be so exciting. I had worked in defense communications for the previous three years, and I loved wireless communication. It was something I found completely magical, that you could send data around the world or around the block or whatever. I mean, I just could envision how exciting it would be if we could build up capabilities in wireless communication, and I didn't have enough knowledge to do that, and that's why I went back to grad school. So it worked out great for me. I mean, my timing was terrific, but it wasn't that I said, oh, there are all these different fields I can go into, I'll go into wireless because I think that's gonna be the hottest field when I graduate. There was no way to know that. So I think it's really just pursuing what you love, talking to people you trust, talking to mentors, finding mentors and talking to them about different things that you're thinking about, and particularly, especially for graduate students, finding an advisor who is really the right person for you to work with. I like to say that choosing a graduate advisor, first of all, is the most important decision that you'll make as a graduate student. And secondly, it's a little bit like a marriage, or a parent. The relationship between the advisor and the advisee is very special. And even if I'm a very good advisor to certain students, I may not be a good advisor to other students because of my style of advising. So I think for students, finding the right person to work with as your advisor, which is not just because their area is good, but also because the way that they work with their students matches the way that you can thrive in that environment. I think those are really important things to think about as a graduate student.
Wonderful. So studying what you feel passionate about, and having the chance to have a good advisor. Absolutely. Great. And we're curious to know, how do you stay up to date with the latest advances in research in the ICT field? Are there any specific resources, journals, conferences, maybe a book that you would like to recommend, or communities even, that you believe young researchers should engage with? Yeah, absolutely. So it's harder to stay up to date as a dean. I have a lot less time for research than I did in the past, but I still have a research group. And to me, I will never give up doing research, because it's my passion. I also love teaching. I don't, unfortunately, have time to teach now, so I only get my teaching fix by giving webinars like this, or giving tutorial talks, or giving guest lectures in classes. But I think that, so one of the things that I use to stay up to date is my students and my postdocs, because they're really deep in the weeds doing the research. I think that going to conferences is essential, really not just to listen to the talks, because I think the older that you get, the less you can really sit in talks all day and get a lot out of them, but even just seeing what people are talking about, going to plenary talks, talking to people in the hallways. I mean, I've been in this field long enough that when I go to conferences, I know a lot of people. And I know the people that I respect the most. And so I'll say, hey, what are you working on? You know, and hear what they're working on. Plus, there's no substitute for personal interactions. I think we learned that during the pandemic. Yes, I can go read books or journals or this kind of thing, but that's not how I really integrate knowledge. It's talking to people, listening to talks and talking to people. The journals are really where we publish our in-depth work. And journals are also, I think the peer review process is incredibly helpful.
And I've learned a lot through the, you know, papers that I've gotten reviewed, even when they were rejected. You know, having a paper rejected happens at all career stages. I've had papers rejected, you know, recently, but you learn a lot from those rejections. And the people, the reviewers, that read your work and reject it, even though, you know, your first reaction may be that they don't know what they're doing or they had no idea what we were doing, well, that's on the authors for not, you know, conveying it well. I think most reviewers really have good intentions. And so when you read the comments of the reviewers, it's helpful and educational. And I think that's what the journal process is so important for. It's not only to archive the work that we do, why don't we just post our papers on the web and be done? It's because the review process, I think, is very helpful in crafting journal papers and archival results in the most impactful way and the best way. Wonderful. And, you know, nowadays everybody talks about and uses ChatGPT. How will, in your opinion, ChatGPT impact the future of research, first of all, but in general, everybody's lives? Yeah, well, that's a loaded question. As you saw with my discussion of machine learning, I'm a bit of a skeptic about technology being a panacea for everything. I think what's captivating about ChatGPT is that it's the first time, the generative AI, the fact that it can generate language in ways that sound a lot like a human being, is different, right? That's a new evolution of the tool that didn't exist before. Does that mean that it's gonna take over all of our jobs? It's gonna replace computer coders, it's gonna replace writers, it's gonna replace business people and finance people? I'm skeptical that that's gonna be the case. I think it's too early for us to really know how this very powerful tool is gonna impact our lives. I think it will; it is a very powerful tool.
And I think we need to understand it better and understand how to use it. But at the end of the day, it is just a tool. And I think that tools are most impactful when they are applied to domains by people who have domain knowledge. I saw that in our use of machine learning. I think that that's also gonna be true across many different professions, where ChatGPT and generative AI will enhance humans and enhance the tools that we already use. But I think it's really too soon to know how much it will impact the different spheres of our lives, from education to work to socialization. It could have a big impact. It could have a small impact. It could have a negligible impact. And I think all of those are possible in any one of those domains. And it's too soon to really say. So we should, as technologists, embrace this new tool. We should understand it. We should understand how to apply it well. We shouldn't be afraid of it. And I guess this goes back to the courage piece that we started this part of the conversation with: we shouldn't be afraid of a powerful tool. Every powerful tool that's come along, as technologists, if we embrace it, ends up creating technologies that make people's lives better. And I think that will be the case for this tool as well. Wonderful. I have a last question, if I may. Is there a motto or an aphorism, a book or a movie, let's say a piece of art or music, that describes you or that you would like to share with us before closing? Yeah, I mean, I thought about that question, and there really isn't one specific thing, but I would just say maybe a motto or a philosophy that I have that might be helpful to close with. So the first motto I came up with was carpe diem, which is that you really should seize every day and live every day to its fullest. And part of that is thinking about what does success mean? And to me, success is multi-dimensional.
So yes, there's professional success, and I'm incredibly grateful for all the people that have helped create my own professional success, my students, my mentors, my colleagues. But I don't just define my success professionally. I have an amazing family. I'm coming up on 30 years of marriage. I have two wonderful children. I have a wonderful circle of friends and extended family. And those are just as important to me in terms of success. And so when I think about carpe diem, or to really live every day to its fullest, it's defining success in the things that matter to you and making sure that there's a balance in your life, so that all of those dimensions are fulfilled. And I think often when people look at speakers at a webinar like this, they only see the professional success. And I would say for everyone in the audience to look beyond that and think about your own lives and what you aspire to and what will make you happy. And that's what you should pursue. It sounds kind of silly to say that, or trite or simplistic, but I'll just end by saying, when I was a teenager, when I first started driving, I got a personalized license plate, which was "I love life." And that's still my philosophy. I really do love life, and I love every aspect of it. And that's why, to me, being successful is all dimensions of life. And so that would be my advice to people listening: live a full life and seize every moment of it. Thank you so much, really. Thank you so much for your passion, for your enthusiasm, you've really been so inspiring. And thank you for your generosity, for sharing, really giving a personal touch to this webinar. Ian, if you wanna say anything more, please. Also, thanks from me, Andrea. Wonderful, really. It was wonderful. Thank you. My pleasure. Nice day. Thank you very much, really. Okay, thank you so much. Thanks to all of you. Have a wonderful rest of your evening, or day, whatever time it is on your end.
Thanks so much. So I'll see you, everybody, online again on the night of November with Ian and with everybody, our participants. I hope you will join us again. So thank you and bye. Thanks so much.