Welcome to a new episode of the ITU Journal webinar series, where you can find insights and forward-looking research on future and evolving technologies. The ITU Journal is an international journal providing complete coverage of all communications and networking paradigms, free of charge for both readers and authors. This publication considers yet-to-be-published papers addressing fundamental and applied research, building bridges between disciplines, connecting theory with application, and stimulating international dialogue. Its interdisciplinary approach reflects ITU's comprehensive field of interest and explores the convergence of ICT with other disciplines. We count on your support to make this webinar an interesting experience. Please submit your questions via the Q&A channel at the bottom of your screen. All questions from the audience will be taken during the Q&A session after the talk. The meeting is being recorded, and the recording will be made available on the webinar website. Closed captioning is also available for this event; you can enable it by clicking the closed-caption icon at the bottom of your screen. We hope that you will enjoy the talk, and we encourage you to stay connected until the end for the wisdom corner. I will now give the floor to our master of ceremonies. Hello, and welcome to the last episode of this year's series of webinars with the academics of the ITU Journal on Future and Evolving Technologies. My name is Alessia Magliarditti, from ITU, the International Telecommunication Union, the United Nations specialized agency for information and communication technologies. It is my pleasure to open today's webinar with Professor Jeffrey Andrews from the University of Texas at Austin. We count on your support to make this webinar an exciting experience. Please submit your questions via the Q&A channel; we will address them to our speaker during the Q&A session.
After the Q&A session, as just announced by our avatar, I will moderate the wisdom corner: live life lessons. So please stay online; Professor Andrews has agreed to a personal chat and will share with us some lessons learned over the years that might be useful for some of you. I am pleased now to introduce Professor Ian Akyildiz, the editor-in-chief of the ITU Journal and founder of Truva Inc. Two years ago, with Professor Akyildiz, we launched this new scientific journal, and we are now moving towards an impact factor. Professor Akyildiz is Ken Byers Chair Professor Emeritus in Telecommunications at the Georgia Institute of Technology. In the last two decades, he established many research centers worldwide, including in South Africa, Spain, Saudi Arabia, and Finland. He is editor-in-chief emeritus of impact-factor journals, highly cited, and at the top of the most prestigious international rankings, and he is a visiting distinguished professor at several universities around the world. His current research interests are in 6G and 7G wireless communication systems, holographic communication, terahertz communication, molecular communication, the Internet of Bio-Nano Things, and many other subjects. So, I am pleased to give the floor to Professor Akyildiz to introduce the speaker and moderate the Q&A session. Ian, the floor is yours. Thank you, Alessia. Good morning, good afternoon, and good evening worldwide, from Abu Dhabi with love. I again welcome you all to the fourth season of our ITU Journal on Future and Evolving Technologies webinar series. In the first two seasons we had research leaders from academia, and in the third season we had leaders from industry. We are fortunate to have top scientists lined up this season. Before I present our speaker, Professor Jeffrey Andrews, I would like to briefly talk about our journal, the ITU Journal on Future and Evolving Technologies, in short form J-FET.
The objective of our journal is to bring the academic and industrial worlds together, in order to establish a strong bridge between academia and industry. The journal idea was incubated back in December 2019, and the inaugural issue came out in December 2020. It is an open-access journal: no fees for the readers, no fees for the authors. The papers go through a review process, and we try to cover all forefront research activities in the world, both in academia and in industry. I would encourage you all to submit your papers, and if you have ideas for special issues, please do not hesitate to contact us. At this point, I would like to thank Cesar Onoresan, Bilel Jamoussi, Erika Campagnola, and in particular Alessia Magliarditti for their infinite support for this journal. Today I have the great pleasure and honor to present today's speaker, Professor Jeffrey Andrews. First of all, I must personally thank Jeff for taking time from his very busy schedule and accepting our invitation to deliver this distinguished seminar. I met Jeff more than 15 years ago, and I have always followed his outstanding research. Before going through his biography in detail, I must offer a personal preamble: in my opinion, Jeff is one of the top scientists currently in our field. He has made many research contributions across wireless communications over the last 15-plus years, a stellar career so far, and we expect that he will continue his outstanding contributions. He has contributed many pioneering results in wireless communication; the list is too long to cover here. His total number of citations is 110,000 and his h-index is 110. Currently, Professor Andrews is the Truchard Family Endowed Chair in Engineering at the University of Texas at Austin, which is one of the top universities, as you all know, where he is also director of 6G@UT.
He received his BS in engineering with high distinction from Harvey Mudd College, and his MS and PhD in electrical engineering from Stanford University. Andrews is an IEEE Fellow and an ISI Highly Cited Researcher, and he has been co-recipient of 15 best paper awards, including the 2016 IEEE Communications Society and Information Theory Society Joint Paper Award, the 2014 IEEE Stephen O. Rice Prize, the 2014 and 2018 IEEE Leonard G. Abraham Prizes, the 2011 and 2016 IEEE Heinrich Hertz Prizes, and the 2010 IEEE ComSoc Best Tutorial Paper Award. His other major awards include the 2015 Terman Award, the NSF CAREER Award, the 2020 Gordon Lepley Memorial Teaching Award, and the 2021 IEEE ComSoc Joseph LoCicero Service Award. Rest in peace, Joe; he was a very nice fellow and a good friend of mine from the Illinois Institute of Technology, as I recall from a long time ago. There is also the IEEE ComSoc Wireless Communications Technical Committee Recognition Award and the 2019 IEEE Kiyo Tomiyasu Technical Field Award. He has graduated many PhD students, and they are all successful: five of them are already IEEE Fellows, several are professors at top universities in the USA, Asia, and Europe, and others are industry leaders on LTE and 5G systems, on which they collectively hold over 2,000 US patents. I again welcome Professor Andrews to our webinar and wish you all an enjoyable and productive time listening to his talk, entitled "Unlocking New Capacity in 6G Cellular Systems via Site-Specific Machine Learning-Aided Design." Again, thanks Jeff; the stage is yours. Well, thanks Ian. I think you have raised the expectations very high with that extremely generous introduction, so I will do my best here. Let me try to get this set up; hopefully everyone can see my slides in presentation mode now. Just give me a thumbs up, Ian, if that looks okay.
So, thank you for tuning in, both those of you who are live and those watching this later. The point of this talk is to give a bit of an introduction to some exciting research areas that my group has been working on; I have also been following other people's work, and I will give you a few opinions and perhaps a look to the future for 6G. What I will talk about today is, first, a very short preamble on how I see machine learning fitting into wireless; obviously much has been written about this and a lot of people are working on it, so I will just give a few thoughts at the beginning and at the end. Most of the talk will be on three specific things we have been working on in my group over the last few years, which I think hold some promise and share a common theme: they are all about using machine learning to optimize various techniques in a cellular system to an unprecedented level, doing things in ways that you could not without a machine-learning-based approach. I will illustrate that with three examples, and hopefully these examples will be different but also share a common thread that you will identify. Okay, so first of all I just want to raise a few questions about machine learning.
So, as I said, machine learning is obviously filtering into almost every field, not just technology fields but education, law, and everything else, and wireless is no different. But wireless is somewhat unique in several respects: these systems have been very carefully engineered over decades by very smart people, and in many cases there is really strong theory underlying the way we do things today. So the first high-level question you could ask about machine learning for wireless is: where should we do it? I break it into three places. First, you can do it on the actual end device, like the phone, and certainly Qualcomm, Apple, Google, and other companies are looking at doing this. Things that happen on the phone obviously need to be very power efficient; you cannot do a lot of training there, and they need to be almost invisible to the network side. This figure is taken from Qualcomm, and they are certainly putting small GPU and neural-network cores into their chips, so we will see an ever-growing capability on the phone side itself; the second thing I will talk about today does assume that the phone can run a small-model machine learning algorithm. Second, you can do things at the network edge, meaning the base station or, in an O-RAN architecture, things like the base station, and this is going to unleash a lot of new players into the ecosystem. I think this is where a lot of the exciting, wireless-specific machine learning can happen. Things at the edge have to run fast, they need to be scalable, and you need to be able to deploy them all over the network so they can train independently; this will be the theme of my work today, as most of what I present is done at the network edge. Third, you can do things in the cloud; my third contribution covers things that would mostly be done there. These would not be real-time, so they would not affect things on the communication timescale, which is on the order of milliseconds or perhaps tens of milliseconds, but you can do very sophisticated things back in the cloud: training of models, data generation, data processing, and network management. Companies like NVIDIA are betting strongly on this being a major aspect of telecom in the 6G era. Now, why and how should we do machine learning? Why at all? Our systems, like 5G, kind of work; they work pretty well. So what do we need machine learning for? Well, for one thing, they could work a lot better. This "why" question is really in the process of being answered, and I think this is where students and researchers can particularly help. Even NVIDIA, one of the most powerful machine learning companies in the world, is trying to answer this question and looking to academia for help, by finding use cases where machine learning can do something non-incrementally better than what existing algorithms do today. How to do ML is also important: there is a gazillion different flavors of machine learning, running the gamut from very high-complexity models that take tons of data to train, to very lightweight things; there are time-series approaches like reinforcement learning, and other more static approaches. For wireless, because these are such complicated systems, and because they need to work (if they stop working, or you do not understand what they did, that can be a big problem), how do you take these techniques from the research domain, where they show some promise, into prime time, meaning deployment? Ensuring robustness, being able to identify what is going on in the network, and knowing when to revert to safe modes (meaning how we do things today) are also really important questions for getting these into the 6G cellular network. Finally, something I have thought a lot about is what we need to do in the standards themselves to support machine learning, and I am not really sure about this. I think all three things I present today could be done with almost no standards support, with very minuscule changes to the standards, if any, which is interesting. One interesting thing about wireless systems is that they are already endowed with a great deal of measurement capability and a great deal of feedback from all the devices to the network. This happens today irrespective of machine learning: every phone is constantly telling the network things about its channel quality, its channel dimensionality, what its channels look like, its SINR. So you can actually do quite a bit with what is already there. This is an open question; the standards bodies are certainly looking at it, but I think the more we can do without explicit standardization, the better. Okay, so out of all the different machine learning aspects, the thing I am going to focus on today and tie together is what I call site-specific machine-learning-aided design, or site-specific artificial intelligence. What I mean by site-specific is that today, when you deploy base stations in the network, they
have a ton of different parameters that you can set and different ways of doing things, and they do some adaptation: some of it in real time, some over longer timescales, trying to figure out things like how to configure the array. But it is very manual, very ad hoc and heuristic, it is not very scalable, and generally it does not work that well. What we will show today are some examples where we can do a lot better; there are other examples in the literature, and this is just a small sampling. In a broad sense, one of the areas where I think machine learning can really improve on how we do things in wireless systems today is by enabling a generic network to be deployed, where each base station or cell site then configures itself to be optimal, or close to optimal, for its environment. That environment includes the propagation environment, meaning the physical structure of the surroundings, as well as where the phones or end users tend to congregate and what they tend to do in that cell. All of this can be learned in principle, and if you learn it, there are ways to communicate much more efficiently; that is what we will show through these three examples. Okay, so the first example, to get a bit concrete, is one that I have done with my student Ethan, who just graduated. We have recently been collaborating on extending this with Ahmed Alkhateeb, who is at ASU, Arizona State University, and we collaborated with Samsung throughout the project. I am going to assume most of the audience knows a bit about millimeter-wave communication. It is one of the key new pillars of 5G and promises really transformational higher data rates, but so far it has not been that easy to deploy: the links have relatively short range, propagation is a real problem as far as diffraction and penetration, and you also have to put these arrays in all the devices. So this is going to be a 10- or 20-year project to really get this upper spectrum working well and achieving what it can possibly achieve in 5G and 6G. The basic way beams are aligned in 5G is essentially an exhaustive-search approach; you could perhaps call it a hierarchical search. The base station sends out broad beams with a high spreading factor, as I am showing here, and eventually identifies for each UE which broad beam was best; it then sends narrow beams within that broad beam to narrow it down, eventually identifying the best beam to that UE. The UE is simultaneously trying to identify its best beamforming vector back to the base station, and this has to be done for every UE. The refinement phase is therefore on a per-UE basis and takes a lot of time, because the network has to cycle through all the UEs (UE meaning user equipment, the phone). So it is slow, it consumes a lot of power, and it is one of the main bottlenecks to getting this working; and if you want to go to even higher frequencies in 6G, like terahertz or above 100 GHz, it is even harder because the beams have to be narrower. So it is a huge bottleneck, a big problem. We would like to improve upon this in a major way. This is a known problem, and lots of approaches have been proposed for it, but you need all of the properties listed here for a solution to really be good; after all, the reason 5G does this exhaustive search is that it reliably finds all the UEs in the cell.
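To make this baseline concrete, here is a small NumPy sketch of the wide-then-narrow hierarchical sweep described above. It is not the 5G specification or the speaker's code; the array size, codebook sizes, and on-grid line-of-sight channels are illustrative assumptions.

```python
import numpy as np

def steer(n_ant, f):
    """Unit-norm ULA steering vector at normalized spatial frequency f."""
    return np.exp(2j * np.pi * f * np.arange(n_ant)) / np.sqrt(n_ant)

N = 32            # base-station antennas
N_NARROW = 32     # narrow DFT beams, one per on-grid direction
N_WIDE = 4        # wide probing beams, each covering 8 narrow directions

# Narrow codebook: columns are DFT beams.
narrow = np.stack([steer(N, k / N_NARROW) for k in range(N_NARROW)], axis=1)

# Wide codebook: activate only 4 antennas (naturally broad beams),
# steered at the center of each 8-beam sector.
wide = np.zeros((N, N_WIDE), dtype=complex)
for i in range(N_WIDE):
    wide[:4, i] = steer(4, (8 * i + 3.5) / N_NARROW)

def exhaustive_search(h):
    """Measure every narrow beam: N_NARROW measurements."""
    return int(np.argmax(np.abs(narrow.conj().T @ h)))

def hierarchical_search(h):
    """Measure wide beams, then only the narrow beams in the winning sector."""
    sector = int(np.argmax(np.abs(wide.conj().T @ h)))
    local = range(8 * sector, 8 * sector + 8)
    return max(local, key=lambda k: np.abs(narrow[:, k].conj() @ h))

# For on-grid line-of-sight channels both searches agree, but the
# hierarchical one needs 4 + 8 = 12 measurements instead of 32.
for k in range(N_NARROW):
    h = steer(N, k / N_NARROW)
    assert hierarchical_search(h) == exhaustive_search(h)
print("hierarchical search matches exhaustive on all", N_NARROW, "directions")
```

Under these idealized channels the hierarchical sweep lands on the same beam while taking 12 rather than 32 measurements; per-UE refinement overhead of exactly this kind is what the learned probing codebook below attacks further.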
You need something that can find all of the UEs; you cannot just use a compressed-sensing approach for a single one of them, or do this on a purely per-UE basis. These are the properties we want to have; I will not read them all, but we will try to achieve all of them with what I show you today. Our method follows the 5G approach in that we have a potentially two-stage process (we can also try to eliminate the second stage). We send out wide beams over the whole cell, so the goal is to find all the UEs; but rather than sending out generic wide beams like what is done in 5G today, which just cover the angular space, you learn a new way to send these quote-unquote wide beams that is particularly suited to the cell site. We call this the probing codebook, and it is learned and site-specific. Every base station would learn, for example, that it is unproductive to send energy in one direction because nobody ever seems to be in that beam, so it sends it in another direction instead. Heuristically, that is what is happening, but we do it with a much more principled approach, because we find an optimal probing codebook. It is a small number of beams that are sent out often, on the order of every 10 to 100 milliseconds, and that all the UEs listen to. Then we have a second stage: a second neural network at the base station that uses feedback on these probing beams from the UEs. This part is just how 5G works today: the UEs listen to the beams and send feedback on the signal strength of each of them. The second neural network then picks or synthesizes a beam to that user, either directly or through a codebook-based approach, where there could be a refinement phase. So we look at two different approaches. The first is codebook-based, which at that point is essentially a classification problem: you are trying to classify, from the feedback, what the optimal narrow beam is, which is a somewhat easier machine learning problem. The second is what we call grid-free, where you directly synthesize an arbitrary complex-valued beam; this is a continuous action space, so it is a harder machine learning problem from a training point of view. I have listed how it works here, so I will not read the slide. For the codebook-based approach the architecture is fairly simple: a few layers, a standard fully connected multi-layer perceptron, a very simple old-school neural network, nothing too complicated. What I think is important is how it is trained. We train using channel realizations, which have to be obtained somehow (I will talk more about that later), and we train the two neural networks together: the first produces the probing beams, and we extract this matrix W after the training process; these are what we would actually send out every hundred milliseconds or so, whenever we send these synchronization signals. It is trained jointly with the beam-selection neural network, in this case with a cross-entropy loss, to try to identify the best beam of the codebook. For the arbitrary beamforming vector it is a similar approach, but the architecture is a bit more complicated: you need various normalizations and conversions between amplitude-phase and real-imaginary representations. The basic idea is the same, and the key idea is that you are not training these probing beams to maximize SNR or anything like that; you are training them only so that they maximize the downstream probability of predicting the optimal narrow beam. This is important, because we think part of the reason our technique really excels, as we will show later, is that it picks probing beams that, when the UEs feed back on them, give an optimal input, like a spatial signature, that allows you to predict a narrow beam. That is the key aspect. Okay, for all the results I will show today we are getting the channel H from ray-tracing data, but this can be done in a variety of other ways as well, and it is indeed an important aspect for any practical deployment. Let me very briefly talk about how we tested this. We have four different sets of data, all based on the Wireless InSite ray-tracing tool; we are currently doing research to use a very high-fidelity simulator for this, but Wireless InSite is good, commercially available, commercial-grade ray-tracing software. We used some of Ahmed Alkhateeb's released data, as well as a dataset we created ourselves from the Washington, DC area, with fairly standard 5G-type parameters for millimeter wave, looking at outdoor as well as indoor scenarios, and they all perform very well. I will just show here the 28 GHz outdoor probing beams, to give a sense of what they look like. Above is a line-of-sight scenario with just six or eight beams; again, these are the ones sent every epoch over the whole cell, here a 120-degree type sector. They span the whole area, as you would expect for line-of-sight channels, but the codebook does not just break the space up into smaller sectors like the current 5G beams do.
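To illustrate the classification formulation, here is a hedged NumPy sketch of the second-stage idea: UEs feed back received magnitudes on a small set of probing beams, and a classifier maps that feedback vector to the best narrow DFT beam. Unlike the talk's method, the probing codebook W here is fixed and random rather than jointly learned, and the array size, beam counts, MLP width, and training setup are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 16, 6, 16   # BS antennas, probing beams, narrow-beam classes

def steer(f):
    return np.exp(2j * np.pi * f * np.arange(N)) / np.sqrt(N)

# Stand-in probing codebook: random unit-norm complex beams.
W = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
W /= np.linalg.norm(W, axis=0)

def make_batch(n):
    f = rng.uniform(0, 1, n)                  # LOS spatial frequencies
    y = np.round(f * K).astype(int) % K       # index of best narrow DFT beam
    H = np.stack([steer(fi) for fi in f])     # (n, N) channels
    X = np.abs(H.conj() @ W)                  # (n, M) probing-beam feedback
    return X / X.max(), y

# One-hidden-layer MLP trained with softmax cross-entropy (manual backprop).
H1 = 64
W1 = rng.normal(scale=0.5, size=(M, H1)); b1 = np.zeros(H1)
W2 = rng.normal(scale=0.5, size=(H1, K)); b2 = np.zeros(K)

X, y = make_batch(4000)
lr = 0.5
for epoch in range(1000):
    A = np.maximum(X @ W1 + b1, 0)               # ReLU hidden layer
    Z = A @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)            # softmax probabilities
    G = P.copy(); G[np.arange(len(y)), y] -= 1   # gradient of CE w.r.t. Z
    G /= len(y)
    GA = (G @ W2.T) * (A > 0)
    W2 -= lr * (A.T @ G); b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ GA); b1 -= lr * GA.sum(0)

def predict(X):
    return np.argmax(np.maximum(X @ W1 + b1, 0) @ W2 + b2, axis=1)

Xt, yt = make_batch(1000)
acc = (predict(Xt) == yt).mean()
print(f"beam-classification accuracy: {acc:.2f} (chance = {1/K:.2f})")
```

Even with this crude stand-in, the magnitude feedback on a handful of beams carries enough of a spatial signature to predict the narrow beam far above chance; the talk's contribution is to additionally learn W itself so that this signature is as informative as possible for the downstream classifier.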
Rather, it sends some of its energy out as sharp little lobes; this purple beam here, for example, is all one beam, and the lobes collectively cover the space. We think it does this because it allows the UE to feed back a non-trivial set of received values on each of these beams, which helps pinpoint where the UE is and what the optimal narrow beam to it would be. Now, in the non-line-of-sight scenario, we have a blocking object roughly in the middle, and the probing codebook learns to avoid it, instead reflecting energy off the buildings on the sides to get behind it. It passes the smell test: it is learning to do something that reaches most of the UEs in the network. Okay, you can also do the same sort of thing on the UE side; we call this a sensing codebook. But because each UE has a different position and rotation in the cell, we consider these beams to be generic for all UEs, so the receiver-side codebook is not so important; any reasonable set of beams will do equally well. Okay, so let me quickly show some results for this dataset. If we do an exhaustive search, you get the green line, and as you would expect, the more searches you do, the higher the SNR; more searches is to the left, and faster search time is to the right, so faster, to the right, is better. What we see is that our approaches, the blue and orange lines for the single-stage search where we immediately synthesize a beam, are about two orders of magnitude faster for a given SNR than the current 5G method. That is a really dramatic speed-up.
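As a side note on what a multi-lobe beam means mathematically: a single beamforming vector can place several sharp lobes by superposing steering vectors. The sketch below is a hand-built illustration of evaluating such a beam's far-field power pattern; the lobe directions and array size are assumptions for illustration, not learned values.

```python
import numpy as np

N = 32
ants = np.arange(N)

def steer(f):
    return np.exp(2j * np.pi * f * ants) / np.sqrt(N)

# A hypothetical multi-lobe probing beam: superpose three steering
# vectors, mimicking the "sharp little lobes" described in the talk.
lobe_dirs = [0.10, 0.35, 0.70]
w = sum(steer(f) for f in lobe_dirs)
w /= np.linalg.norm(w)

# Far-field power pattern over normalized spatial frequency.
grid = np.linspace(0, 1, 2000, endpoint=False)
pattern = np.abs(np.array([w.conj() @ steer(f) for f in grid])) ** 2

# The single beam concentrates its energy in lobes around all three
# chosen directions at once.
for f0 in lobe_dirs:
    near = np.abs(grid - f0) < 1.0 / N
    assert pattern[near].max() > 0.9 * pattern.max()
print("lobe peaks found near", lobe_dirs)
```

Each UE hearing such a beam receives a different magnitude depending on which lobe it falls under, which is exactly why a handful of multi-lobe beams can produce an informative feedback signature.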
These results cover a few different datasets with lots of UE positions in the cell, so we think this is something that could be recreated in practice: with just a very small number of total searches, with or without a refinement phase, you can really reliably synthesize a narrow beam. So we think this is a very promising approach. We recently did a further extension with a new dataset in Boston and tried to unify some of our results with Ahmed Alkhateeb's team, and we found similar results even when implementing everything together. The main thing we see is a really significant gain from the site-specific probing beams: even if you did a machine-learning-type prediction on generic beams you would get a 10x or more speed-up, and the site-specific approach goes beyond that, and this is just for the transmitter side. So we think this is a very promising approach for how to do beam alignment going forward using machine learning. I list some future directions here. These are academic research results, and we have tried to make them as realistic as possible, but we need a lot more experimentation. We are currently working with NVIDIA; I think we will be among the first academic groups to use their new super-high-fidelity simulator, called Aerial Sim. It does massive ray tracing over an entire city, and you can have multiple cells, many UEs, mobility, and full 5G stacks. We are in the process of trying to prove this out with them, because they also think this is a very promising use case for machine learning, so hopefully in the next six months we will have results in something very closely approximating a real system, showing how you could do this. As for how you would actually train this for a real system, you have a couple of options. You could use something like Aerial Sim, NVIDIA's tool: they are planning to make it work for every city, in North America for example and eventually the world, with a digital twin running in the cloud on which you could train things like this. That is one option a few years from now; it sounds crazy, but you could actually do an offline optimization of every base station in a provider's network and then deploy with those parameters. You would still have to do some updates, and these can be done by harnessing all the feedback you are getting continually from the UEs in normal operation. So I think it is very interesting to see how this could get implemented, and I list a few other extensions here that would have to be considered to go to a real-world implementation. Okay, so I think this is a promising use case for machine learning. Let me pause; I do not know if I should take any questions now, if there are any on this specific part. Please continue, Jeff. Okay, great. So, the second example. This is very recent work that we have not actually published or even submitted for publication yet; we will probably submit the paper in the next week. It looks at one of the main bands being considered for the 6G era: an intermediate band between the lower bands that are currently very widely deployed for both LTE and 5G, and the millimeter-wave bands I was just talking about. Because the millimeter-wave bands have been hard and expensive to deploy, there is a lot of interest in an intermediate band for 6G where we could get a lot of bandwidth, on the order of one or two gigahertz, but hopefully with propagation that would allow a
less dense deployment. This is a study we have undertaken with Nokia on this spectrum, in particular on how to do MIMO in it. As I just mentioned regarding this trade-off, you could get high capacity because of the amount of spectrum, but we also expect the channels here to offer more scattering and richness than higher-frequency channels, which would allow considerable MIMO multiplexing. The antenna arrays will be large, but not nearly as large as at millimeter wave, so the idea is that you could do quite a bit more digital beamforming in this spectrum. We know a lot about how to do MIMO optimally, but one of the really hard things in this regime, where you have so many antennas (maybe on the order of 256 at the base station and 16 at the UE), is estimating the channel. That is really challenging for all the different UEs and all the different subbands, and you want to do it with as few explicit pilot tones and as little feedback as possible; that is what we are looking at here. The basic architecture is that we have a large neural network at the base station that attempts to learn, directly from UE feedback, how to do precoding to those UEs. It is a closed-loop, end-to-end deep learning system in which the UEs have a small neural network. In the real system, the UEs would listen to downlink pilots broadcast from the base station to estimate their downlink channel, which is lower-dimensional (they have a smaller receive array, so it is easier for them to estimate), and they then attempt to compress this estimate.
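As a reference for the "known optimum" just mentioned: for a perfectly known narrowband MIMO channel, the classical capacity-achieving transmit scheme is SVD precoding with water-filling power allocation. A minimal sketch, with illustrative array sizes, an i.i.d. random channel instead of ray-traced data, and unit noise power assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def waterfill(gains, total_power):
    """Water-filling power allocation over parallel channels."""
    # Bisect on the water level mu: sum(max(mu - 1/g, 0)) = total_power.
    lo, hi = 0.0, total_power + max(1.0 / gains)
    for _ in range(100):
        mu = (lo + hi) / 2
        if np.maximum(mu - 1.0 / gains, 0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - 1.0 / gains, 0)

Nt, Nr, P = 8, 4, 10.0   # tx antennas, rx antennas, total power budget
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)

# The SVD turns the MIMO channel into min(Nt, Nr) parallel scalar channels.
U, s, Vh = np.linalg.svd(H, full_matrices=False)
p = waterfill(s**2, P)                  # optimal power per eigenchannel
# The capacity-achieving precoder sends stream i along Vh.conj().T[:, i]
# with power p[i].
capacity = np.log2(1 + p * s**2).sum()  # bits/s/Hz with perfect CSI

# Any other split of the same power budget, e.g. equal power, does no better.
equal = np.log2(1 + (P / len(s)) * s**2).sum()
assert capacity >= equal - 1e-9
print(f"water-filling: {capacity:.2f} b/s/Hz, equal power: {equal:.2f} b/s/Hz")
```

This perfect-CSI rate is the natural upper bound against which a learned pilot-feedback-precoding pipeline can be judged, since the learned system never sees the true channel.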
some uplink signal that they send back through the same channel since we're assuming a reciprocal tdd uh band here to the base station the base station would get this collection of uplink feedback signals from the ues and then from that directly try to do an implicit channel estimation and precoding so that's that's the setup here so it's kind of putting deep learning at the heart of how we would do a MIMO we're assuming you know for the purposes of this talk that this is narrow band so this means you we're you know using OFDM or some similar uh type of uh approach for handling the multi path and the inter-symbol interference but can we you know do a lot of the MIMO processing with um neural networks and and the nice thing here is we have a really we have really good baselines because we know what's optimum you know from information theory um and there's been some recent work also um on kind of kind of in a similar vein from a way use group and and some others in korea i'm looking at how to do you know a high-dimensional multi-dimensional MIMO system with uh feedback and precoding but ours is quite different in the sense that um we are you know doing tdd and also with multiple antennas at the ue which really makes the problem actually much harder than with a single antenna because it's a it's a full MIMO channel to each ue okay so um just to show kind of uh you know let's start with a very simple result so you know if we had single user MIMO um in this regime um with just two data streams going over this channel um we're using ray tracing data mostly here from from the nokia is produced um and we're trying to do this with uh you know not too many pilots so here just two main pilot symbols pilot like matrix uh by symbols i mean a whole vector of of symbols across the uh antennas um and so you know we have a true upper bound which would be you know if we know the true channel we could do a singular value decomposition um with water filling and so that gives us a true 
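That upper bound is straightforward to compute for a known channel. Below is a minimal NumPy sketch of SVD beamforming with water-filling power allocation; the channel dimensions and SNR are illustrative, not the study's actual settings.

```python
import numpy as np

def svd_waterfilling_rate(H, snr):
    """Single-user MIMO upper bound: SVD beamforming plus water-filling
    power allocation over the channel's eigenmodes (total power = 1)."""
    s = np.linalg.svd(H, compute_uv=False)   # singular values, descending
    g = s ** 2                               # eigenmode power gains
    # Find the water level mu such that sum(max(mu - 1/(snr*g), 0)) == 1,
    # dropping the weakest modes until all allocated powers are positive.
    for n in range(len(g), 0, -1):
        mu = (1 + np.sum(1 / (snr * g[:n]))) / n
        p = mu - 1 / (snr * g[:n])
        if p[-1] > 0:                        # all n strongest modes active
            break
    return float(np.sum(np.log2(1 + snr * g[:n] * p)))

# Illustrative 4 x 64 i.i.d. channel (16 BS antennas would work the same way)
rng = np.random.default_rng(1)
H = (rng.normal(size=(4, 64)) + 1j * rng.normal(size=(4, 64))) / np.sqrt(2)
print(svd_waterfilling_rate(H, snr=1.0), "bits/s/Hz")
```

Water-filling can only do better than spreading the power equally over the eigenmodes, which makes a convenient sanity check.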
We can see that our proposed method gets close to that. You could also get close with an MMSE-type channel estimate followed by SVD beamforming, and if you use an orthogonal pilot matrix, rather than optimal pilot symbols generated by knowing the true channel, there is a little gap, but it is not too bad, a few dB. Where our technique really shines, even in this simple single-user case, is when you send an insufficient number of pilots back to the base station: rather than the two you need for a rank-two channel, you send just one. Then the MMSE channel estimate does not do well, and you get quite a gap when you do SVD beamforming based on it, whereas our technique still hugs the upper bound pretty closely. It is able to do a kind of projection: because it is not doing an explicit channel estimation, just implicitly using the pilot observations compressed by the UE side to determine the downlink precoder, it can still track quite close to the upper bound, which is interesting. Now, if you do this with a statistical channel instead, the same thing does not hold, and that makes sense. What is happening here, and why we are able to get so close, is that we are learning the environment of the cell. This is the site-specific aspect: the ray tracing uses the structure of the cell, so the neural network is eventually able to learn that structure, and when it sees even a sparse set of pilots it has a much better idea of what the channel is. But if you go to a statistical channel, which is stochastic, there is not really any structure to learn.
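To make the closed-loop, implicit-estimation idea concrete, here is a shape-level sketch of the data flow with untrained placeholder weights. All dimensions and layer sizes are made up for illustration; the actual learned architecture is in the forthcoming paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions (illustrative only, not the paper's settings)
NT, NR, K = 64, 4, 2          # BS antennas, UE antennas, number of UEs
N_PILOT, N_FB = 2, 4          # downlink pilot symbols, uplink feedback symbols
N_STREAMS = 2                 # data streams per UE

def dense(x, w, b):
    """One fully connected layer with tanh activation."""
    return np.tanh(x @ w + b)

# Untrained placeholder weights; in the real system these are learned end to end.
W_ue = rng.normal(0, 0.1, (2 * NR * N_PILOT, N_FB))
b_ue = np.zeros(N_FB)
W_bs = rng.normal(0, 0.1, (K * N_FB, 2 * NT * N_STREAMS * K))
b_bs = np.zeros(2 * NT * N_STREAMS * K)

# Each UE observes downlink pilots through its channel and compresses them.
feedback = []
for k in range(K):
    H_k = (rng.normal(size=(NR, NT)) + 1j * rng.normal(size=(NR, NT))) / np.sqrt(2)
    pilots = rng.normal(size=(NT, N_PILOT))           # broadcast pilot matrix
    y = H_k @ pilots                                  # UE's pilot observation
    y_ri = np.concatenate([y.real.ravel(), y.imag.ravel()])
    feedback.append(dense(y_ri, W_ue, b_ue))          # small UE-side network

# The BS maps the stacked uplink feedback directly to per-UE precoders:
# implicit channel estimation, since no explicit channel estimate is formed.
z = np.concatenate(feedback)
out = dense(z, W_bs, b_bs)
V = (out[: len(out) // 2] + 1j * out[len(out) // 2:]).reshape(K, NT, N_STREAMS)
V /= np.linalg.norm(V)                                # total power constraint

print(V.shape)   # one NT x N_STREAMS precoder per UE
```

The point of the sketch is the interface, not the performance: the only thing crossing the air in each direction is a handful of symbols, and everything else lives in the two networks' weights.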
There is not much gain from learning in that case; I think that is intuitive, but we observe it. Now let us move on to multi-user MIMO, which is the mechanism by which we will really get major capacity in these mid-bands, because you can send to multiple users simultaneously even if each of them individually has a low-rank channel. Here we are sending two streams to each user. The first observation we make is that if we just extend the technique I showed you to multi-user MIMO with feedforward neural networks, it does okay: it is better than matched filtering, which just matches to the channel, like beamforming, but it is pretty far from block diagonalization. Block diagonalization with the true channel is not quite the optimum, but it is essentially the achievable optimum if you do not use an exotic precoder like dirty paper coding or successive interference cancellation. So there is a gap, and the question is how to close it. What we realized is that you need some mechanism for the precoder to learn a channel inversion. Block diagonalization is essentially a per-user channel inversion that orthogonalizes the users, and a feedforward neural net on its own is not capable of learning this inversion structure. If you instead build in a trainable structure that can learn those weights, that really is the ticket. This shows that to do machine learning in wireless, you need to understand the theory and what optimal structures are likely to look like; if you put those structures in, then you can learn the weights that are tuned to that specific environment.
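Block diagonalization itself, the per-user channel inversion that orthogonalizes the users, can be sketched in a few lines. Dimensions here are toy values and the true channels are assumed known, which is exactly what the learned system avoids.

```python
import numpy as np

def block_diagonalization(H_list, n_streams):
    """BD precoding: each user's precoder is confined to the null space of
    all *other* users' channels, then SVD-matched to that user's channel."""
    precoders = []
    for k, H_k in enumerate(H_list):
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        # Columns of Vh^H beyond the rank span the null space of H_others.
        _, _, Vh = np.linalg.svd(H_others)
        null = Vh.conj().T[:, H_others.shape[0]:]
        # Beamform within that null space toward user k's own channel.
        _, _, Vh_eff = np.linalg.svd(H_k @ null)
        precoders.append(null @ Vh_eff.conj().T[:, :n_streams])
    return precoders

rng = np.random.default_rng(2)
H = [(rng.normal(size=(4, 16)) + 1j * rng.normal(size=(4, 16))) / np.sqrt(2)
     for _ in range(2)]
V = block_diagonalization(H, n_streams=2)

# Leakage: user 0 sees essentially zero of the signal intended for user 1.
print(np.linalg.norm(H[0] @ V[1]))   # numerically ~0: users orthogonalized
```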
In that case we do get very close to the upper bound with our approach, and there is quite a gap without a learnable pseudo-inverse type structure; likewise, a normal channel estimate followed by block diagonalization, the green curve down here, again has quite a big gap. So deep learning does seem like a good approach, but again, you need a reason to do the deep learning, and here the reason is that you do not want to send enough pilots to fully learn the channel. You can learn enough about the channel from historical data that, with a deep learning approach, you do not need as much pilot information instantaneously; you can get by with a little bit of information while the neural network stores the overall structure. This is why, for big MIMO channels, people have proposed a variety of ways to do channel estimation more efficiently than linear methods. For this second topic we will have a journal paper with more results coming out in the next week or two, which we will put on arXiv, so if you are curious, please look for that. The last thing I am going to talk about today shifts gears to a different approach. We are going to be above the physical layer, looking at a whole network: a deployment in a city, say, where we want to optimize the base station parameters throughout the whole city so that they work together well. This was done with AT&T Labs. I know many of you are in Europe; AT&T is one of the two largest providers of cellular service in the United States and one of the largest in the world, and their labs happen to be in Austin, which is great, because we can collaborate with them very easily. We used their in-house tools for this work. We have some recent papers on it, and we are still working on additional things that I will preview today. What I will mainly talk about is a problem that is very simple to understand but very hard to solve. Each base station deployed in a cellular network has antenna panels, and you have to figure out how to configure the antenna elements on them. Think of this for a SISO situation, just for sectorization: you have these sectors, and you want each one to cover a certain amount of space angularly, in both the horizontal and vertical directions. How wide do you want the sector to be, and how vertically compressed? The more vertically compressed it is, the more antenna gain you get, but you lose the ability to cover a wider range in the vertical direction. Then there is the downtilt, the physical direction the sector is pointed in. These three things have a very large effect on the coverage region of the cell and the signal strength throughout it. What makes this really tricky is that how you configure a given sector affects the interference in all the neighboring sectors, specifically the nearby cell sites. So what you would like to do is maximize some function of both the downlink and the uplink SNRs, in your cell and in fact over the entire network.
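Those three parameters, horizontal beamwidth, vertical beamwidth, and downtilt, enter through the sector antenna pattern. A common parabolic model in the style of the 3GPP sector patterns (as in TR 36.814 / TR 38.901) is sketched below; the default beamwidths, tilt, and attenuation floors are illustrative values, not the study's settings.

```python
import numpy as np

def sector_gain_db(az_deg, el_deg, hbw=65.0, vbw=10.0, tilt=6.0,
                   am=30.0, sla_v=30.0):
    """3GPP-style parabolic sector pattern: gain relative to the peak, as a
    function of azimuth/elevation, given the horizontal beamwidth (hbw),
    vertical beamwidth (vbw), and electrical downtilt (tilt), in degrees.
    am and sla_v are the front-to-back and sidelobe attenuation floors."""
    a_h = -np.minimum(12.0 * (az_deg / hbw) ** 2, am)
    a_v = -np.minimum(12.0 * ((el_deg - tilt) / vbw) ** 2, sla_v)
    return -np.minimum(-(a_h + a_v), am)

# Narrowing the vertical beam raises boresight gain but punishes users
# outside the main lobe: exactly the trade-off described above.
print(sector_gain_db(0.0, 6.0))    # on boresight at the tilt angle: peak
print(sector_gain_db(0.0, 20.0))   # 14 degrees off the tilt: far down
```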
In this optimization problem we will look at both the downlink and the uplink. That is actually new: existing work mostly looks only at downlink optimization, and we will show that it is important to consider both. We will also optimize two things simultaneously. One is coverage, which we quantify as the fraction of users above an SNR threshold; that is what the little math here shows. The other is the sum of the logarithms of the user rates. The reason for the logarithm is that it is a fairer metric: maximizing the plain sum rate just biases you toward the nearby users rather than the edge users, whereas the sum of log-rates is a cell-wide fairness metric, and we will see that it does well over the whole distribution of SNR. So that is what we want to do. Why is this still a research problem in 2023? Because you cannot actually solve this problem: it is non-convex and combinatorial, and even for a moderately sized network of, say, 20 cells, there is effectively an infinite number of settings for all these different angles and degrees. Even if you quantize, you cannot do an exhaustive search. The way it is done now, for example in 3GPP, is to run an exhaustive search over a moderately sized network in which every base station is constrained to have the exact same setting, which is obviously not optimal, find the best common setting, and make that the suggested default. That is what we use as our baseline today: the cellular industry's suggested settings. And because those settings are not that great, there is a lot of hand-tuning: when operators deploy, they see a sector that is not working very well and tune it by hand.
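The two objectives can be written down in a few lines. The -6 dB threshold below is an assumed value for illustration, not the one used in the study.

```python
import numpy as np

def coverage(sinr_db, threshold_db=-6.0):
    """Fraction of users above an SNR threshold (the coverage objective)."""
    return float(np.mean(np.asarray(sinr_db) >= threshold_db))

def sum_log_rate(sinr_db):
    """Sum of log-rates (a proportional-fairness objective): unlike plain
    sum-rate, it cannot be maximized by favoring only the near-in users."""
    rates = np.log2(1 + 10 ** (np.asarray(sinr_db) / 10))
    return float(np.sum(np.log(rates)))

sinr = np.array([-8.0, -2.0, 5.0, 15.0, 30.0])   # hypothetical user SNRs
print(coverage(sinr))          # 0.8: four of the five users are covered
print(sum_log_rate(sinr))
```

Note how the log makes the edge matter: adding a dB to the weakest user moves `sum_log_rate` more than adding a dB to the strongest one.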
But hand-tuning one cell affects the neighbors; it is a huge problem, and it does not work well at all, so this is something really ripe for a data-driven approach. We tried a bunch of different things. When I first saw this problem, I thought it was a reinforcement learning problem: each base station is an agent that takes feedback from the users, sees what happened, takes some new action, observes the result, and explores the space. This does not work well, for a few reasons. One is that it is really unstable, because what you do in one cell affects the other cells, and at least for us, the reinforcement learning algorithms would not converge to a good solution. We tried various multi-agent reinforcement learning approaches and talked to reinforcement learning faculty and experts, and we eventually scratched that, although it could still be worth revisiting in the future. We tried a few other things, Bayesian optimization and so on, and settled on an approach I will briefly summarize: a Bayesian optimization combined with a genetic algorithm that evolves known decent settings into other possibly good settings. It is essentially an optimization algorithm for predicting which settings will be good, so that we do not have to try everything; we cannot, because there are far too many settings. It harnesses the best of both worlds: Bayesian optimization using Gaussian process regression, and a genetic algorithm that mutates and selects possibly good new settings to try. The model essentially weeds out candidate settings, and then we actually test the promising ones on AT&T's network-wide simulator. This is their in-house simulator; it took many engineer-years to develop, something we could never have built in an academic setting. Their 5G simulator models a large network, all the aspects of the stack and the network, and it is what they actually use for their own tuning. It takes about an hour to run something like 10 milliseconds of actual network time, so even in simulation we have to be judicious about how many settings we try. It was very cool to be able to use this: for us it is about as close to a real network as we could ever hope to have access to for this kind of experimentation. Here is a picture of one of the scenarios we looked at, a snapshot of macro cells, small cells, and UEs; the UEs are dropped randomly, and we average over many different UE drops for this network. We also have some baselines: we collaborated with Facebook, and Ryan Dreifuerst and Robert Heath have been working on this with both Bayesian optimization and a reinforcement learning approach. They could only do that for a very small network, basically because, as you scale up, Bayesian optimization has cubic complexity at each iteration: it stores the entire history and does a matrix inversion, which is cubic. Our approach, by contrast, has complexity that does not grow with time; it is exactly linear. As for actual performance, this slide shows the downlink, and you can see quite a large improvement over the 3GPP baseline and over the Bayesian optimization that took so long.
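The hybrid search loop just described can be caricatured in a few dozen lines: a Gaussian-process surrogate scores the candidates that a genetic mutation step proposes, and only the single most promising candidate is sent to the expensive simulator. Everything here is a toy: the "simulator" is a synthetic function standing in for the network simulator, and this GP retains the cubic cost that the actual method avoids.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulator(x):
    """Stand-in for the expensive network simulator: a black-box score of a
    vector of per-cell settings. Purely synthetic, for illustration only."""
    return -np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.cos(8 * x))

def gp_predict(X, y, Xq, ls=0.5, noise=1e-6):
    """Gaussian process regression with an RBF kernel (the surrogate)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * ls ** 2))
    K_inv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    KqX = k(Xq, X)
    mu = KqX @ K_inv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', KqX, K_inv, KqX)
    return mu, np.sqrt(np.maximum(var, 0.0))

DIM, EVALS = 6, 30                       # e.g. 3 cells x 2 settings, 30 "sims"
X = rng.uniform(0, 1, (5, DIM))          # a few random initial settings
y = np.array([simulator(x) for x in X])

for _ in range(EVALS):
    # Genetic step: mutate the best-known settings to propose candidates.
    parents = X[np.argsort(y)[-3:]]
    kids = np.clip(parents[rng.integers(0, 3, 40)]
                   + rng.normal(0, 0.1, (40, DIM)), 0, 1)
    # Bayesian step: the surrogate picks the most promising candidate via an
    # upper confidence bound, so only one candidate per round is "simulated".
    mu, sd = gp_predict(X, y, kids)
    best = kids[np.argmax(mu + 2.0 * sd)]
    X = np.vstack([X, best])
    y = np.append(y, simulator(best))

print(float(y.max()))                    # best score found under the budget
```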
The improvements hold over the entire distribution, from the edge users to the very best users, and especially the median users; they all gain a few dB. We also show that it is really important to optimize the uplink and downlink jointly, which is not typically done. People typically just look at the downlink, and that is actually really bad, because the uplink often ends up being the coverage-limiting direction in the network, since the UE does not have much transmit power. We show a massive increase, around 10 dB of coverage, versus a downlink-only optimization: in the uplink, a downlink-only optimization is terrible, and you lose a huge amount. If instead you optimize both jointly, you lose only a very small amount in each direction compared to perfectly optimizing the uplink or the downlink alone, so you get the best of both worlds from a joint optimization. That, I think, is one important finding of this work. In some recent work we have been extending this to more complex scenarios, where we can change not just the shape of the beams but the direction the sectors point in, and also do load balancing to force traffic onto small cells or millimeter-wave cells. Another thing we have been looking at is how to protect radars. In the mid-bands I talked about earlier, 7 to 15 GHz, or even in the upper C-band, there are existing radars: US military radars and other military radars. You can learn where they are without too much trouble, because they transmit radar signals, but then you need to protect them, and you can harness this exact same framework to protect a known incumbent. So now we not only want to optimize coverage and capacity for our users, but also minimize the interference caused to some military radar, say on a tower somewhere, and we show that this really works well. Our capacity, the middle plot here, still improves, and we still decrease our outage, so we get almost the same behavior as without considering the radar, while lowering the radar interference by about 20 dB, which is a huge amount. Again, this is a proof of concept, but it was presented to the upper brass at AT&T, because this is a big problem for them: they need to be able to promise the US government that they will not knock out these military radars if they deploy cellular networks in this spectrum, and this is one technique that can help accomplish that. Let me give a few parting remarks before we go to Q&A; these are just observations. I switched from doing mostly theory, stochastic geometry and information theory type research, to machine learning about six years ago, because I wanted to do something new and learn these tools. A few things we have observed, in our own research and others': at the link level, it is quite interesting that you can take out all these things we have been working on since Shannon, throw neural networks in instead, and if you train them right they do about as well. It is very interesting, but I do not think anyone is going to do it, because things like modulation, coding, and OFDM have been not only optimized in theory, where they are very close to optimal, such that at a pure theoretical level it is very hard to beat what we currently do on the link layer, but also perfected at the implementation level. Companies like Qualcomm and the base station vendors have this down to a science; these algorithms are squeezed into tiny cores that consume almost no power. Ripping them out and doing it with machine learning: people talk about that, but I just do not see it happening. Where I think machine learning will really be powerful is the kind of thing I showed today, the system-level problems where you do not know what is optimal, like the last example, or cannot do what is optimal, like the beam alignment case, where it takes forever to find the optimal beam, or where being optimal requires so much channel estimation that you would be sending almost entirely pilot symbols. That is where I think we can really gain from a data-driven approach, and the basic idea is that rather than doing things from scratch, as is done in 5G today, where almost every 10 milliseconds you essentially start over, you build up knowledge of what is happening, so you can make decisions that are adapted in a much more comprehensive way. That is where the gains I showed today come from. One thing I find interesting: in theoretical research, back when a lot of wireless ideas could be advanced with theory, you would have some cute idea, show a theoretical gain of maybe 3x, simulate it and get maybe 2x, and then go to implement it in a real system and hope for even a 20 percent gain. In machine learning it is actually the opposite: the more realistic you make the scenario, the more gain you will see from a machine learning technique. We saw, for example, that an i.i.d. Rayleigh channel, which is great for theory and for proofs, is useless for machine learning, because there is nothing to learn. You need that structure and that complexity; that is where machine learning can really get in and provide non-incremental gains. This means that for the next 10 or 20 years of wireless we need to work with really high-fidelity simulators, which likely means working even more closely with industry to test our ideas. That will not only make the ideas more practical, it is also how you show the gains: you show them on high-fidelity systems, where there is a large gap between a simple approach and a sophisticated one. I also find it encouraging that a lot of what we can do does not require really heavy machine learning; it can be done at the network edge, and, returning to the beginning of the talk, even at the UE. We will still do things in the cloud, where we can use heavy machine learning such as large language models, transformers, and deep generative models, maybe for data synthesis and generation, fault detection, things like that. But the real-time processing that is done constantly at the network edge and on the UE can be pretty simple, so it does not need to burn a lot of power or be especially hard to train. That is all I have today; I am happy to take any questions. Thanks a lot, Jeff, excellent talk. There are many questions and I will start reading them to you; you can also see them, and there is no particular order. First, Ruben is asking: in example one, what is the time scale at which the machine learning algorithm learns the directions in which to do the exhaustive search? Great question, Ruben. I assume you mean the probing beams that we send out over the whole cell periodically. The time scale we are imagining is quite slow. In the simplest case you could learn a single set of probing beams, averaged over all the UE locations in the cell, so it could be over days. The main goal of the probing beams is just to cover the places UEs tend to be with more energy, and to avoid the places where they are not: you avoid blocking objects, you avoid lakes, things like that. So this does not have to be done on a fast time scale, which I think is very important. You could then consider having a couple of different settings, say a morning setting and an afternoon setting, if the UE distributions differ, and you would get some gain that way, but for our baseline case we think this could be done with a single set averaged over the whole day, even over weeks. A very slow time scale is the idea. Ruben is also asking: in example three, are there any tunable parameters that set the trade-off between optimizing the uplink and the downlink? We are tuning exactly the same parameters for both. I have a longer version of this talk with examples of uplink settings, downlink settings, and how the joint ones differ, but unfortunately it is not very explainable; we do not fully know. We did have a parameter that sets the relative importance of the uplink and downlink, which we simply set to one half, but you could set it to weight the downlink or the uplink more. So we had that as a settable parameter, but we are not exactly sure why the joint optimization is so different from a downlink-only optimization; there seem to be some very important differences between the joint optimization and a downlink-only or uplink-only one. There is also a question from Pedram: have evolutionary algorithms been used in cellular networks in practice so far, and if yes, for what applications, and how do they handle the time-consuming aspect of the algorithm? Good question; I actually do not know the answer. Unfortunately, companies do not tell us which algorithms they use, and there is certainly nothing in the standards that has an evolutionary-type approach. Whether any companies are using one, I unfortunately do not know of any example. And there is an interesting scenario, rather than a question, from Pavel Kulakovsky about the beam alignment problem; he says he has had similar questions in the classroom. Why don't we use the following very simple approach: let the MIMO array at the base station measure the training signal coming from the user in the uplink channel, then use the complex-conjugate coefficients, a maximum ratio combining approach, for the signal transmitted to the user in the downlink channel, and we have perfect beamforming. Is this just because we have FDD everywhere? Good question. No, we have TDD; let us assume TDD and reciprocity. The problem is that it is a chicken-and-egg problem. Because each antenna element, at the base station and at the UE, is so small, the UE has to send a beamformed pilot, and it does not even know which direction to send it in. If it got lucky and sent it toward the base station, then the base station could listen to it, and if it managed to form an estimate, it could beamform back in the same direction. But there is insufficient gain on the UE side to do that with any reliability at all. That is why you have to take this probing approach, where you send down a lightly beamformed signal with a lot of spreading gain. I did not mention this much, but the spreading is also important for the wide beams: the UEs cannot even hear the wide beams directly because they are too faint; they only hear them by accumulating them over multiple symbols. They use something like a Zadoff-Chu sequence, or a similar long orthogonal sequence, and accumulate it until there is just enough energy to give a reading. So that is the reason: it is a chicken-and-egg problem, and you cannot even hear each other until you have some beam alignment. Thank you, Jeff. There is another question, from Amir Shuhail: will the digital twin simulation with NVIDIA be the first testing of its kind? To the extent I understand the question, I think it will be more or less the first of its kind. I do not know what every other company is doing; I know Qualcomm and others are working on very high-fidelity simulators, but NVIDIA's vision, though they have not released this to the public yet, is to have essentially a digital twin of a wireless network running in the cloud, with an amount of complexity that is unthinkable today, yet running in near real time once it is all tuned and optimized on their big GPU servers. That is the idea, and my students and I are kind of the guinea pigs right now, working with them and their engineers to kick the tires on this from an academic point of view.
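Going back to the spreading-gain point in the beam alignment answer: the long orthogonal sequences mentioned there, such as Zadoff-Chu, have constant amplitude and ideal cyclic autocorrelation, which is what makes coherent accumulation of a faint probing beam work. A quick sketch:

```python
import numpy as np

def zadoff_chu(root, length):
    """Root Zadoff-Chu sequence (odd length): constant envelope and zero
    cyclic autocorrelation at all nonzero shifts when gcd(root, length)=1."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

zc = zadoff_chu(root=25, length=63)
print(np.allclose(np.abs(zc), 1.0))      # constant envelope: True

# Coherent accumulation: a faint copy buried in noise (about -9 dB per-chip
# SNR here) is recovered by correlating against the known sequence, giving
# roughly length-fold (~18 dB for 63 chips) processing gain.
rng = np.random.default_rng(4)
rx = 0.05 * zc + 0.1 * (rng.normal(size=63) + 1j * rng.normal(size=63))
amp_est = np.vdot(zc, rx) / 63
print(abs(amp_est))   # roughly 0.05: the amplitude survives despite the noise
```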
also working with a few companies select companies on this um you know like the Nokia's or the Ericsson's I know they're working with Ericsson on this but yeah so we're kind of trying to do an academic proof of concept of our beam alignment approach with them there is also a question this is a broad question by the way from Amir uh he's asking about the 6g core versus the 5g core and uh I'm please go ahead before I say something okay go ahead unfortunately I'm not a much of a core expert network obviously there is a lot of evolution going on in the in the core um you know but I I I'm not the right person to answer I can I can maybe okay yeah but you know definitely 6g core will be this similar to 5g it will be heavily based on stn software defined networking and network function virtualization and much more usage of automatic network slicing techniques so the so these are the major issues uh and then plus you will have these uh cell-free massive MIMA technologies heavily used on also in 6g and the RAN on the RAN side so core is as I told you it will be heavily based on stn NFE so I have another question uh I think I I covered all the other questions from outside so uh you know you presented your results mostly for the lower frequency bands you know mostly like for 5g and so when we go to the 6g as you know that we will be talking about subterrahertz and even you know hopefully terrahertz bands and so can we use your solutions ideas straightforward or there will be new challenges when we go to higher frequency bands yeah well I mean everything's going to be harder you know in those in those bands as far as you know the beam alignment problem right because you need you know much bigger arrays even more directional beams so I think you know I don't see any reason my first contribution couldn't be scaled up to that scenario I think you'll need to do you'll even the needle be even greater at the terrahertz bands because the beam alignment problem will be even harder so 
I think yeah because there will be very single pencil sharp beams you know so it should be very very challenging right yeah so I think you'll need something like the first contribution um I don't know I don't think my second the the latter two ones I talked about would be very applicable to above 100 gigahertz okay so thank you Jeff and I assume there are no other questions here oh Reuben again jumped in he cannot let it go no no that's a good question actually so this is a common question so let me let me answer Reuben's question it's basically on the time scale and on mobility and the way for again for the beam alignment one the way it should work is that you should train it over so so many UE positions that mobility doesn't matter so if you know a UE moves they're still just in another position that you've already trained the probing beams for so that they would just have different feedback on the same probing beams so you don't need to adapt this on the time scale mobility at all right if you had to do that it wouldn't work I don't know that the other participants can read the Q&A here the question from Reuben was about the example one again the time scale can be slow or as I understand the algorithm is primarily learning the geometry of the environment around it is the same approach applicable for faster time scale adaptation for example for the environment with moving objects that influence the optimal beam search directions and again thanks to Jeff for the insightful talk so Jeff answered that right so there are no more questions and I greatly appreciate your great talk and also the running the Q&A session very smoothly thanks a lot to Jeff again and I asked my colleague Alessia to take over and ask you life lesson questions thank you Jeff see you all right now for the hard questions thank you so much Ian for moderating this session and thank you Professor Andrews for your very insightful talk so now it's time to move to the Wisdom Corner live life lessons 
which is based on the idea of giving a unique and special angle to this series of webinars, adding a personal touch. I will start with my first question: what is a hard-earned life lesson, or failure, that you would like to share with us today that might perhaps be helpful for someone attending this webinar?

Wow. Just so everyone knows, I did not get these questions ahead of time, so these are all spontaneous. Well, if I really think about that, I guess I never really saw myself winding up in this position, to be honest, as a professor at a top university where people would care about my research. At each stage of my student career I felt it was a struggle, and at many stages, even going back to when I was a teenager, I had to find a new level, usually by working hard or by finding a mentor who could help me understand how to study better or learn better, really just learning how to focus my energy and my concentration. I think that's one of the hardest things we all face, and certainly for me to this day my biggest barrier is my own lack of ability to focus and concentrate; there are so many distractions in life, even more so now than when I was a child. So for the folks here I would say: find something that works for you that allows you to focus and concentrate, to get away from the busyness and clutter and distractions of the world. That's what I was able to do at several different stages of my career. Honestly, I thought I wasn't going to do very well in college; I almost thought I was going to fail out, and then I wound up being top of my class. Then I got to Stanford and thought, everyone here is so smart, how am I ever going to do well? My first year was very tough and I got hammered on all the exams, and then I wound up getting straight A's my last four years and doing a good PhD. So I guess that sort of perseverance and resilience and not giving up is something that has served me well, and it serves me now: when things are really hard, I have that belief that I can just suck it up, do what I need to do, and it'll be okay.

Great, thank you. My second question: could you tell us one of the most tangible contributions that you have made in your career that had a direct impact on your life, professional or personal, and maybe on others' lives, that you are most proud of?

Wow. Yeah, I guess I'd break this into two categories: one is actual technical contributions I made, with my mind or working closely with my students, and the others are things I've done that serve a broader community. For the first one, the answer for me is not so hard, because I spent about ten years working on a new area at the time called stochastic geometry that no one really seemed to appreciate or understand when I started working on it, but I could see that it was going to be important. Eventually I made a couple of contributions to the communications field using that tool set that I think have stood the test of time and were fairly important. But maybe you're looking for a little softer answer, and I don't know how to give a really specific one. Ian touched on it a little bit when introducing me: I'm really proud of my students. When I first started out as a professor I was 28 years old and I was often mistaken for a student myself, so for many years I honestly felt like an imposter with this professor title
in front of my name. Here I was working with students who were in many cases almost my age, in their mid-to-late twenties; some of my early PhD students, like Juan Choi, who is a professor now at Seoul National, were older than me. So I think just working with these students, treating them with respect, trying to nurture them with my own curiosity and passion, and not pretending that I knew everything was an approach that worked well for me and for them. A lot of them have gone on to be wildly successful, really leaders, both in academia and at major companies, where several have made really important contributions to LTE and 5G. I don't know exactly what the common thread was, but I think it was that kind of curiosity: really trying to understand things rather than just do things, trying to look below the surface. I think that has served them well in their careers, and I still have a good relationship with almost all of them because of it.

Great, thank you. And which fields and topics would you recommend students study today?

Well, limiting this to, broadly speaking, the telecommunications area: something we have really failed at is providing global coverage, particularly to the under-connected. It is very expensive to provide 5G coverage to rural areas or less developed areas. So things that bridge that gap, either bringing down the cost of the terrestrial approaches radically, or the LEO approaches being pursued by SpaceX and Amazon and others, I think are very exciting topics for the next 10 to 20 years. And obviously machine learning is interesting. Machine learning is blowing up, perhaps an overhyped area attracting huge amounts of talent, but if you can learn machine learning along with communications or networking and understand both, that is valuable: there are not so many people who understand both, and you have to in order to do the kinds of things I was talking about in my talk today. So I think that's another good area. But there are lots of other interesting things as well, like some of the things Ian is working on, for example in terahertz or in molecular and biological communication; those are also quite interesting, I just don't know as much about them.

All right. Could you tell us, actually: how will, in your opinion, generative AI, like ChatGPT or the new Gemini, impact the future of research, and also everyone's life in general?

Yeah, this is something we're all thinking about, whether you're a high school teacher, a college professor or a researcher. We're all trying to figure this out and we discuss it a lot. It's going to be very interesting to see how it all shakes out. Obviously it's a very powerful and disruptive tool, and we're probably just at the beginning of what it can do. So I think students today need to learn how to use this tool, just like they learned how to use scientific calculators 40 years ago and learned how to use MATLAB 20 years ago. It's a new tool that can certainly increase an engineer's productivity. I don't use it that much yet; it's on my list of homework assignments, for when I have some free time, to play with it more. It certainly can be used to write code or polish writing. So I'm
still really not very good at using it, but it does seem that it will have an impact, and I hope we can use it in a way that improves and amplifies innovation rather than stifling or replacing human ingenuity, because it doesn't seem like it will be that good at those things.

Okay, thanks. Actually, for my last question, I would like to ask if there is a motto, an aphorism, a book, a movie, or a piece of art or music that describes you best that you would like to share with us before closing this webinar today.

Wow, that's a very interesting question. Ian, please join us, turn on your camera. Yeah, I don't know how to give an answer to that one that doesn't sound intensely narcissistic.

Maybe I can ask the question differently, because this happens to me: when you are sad, which music do you listen to, and when you are joyful, which music? For example, when I need energy I listen to techno music, and when I want to be calm I listen to classical, something like that.

That's actually true; a little-known fact about me is that I was a techno DJ in the 90s, very early on, on the personal side. So I still like good electronic music; it definitely connects me with some part of my core, and when it's done well I really think it's still fundamental and interesting. So I guess maybe a song that I really connect with, and that describes a little bit of my spirit, is a song called Belfast by Orbital, really a techno artist. It carries a certain hopefulness and an arc of joyfulness that I relate to strongly.

True, I heard this from another friend of mine too. The techno was the best in the 90s; I still have those CDs, by the way. Anyhow, okay.

Thank you, thank you so much for being generous, for sharing personal advice and experiences and even music. Thanks also to Ian for his support. So, Ian, we conclude our series for this year.

Yes, thank you, Alessia, and thanks a lot to Jeff. It really was a pleasure listening to you and talking to you, and hopefully we'll see each other again somewhere, right? The conferences are not attractive anymore, really; I don't know, it looks like you're also at home, right?

I'll make an effort to reach out. It sounds like you move around more than me, so hopefully I can bump into you someplace.

We'll invite you to Abu Dhabi for one of these events.

That would be great; I would love to try to make that if I could.

Yeah, sure. Okay, thanks to the audience. We will continue next season, starting in January; we have again lined up many top scientists, so we look forward to launching another season. Thank you again, I wish you all happy holidays and a super 2024. Thank you, thank you, thank you everybody, bye bye.

Thanks, Alessia. Thanks, Ian. Thanks, everyone.

Thank you, thank you.