So I will go right into the presentation because we don't have much time. It is titled "Finite Element Simulation and Analysis with Scilab". The idea is, as we have all seen yesterday as well as today, that Scilab is a general-purpose computational software. We can treat it this way, and among the areas where computation is heavily involved we have a whole spectrum of applications; the finite element method is one such.

Proceeding with that, let's have a brief overview of engineering disciplines and finite elements. Applications of the finite element method can be classified on the basis of the physics of the problem. We have electrical and electromagnetic applications, which is a relatively recent application area compared to civil and mechanical, but this too has picked up; it is almost 20 years now that people have been using finite elements heavily in electrical and electromagnetic applications. Then we have mechanical, thermal, fluid, chemical, and coupled applications. By coupled we mean multiphysics systems that combine all these physical phenomena; we will look at that in a bit more detail.

The other issue is how to model these. We have approaches that treat them as ODEs and take the analytical route. Then we have the numerical-computational route, which is obviously the choice in most practical cases, because with realistic geometries we can't really write down analytical solutions; it's really difficult. It doesn't need much elaboration that numerical methods like FEM, FDM, and the boundary element method are the methods of choice, and among them the finite element method tends to be preferred by many because of its flexibility with geometries. Finite differences tend to constrain us to certain nice kinds of geometries, but that restriction is lifted by the finite element method.
We do have quite a significant range of finite element software available, which testifies to its popularity. But then there is an issue common to FEM and FDM (not so much to BEM): the mathematical model ends up having lakhs of variables, hundreds of thousands of them. So how do we reduce the computational cost that arises just because of the size of the problem? Model reduction is one such area, which I tend to classify within the mathematical-modelling part.

Then of course there are the application areas: analysis, design, optimization, and control of such systems. The common thing running through all these systems is that they are governed by partial differential equations, not ODEs. So even classical control theory cannot be readily applied to these cases; we either need to develop control theory on the basis of PDEs rather than state-space or transfer-function models, or we have to do something else. And analysis, design, optimization, and to some extent control as well all involve repetitive finite element computations. It's not that we have to solve a system of a lakh of variables once; we have to do it repetitively, maybe dozens of times.

That is the background. Focusing now on the electromagnetic applications, since we are in the electrical engineering department, we have magnetic bearings, levitation, braking, damping, electromagnetic forming processes, where we form devices or components through electromagnetic forces rather than by hammering or forging, and induction heating. All these application areas are industrially hot areas.
They are industrially crucial areas, all of which require the finite element method to some extent.

Then we have the multiphysics problem. The slide shows a schematic of how these physics may be intertwined in realistic systems. We can't really compartmentalize things and say we are talking only of mechanical systems or only of electrical systems. Whenever we try to do a realistic analysis or simulation of any system, we encounter things like this: all sorts of physical phenomena interacting with each other, and the simulation has to take this coupling into account. In some cases it may be possible to decouple, solving an electromagnetic system first and then a thermal system in an input-output arrangement, where the output of the electromagnetic module goes in as the input to the heat module. But that is not always possible, because the output also affects the input: if we have motion, the motion itself affects the electromagnetic field, so we can't decouple. That doubles the size of the system and brings in additional issues, such as differing time constants; one subsystem is very fast, another is very slow, so time integration becomes delicate. All these things crop up.

Among the general computational problems, I would like to draw your attention to this one: analysis of solutions with respect to parameters. Normally, people have been using finite elements just to get the solutions. But what we often really need is to study the solution's behaviour, the way it is affected by certain parameters of the problem. That is what we actually do in design and optimization: we change certain problem parameters, repeatedly compute the solution again and again, and see how it is affected.
So the real purpose is not just getting the solutions, but analyzing the solutions with respect to certain parameters, which is what design, optimization, and inverse problems in a more generic sense require.

The big question then is: why use Scilab? We have a host of good, highly efficient, well-packaged commercial software. But they are highly costly, and much of that cost, it should be noted, is not due to the computational part. The computational part is relatively easy and implementable with any general-purpose computational software; that's the whole idea of this presentation. It is the packaging, the pre-processing and post-processing, the user-friendliness, that really sells, that really makes people buy it. It is relatively easy to come up with the core, and since Scilab essentially is that core, why not take the real computational part from Scilab?

The advantages are obvious. We have openness: access to all the intermediate stages, which is often not available in commercial software. We don't often get access to the intermediate data the program generates; we get the solution. But if you are researching the finite element method itself, you need that access; you are not just using it as a black box where you plug in some variables and get some solution. The other thing that is good about using Scilab for finite elements is that it permits me to think, formulate my algorithms, and code them at the matrix level. I really don't need to go on writing do-loops, something that is very essential in finite element work. And sparse matrix handling and solution are also readily sitting there for me. So that gave rise to the motivation for this work.
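The matrix-level style mentioned here can be illustrated with a small sketch. This is not the speaker's original Scilab code; it is a hypothetical Python/SciPy stand-in showing assembly of a sparse stiffness matrix for the Laplace operator on a triangular mesh, looping over elements rather than matrix entries and leaving the sparse bookkeeping to the library:

```python
# Hypothetical illustration (the original work was in Scilab): assembling the
# global stiffness matrix of the Laplace operator on a triangular mesh.
import numpy as np
import scipy.sparse as sp

def assemble_stiffness(xy, tri):
    """xy: (n_nodes, 2) node coordinates; tri: (n_elems, 3) node indices."""
    n = xy.shape[0]
    rows, cols, vals = [], [], []
    for elem in tri:                                # loop over elements only
        p = xy[elem]                                # 3x2 corner coordinates
        d1, d2 = p[1] - p[0], p[2] - p[0]
        area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
        # Gradients of the three linear (hat) basis functions on this triangle
        e = np.vstack([p[1] - p[2], p[2] - p[0], p[0] - p[1]])
        grads = np.column_stack([e[:, 1], -e[:, 0]]) / (2 * area)
        Ke = area * grads @ grads.T                 # 3x3 element matrix
        for a in range(3):
            for b in range(3):
                rows.append(elem[a]); cols.append(elem[b]); vals.append(Ke[a, b])
    # Duplicate (row, col) pairs are summed when the COO matrix is converted
    return sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
```

The point is exactly the one made in the talk: the algorithm is expressed in terms of whole matrices and index sets, and the sparse machinery comes with the environment.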
I am happy to mention that much of this work was done right here at IIT Bombay during my doctoral work, and the whole of it was done in Scilab. The entire PhD was completed in Scilab without any code in any other language, just to show that this is possible.

The aim was: can we characterize finite element solutions instead of just solving them the conventional way? Can we do some analysis on them, study the model structure, and develop a parametric expression? Can we identify the dependence of the finite element solution on those parameters? That requires really going into the finite element model and not just using it through some software; I had to have access to the matrices sitting inside, which again brings us to a framework like Scilab. Based on that, we saw that we could reduce the computational effort and come up with better algorithms.

For example, we started the work with an electrostatic problem on a composite domain with two subdomains, omega 1 and omega 2. The permittivity of omega 2 was varying, and the whole thing had to be solved repeatedly for different values of epsilon 2, the permittivity of omega 2. It is not a simple linear problem that I can just scale, because the parameter varies over only a subdomain. So the only way possible until now was to go on reconstructing the finite element model and solving it again and again. The governing equations are the basic Poisson-Laplace equations, and we have the energy functional as usual. We proceed with the conventional finite element formulation; for those who are not conversant with it, it suffices to say that we get a model with a certain structure in the coefficient matrices on either side.
For that we could come up with a general expression of the solution that led us to a much smaller system. The matrices M21 and M22 here are much, much smaller than the original finite element model: their size is precisely the number of nodes on the interface of omega 1 and omega 2, rather than the total number of nodes in the union of omega 1 and omega 2. That is the major achievement in this particular case.

The salient features that made this possible were these: we have a repetitive computation with varying permittivity; the variation of the finite element model was structured, not random, which we could identify and exploit using Scilab; a general expression of the final solution could be obtained; and that leads to much smaller models.

This is an example result. The mesh, I should say, was not done through Scilab; I got it from a meshing program freely available on the net. So this didn't require writing any code; we could just define the geometry and get the mesh, also from free software. We have a 2D body here, and if you look hard you will identify a square area around the hole in the centre of the mesh; that is omega 2 inside, and the outside is omega 1. The hole is a real hole, so we have a boundary there. We had to get the potential over this domain for various values of epsilon 2 within this square. And we got these results: when the ratio between epsilon 2 and epsilon 1, the permittivities of omega 2 and omega 1, is one, you cannot distinguish between the two materials, which is expected; we have a smooth gradient. These graphics were also generated through Scilab's normal graphics commands.
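The reduction described here can be sketched abstractly. This is a hypothetical illustration, not the actual formulation from the thesis: assume the system splits as (K1 + eps2*K2) x = f, with K1 assembled over omega 1 and K2 over omega 2 (with unit permittivity), and condense both interiors onto the interface once. The eps2 dependence then survives only as a scalar factor on one precomputed interface matrix, so each new eps2 costs a single small solve:

```python
# Hypothetical sketch of interface condensation for a parameter that varies
# over one subdomain only. Index sets: i1 = interior of omega1, it = interface
# nodes, i2 = interior of omega2 (in practice these come from the mesh).
import numpy as np

def condense(K1, K2, f, i1, it, i2):
    """Precompute the eps2-independent pieces of the interface system."""
    A = K1[np.ix_(i1, i1)]; B = K1[np.ix_(i1, it)]
    C = K2[np.ix_(i2, i2)]; D = K2[np.ix_(i2, it)]
    S1 = K1[np.ix_(it, it)] - B.T @ np.linalg.solve(A, B)
    S2 = K2[np.ix_(it, it)] - D.T @ np.linalg.solve(C, D)  # eps2 cancels here
    g = f[it] - B.T @ np.linalg.solve(A, f[i1]) - D.T @ np.linalg.solve(C, f[i2])
    return S1, S2, g

def interface_solution(S1, S2, g, eps2):
    """Solve only the small interface system for a new value of eps2."""
    return np.linalg.solve(S1 + eps2 * S2, g)
```

The cancellation in S2 is the structural fact the talk refers to: because eps2 multiplies every omega-2 block, the Schur complement of the omega-2 interior depends on eps2 only linearly, so the interface system is S1 + eps2*S2 with both matrices computed once.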
They are not part of any Scilab post-processing module; they are just the normal graphics. A passing mention in this regard: there is now at least some initial finite element software being developed for Scilab, but when this work was done it was not available, so the whole finite element code had to be written by myself in the Scilab language, and the graphics are also from Scilab.

As we change the permittivity of the inside area, omega 2, we can identify these regions. Here the ratio of the inside permittivity to the outside is 5, and we can identify the difference in the potential gradient between the two. When you make it 20, it's even sharper. These things are clearly visible.

The next thing we went to was a time-harmonic eddy current problem, which means working in complex quantities: because the problem is time-harmonic, the equation and all quantities are now complex. We have this equation, a more generalized version of the Poisson equation. In this case we don't vary only the permittivity; we have three parameters to change. We need to repeatedly compute the electromagnetic field over the whole area, where this is a plate and this is a bar carrying current, for various values of the exciting frequency, the permeability mu of the bar, and the conductivity sigma of the bar. Here also we could do the same kind of thing. Please note the matrix structure that we could decipher in the finite element coefficient matrices; it really came in handy in this analysis, because we could pick up entire rows and columns, entire sub-matrices, and work with them very easily, which we couldn't have done without something like this. With a similar computational strategy, here also we could come up with a much smaller model.
Gamma is the product of the frequency and the conductivity, and nu is the inverse of the permeability. We get a much smaller system, whose size in this case is the number of nodes in the bar region only, and it is easy to appreciate that the number of nodes in the bar region is considerably smaller than the total number of nodes in the entire domain. So this reduced the whole problem significantly.

These are, once again, simulation results drawn through Scilab's standard plotting commands. As we vary the conductivity of the plate, we can see the skin effect: the magnetic field, initially spread throughout, is progressively expelled as we increase the conductivity. This comes out clearly both from the computation and from the graphics.

The third problem extends this to a circuit coupled with these fields. This region has to be solved through finite elements; these three bars are coupled to each other through the magnetic field, so we can't make lumped-parameter approximations, or rather we don't want to, in order to preserve accuracy. We want a finite element analysis of this part of the domain, coupled with a circuit, and that circuit also has nonlinear elements; it is a triggered circuit. The analysis was done along similar lines, and again we got a very small system. This one involved time stepping, because it was not time-harmonic; it was a triggered circuit, and we had to do a proper transient simulation. So we used backward differences on the time axis and finite elements to discretize space. This also significantly reduces the size of the FE model.
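The transient scheme just described, backward differences in time with the heavy work hoisted out of the loop, follows a standard pattern; here is a hypothetical sketch (M, K, and the source f are generic stand-ins, not the talk's actual coupled field-circuit model). With a fixed step dt, the matrix M/dt + K is the same at every step, so it is factorized once before the loop:

```python
# Hypothetical sketch: backward-Euler time stepping with the factorization
# of the (constant) system matrix precomputed outside the time loop.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def transient_solve(M, K, f_of_t, a0, dt, n_steps):
    """Solve M*da/dt + K*a = f(t) with backward differences."""
    lu = lu_factor(M / dt + K)          # heavy work, done once
    a = a0.copy()
    history = [a.copy()]
    for n in range(1, n_steps + 1):
        rhs = M @ a / dt + f_of_t(n * dt)
        a = lu_solve(lu, rhs)           # cheap triangular solves per step
        history.append(a.copy())
    return history
```

Combined with a model reduced to a handful of unknowns, as in the talk, the per-step cost becomes negligible compared to re-assembling and re-solving the full system at every step.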
From the original size of 2989, the number of nodes in the whole domain, we could come down to a linear system of size only 27 to be solved at each time step. So if I have 500 time steps, I get this saving 500 times over; it reduced the computation time significantly. This was achieved because much of the computationally heavy work, which was repetitive in nature but, as we could identify, not really essential inside the loop, could be shifted out of the time-stepping loop to a sort of pre-computation stage. Once that is done and we have the smaller system, we go into the time-stepping stage.

We also have certain variations: we can compute the values only at particular nodes. In conventional software the field is solved over the entire domain; even if I need it only over the three-bar region, I have to compute it for the whole domain nevertheless. This method allowed us to compute it only over a particular area, and then we could easily couple it to the external circuit. And we have the plots, once again from Scilab, for the currents and voltages in one of the three bars and for the entire coupled system.

Now let us wrap up. What are the issues with finite elements in Scilab? From the Matlab family we have FEMLAB, which is compatible with Matlab and has all its plus points and all its minus points: it is good, well done, user-friendly, easy to use, catchy, and costly. As its counterpart on the Scilab side, we have FreeFem coming up; FreeFem is, again, trying to play the role of FEMLAB.
So what are the desired improvements? Although I myself did not use FreeFem much, because it was not available at the time this work was done (the work ran almost parallel to what was going on in the FreeFem group), the first need is an easier and more flexible pre-processor. The pre-processor is one of the bottlenecks for any such software: how easy it is to feed in the geometry, and how easy it is to mesh it. The solution stage that follows is relatively simpler, because in Scilab we have a whole plethora of iterative solvers, and with the kind of algorithms proposed in this work, that part is easily doable in Scilab. So the gap is not in the middle stage; the gap is in the pre- and post-processing stages.

For three-dimensional geometries, it would be nice if we could directly import CAD data. I am not sure whether that can already be done in FreeFem; I haven't heard of three-dimensional geometries being used in FreeFem. If we can do that, it would be really nice. Otherwise, even provisions for importing data from CAD, or from other finite element software that can do the meshing, would help. In my case, after using the freely available meshing software, I also tried getting the mesh from standard packages such as ANSYS: just get the mesh from the standard software and then proceed to Scilab for the rest.

We could of course have a better and wider choice of iterative solvers, the entire conjugate gradient family and others, which would help us. And it is crucial to improve the integration of the post-processing part. By integration, I mean connecting it with the secondary variables and the other quantities I may want to derive from the solution.
It would be really nice if we could make the packaging good on these two fronts, the pre-processing and the post-processing. As part of post-processing, what I really mean more specifically is easy coupling with other software that is more or less standard, because it is seldom that a finite element analysis is done in isolation. It is almost always either taking data from something else or giving data to something else. So there should be easy compatibility with circuit simulators such as PSpice, or with mechanical and structural software.

To conclude with some remarks: we have problems, that's for sure, nobody is doubting that. But problems are also opportunities. We have problems with Scilab, and those are opportunities for contributions. For example, I am working in finite elements; somebody else is working in some other area. We know the problems with Scilab in our particular areas, but that is also our expertise: we can solve those problems in those areas because we have worked in them. And there is nothing that stops me; it being a community, not a company, we can always contribute. We can also have more community interaction, because it often happens that we really don't know what some other Scilab user somewhere else is doing. A more active networking of users would, in my opinion, accelerate development: we are actually developing things, but we may not always be sharing them, and this would facilitate more sharing. And as I said, if we have the computational code, pre-processing and post-processing are not much of an issue really; it is an issue of sitting down and doing some GUIs.
And if that is done, then all those application areas mentioned in the first one or two slides stand to benefit, because the finite element method is one of the pillars of computational methods used in the engineering sciences. With that, I would like to conclude my presentation.