So, this part is about the giant components of directed networks: the giant strongly connected component, the in- and out-components, and the tendrils and tubes attached to them. [Much of this passage is inaudible in the recording.] And maybe it's not well visible, but these authors (I think it was about the network obtained from the Alexa crawl, which now doesn't exist) found a strongly connected component of about 56 million vertices, and in- and out-components of about 44 million vertices each. [inaudible] But nonetheless, let me return to the first picture and try to understand how these components are organized.
Or, better to say, how the in-component, the out-component, and the tendrils and tubes are organized, how they are interconnected, in different layers and in different networks. [partially inaudible] So I wanted to understand the organization and the statistics of these structures. The idea of the algorithm is very simple, whatever representation you choose. Take your network and take a vertex which belongs to the giant strongly connected component. Then find all vertices reachable from this vertex, and all vertices from which this vertex is reachable. The intersection of these two sets is the strongly connected component. What remains of the forward set is the out-component, and what remains of the backward set is the in-component; from the rest you can then extract the in- and out-tendrils and the overlapping tubes. So it's a very easy algorithm. And you can apply it directly to a network, and you can combine it with message-passing calculations; we made it in both ways. So here are the pictures showing the sizes of the in- and out-components and of the tendrils, for three networks: [inaudible], and a peer-to-peer network.
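Cleaned up, the decomposition just described amounts to two breadth-first searches from a seed vertex inside the giant strongly connected component. Below is a minimal sketch in Python; it is my own reconstruction for illustration, and the graph representation and function names are assumptions, not the speaker's actual code.

```python
# Sketch of the bow-tie-style decomposition of a directed graph.
# My own minimal reconstruction; names and representation are assumptions.

from collections import deque

def reachable(adj, start):
    """All vertices reachable from `start` by BFS over adjacency dict `adj`."""
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def bowtie(vertices, edges, seed):
    """Split a directed graph into SCC / IN / OUT / rest, given a vertex
    `seed` known to lie in the giant strongly connected component."""
    fwd, bwd = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        bwd.setdefault(v, []).append(u)
    out_set = reachable(fwd, seed)   # everything the seed can reach
    in_set = reachable(bwd, seed)    # everything that can reach the seed
    scc = out_set & in_set           # intersection = the SCC of the seed
    return {
        "SCC": scc,
        "IN": in_set - scc,          # reaches the SCC, not reached back
        "OUT": out_set - scc,        # reached from the SCC, cannot return
        "REST": set(vertices) - (in_set | out_set),  # tendrils, tubes, ...
    }
```

The sketch only shows the set algebra; combining it with message-passing calculations, as mentioned in the talk, is a separate step.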
So we measured, for these networks, the sizes of the components: how large the in- and out-components and the tendrils are. [partially inaudible] And here you see several kinds of these structures attached to the giant strongly connected component. Then there is a second exercise: percolation. You keep each vertex with probability p and remove a fraction 1 minus p of vertices uniformly at random; p is the control parameter. One of the networks we used was the Gnutella peer-to-peer network. And when we approach the threshold, the point of birth of the giant directed component, you see what happens to the number of tendrils. [partially inaudible] So this was about the structure, the organization, and the statistics of directed networks. And here I pass to the second part of my talk, on simplicial complexes and manifolds: how we modelled them, and how we generated manifolds using ideas from complex networks. So, what is it? If you look at the graph of a d-dimensional simplex, it is a (d+1)-clique: a d-simplex is the (d+1)-clique graph.
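The percolation experiment described a moment ago (keep each vertex with probability p, remove the rest, watch the giant component) can be sketched as follows. This is my own toy illustration; `percolate` and the naive SCC search are assumptions, and real measurements would use Tarjan's linear-time SCC algorithm.

```python
# Toy site-percolation experiment on a directed graph: retain each vertex
# with probability p and measure the largest strongly connected component.
# My own sketch, not the speaker's procedure.

import random

def largest_scc_size(vertices, edges):
    """Largest SCC size via naive pairwise reachability (demo only)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            for w in adj.get(stack.pop(), ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    r = {v: reach(v) for v in vertices}
    best = 0
    for v in vertices:
        scc = {u for u in r[v] if v in r[u]}
        best = max(best, len(scc))
    return best

def percolate(vertices, edges, p, rng):
    """Retain each vertex independently with probability p."""
    kept = {v for v in vertices if rng.random() < p}
    kept_edges = [(u, v) for u, v in edges if u in kept and v in kept]
    return kept, kept_edges

# A directed ring: one big SCC when intact, shattered by any removal.
n = 20
verts = list(range(n))
ring = [(i, (i + 1) % n) for i in range(n)]
```

Sweeping p from 0 to 1 and plotting the largest SCC size then locates the percolation threshold for the network at hand.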
But most of our results were about surfaces and their triangulations. A surface is simply a particular case of a manifold; this sounds pathetic, but it's right. And a triangulation is a particular case of a planar graph, but you understand that it is a very strong constraint that it consists only of triangles: no squares, nothing more. And we will see that this changes the dimension of these objects very strongly, because from quantum gravity it is well known that typical random planar graphs are four-dimensional; I mean that their Hausdorff dimension is typically four. And here it will not be the case. So the first work in this direction was this one, and the idea was to grow triangulations from their borders in the following way: we simply attach a triangle to the border, or we connect two second neighbors by a new edge, creating a new face. And actually, when we treat these networks using simple ideas from complex networks, we all the time have a temptation to forget that triangulations are actually about their faces. We want to speak about vertices, but we must be careful; we will see later that this is dangerous. So, three questions. The first is about local structure: the degree distribution, the clustering coefficient, and the correlations of degrees. It's a bit boring and not very interesting, and it can be obtained analytically, exactly and explicitly. The second question is the most important one about networks: what is its dimension? Because this is what distinguishes one network from another most impressively: small world or large world, and how large. The two leading dimensions we are interested in are the Hausdorff dimension and the spectral dimension. [inaudible] And the third point is topology.
Topology in the sense of real topology, the standard one: the properties of a space which are preserved under continuous deformations, whether it is a sphere or a sphere with many handles. And it changes; we will have objects of this kind. So, our idea is to consider all possible transformations of triangulations which keep them triangulations. The best-known and supposedly minimal transformations of this kind are the so-called Pachner moves. You see that there are three of them, and people believe that they are really minimal, but it's not quite right, because there is also this transformation: a splitting along two adjacent edges. You create two new faces here, or, in reverse, you eliminate these two faces by merging them. So there are only two moves, and you can use these transformations to make any possible transformation within the same topology. Suppose we have a triangulation homeomorphic to a sphere. If we have another triangulation of this kind, then we can find a chain of transformations (these splittings, or Pachner moves, or combinations of both) which transforms one into the other step by step. But if we have a triangulation homeomorphic to a torus, then we cannot move in this way to a triangulation homeomorphic to a sphere. Then there is a very naive version of local curvature, which coincides simply with the degree of a vertex: when the degree is 6, the local curvature is zero. So we can say that where we have, say, a hub, the space is very curved at that place. And our models are organized in a very simple way. We choose some vertex, or edge, or face, or some local environment of these objects. We choose them uniformly at random, or with some preference, in a preferential way. And then we apply to this local group these transformations, or some other transformations of this kind.
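As a concrete illustration of such moves, here is a minimal sketch of the edge flip (the 2-2 Pachner move) on a triangulation stored as a set of triangular faces. It is my own toy implementation, not the models from the talk.

```python
# Minimal sketch of the edge-flip (2-2 Pachner) move on a triangulation
# stored as a set of triangular faces. My own toy illustration; the
# talk's models may represent triangulations differently.

from itertools import combinations

def flip(faces, f1, f2):
    """Replace the edge shared by adjacent faces f1, f2 by the opposite
    edge: faces {a,b,c} and {a,b,d} become {a,c,d} and {b,c,d}."""
    shared = f1 & f2                      # the edge to remove, {a, b}
    assert len(shared) == 2, "faces must share exactly one edge"
    c, = f1 - shared                      # vertex of f1 opposite the edge
    d, = f2 - shared                      # vertex of f2 opposite the edge
    a, b = shared
    new1, new2 = frozenset({a, c, d}), frozenset({b, c, d})
    assert new1 not in faces and new2 not in faces, "degenerate flip"
    return (faces - {f1, f2}) | {new1, new2}

def euler_characteristic(faces):
    """V - E + F, with edges collected from the faces."""
    verts = set().union(*faces)
    edges = {frozenset(p) for f in faces for p in combinations(f, 2)}
    return len(verts) - len(edges) + len(faces)

# The octahedron: 6 vertices, 8 faces, homeomorphic to a sphere.
octa = {frozenset(f) for f in [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
                               (5, 1, 2), (5, 2, 3), (5, 3, 4), (5, 4, 1)]}
flipped = flip(octa, frozenset({0, 1, 2}), frozenset({5, 1, 2}))
```

The Euler characteristic check makes the topology preservation visible: V - E + F stays equal to 2 for a sphere before and after the move.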
And in this way we can create a zoo of various models of this kind, and we chose them very different simply because we wanted to get different Hausdorff dimensions. At first, when we started, we hoped that we would just get Hausdorff dimension 4, as in standard random planar graphs in quantum gravity; the idea was that we should get 4. So we played with different transformations, and some of them produce equilibrium models. It's small on the slide, but I can say it in words: you simply choose some triangle, some face, insert into it a new vertex, and create three faces inside; and you can make this move in both directions, randomly choosing the triangle. That is one possibility to get equilibrium. And another possibility is the following. Suppose you have a triangulation; then you choose at random, for example uniformly at random, two faces and merge them together. These faces can be very far from each other, on opposite sides of your manifold. You merge them and then eliminate these two faces, annihilate them. Then you have a hole, a wormhole you'd better say, like a tube inside this surface. So you change the topology: from a triangulation homeomorphic to a sphere you get one homeomorphic to a torus, and so on, increasing the genus more and more. Now, the problem with these dimensions is that there is no analytical way to find them. All the known results for the Hausdorff and spectral dimensions were obtained only for trees, or for some very exotic triangulations, or more generally for exotic simplicial complexes, which can be reduced to trees. So, unfortunately, there is only one way: to make simulations, to generate a very large network, and to measure how the number of vertices within a sphere of radius r in this network grows with r.
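A toy version of this procedure, growing a triangulation by repeatedly inserting a vertex into a random face and then counting vertices within distance r by BFS, can be sketched as follows. This is my illustrative code; the model details, seed and sizes are assumptions, not the talk's simulations.

```python
# Toy growing triangulation: repeatedly insert a new vertex into a face
# chosen uniformly at random (one of the moves described above, used here
# only in the growing direction), then measure N(r), the number of
# vertices within graph distance r of a source. My own sketch.

import random
from collections import deque

def grow(n_insertions, rng):
    """Start from the boundary of a tetrahedron; each step picks a random
    face, puts a new vertex inside it, and replaces it by three faces."""
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    n_vertices = 4
    for _ in range(n_insertions):
        a, b, c = faces.pop(rng.randrange(len(faces)))
        v = n_vertices
        n_vertices += 1
        faces += [(a, b, v), (a, c, v), (b, c, v)]
    return faces, n_vertices

def ball_sizes(faces, source):
    """N(r): number of vertices within graph distance r of `source`."""
    adj = {}
    for f in faces:
        for u in f:
            for w in f:
                if u != w:
                    adj.setdefault(u, set()).add(w)
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    radius = max(dist.values())
    return [sum(d <= r for d in dist.values()) for r in range(radius + 1)]
```

Fitting the slope of log N(r) against log r over the growth region, before the ball saturates at the full network size, then gives the estimate of the Hausdorff dimension the talk refers to.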
Here it is, for the different models from the previous slide; which is which is not important. At first sight they look like plateaus, and a plateau should mean a finite dimension; but when you look more carefully, at the log-log plot of the same figure, you see that some of them never develop this plateau part, even for very large networks. This flat part shows that we already have a finite-dimensional object; and where we only see a peak, it means that maybe a plateau would appear if we passed to very large networks, which are not approachable in simulations. Our students went up to 10^7, and it's not enough. The same can be done for the spectral dimension, with the ordinary Laplace operator: we look at how the spectrum behaves at small eigenvalues. By the way, it's not quite trivial, because it turns out that in the range of very small eigenvalues there is some deviation from the expected behavior due to a finite-size effect, which not only affects the position of the second-lowest eigenvalue (the lowest is zero, as you know) but also spoils the linear dependence close to the smallest eigenvalues. But nonetheless, this is what it shows. And maybe I should immediately tell you about these networks with varying topology. We simply generated them by using a combination of some of these transformations, like Pachner moves or the other ones; we merged and annihilated faces, as I told you, and the number of these wormholes grew linearly. And you see the degree distributions. I simply didn't show the degree distributions for our other models, which were obtained analytically; for this model I was too lazy to get it, though it was possible to do it analytically, exactly, by the way. This is a result of simulations. And you see some hint of a condensation phenomenon; unfortunately I didn't check it analytically. I think that actually there is still no condensation.
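The spectral dimension mentioned above can equivalently be read off from the random-walk return probability, P(t) ~ t^(-d_s/2), instead of the small-eigenvalue behaviour of the Laplacian. Here is a self-contained toy check on the one-dimensional lattice, where d_s = 1 exactly; this is my own illustration, not the talk's Laplacian computation.

```python
# Estimating the spectral dimension from the random-walk return
# probability P(t) ~ t^(-d_s/2), an equivalent diagnostic to the
# small-eigenvalue behaviour of the Laplacian. Checked on the 1D
# lattice, where d_s = 1 exactly. My own illustration.

import math

def return_prob_1d(t):
    """Exact probability that a simple random walk on the integers is
    back at the origin after t steps (zero for odd t, by parity)."""
    if t % 2:
        return 0.0
    return math.comb(t, t // 2) / 2 ** t

def spectral_dimension(t1, t2):
    """Estimate d_s from the log-log slope of P(t) between two even times."""
    slope = (math.log(return_prob_1d(t2)) - math.log(return_prob_1d(t1))) \
            / (math.log(t2) - math.log(t1))
    return -2 * slope

d_s = spectral_dimension(100, 400)   # close to 1 for the 1D lattice
```

On a simulated network the same slope would be extracted from the measured return probability of a walk, with the same finite-size caveats at long times that the talk notes for the smallest eigenvalues.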
So that is for future work. This slide just shows, for various models from the zoo, a set of Hausdorff and spectral dimensions; as it should be, the spectral dimension cannot be larger than the Hausdorff one. It was possible to observe that we have a spectrum of different Hausdorff and spectral dimensions, and they are not four; some of them, for example, are smaller. And by varying the parameters of our models we can, in some cases, even vary these dimensions continuously. But this is more representative. For equilibrium triangulations we had no solid results for the dimensions, because our equilibrium networks were essentially smaller than the growing ones: for equilibrium you must let your system relax, and it turned out that this relaxation is very slow, so it was difficult to approach. But nonetheless, for the model I described, creating a vertex within a face and vice versa, we got a Hausdorff dimension of about 2 and a spectral dimension of about 1.4. And I can indicate a parallel with the ensemble of equilibrium random trees: you simply consider all possible trees, for example of a given size, and each member of this ensemble has the same statistical weight, for example 1. For them it is known that the Hausdorff dimension is 2 (it is a two-dimensional graph in the Hausdorff sense) and the spectral dimension is 4/3. Here we have something of that kind. I don't want to make intermediate conclusions, but simply pass to another problem, concerning surfaces and the networks enclosed within these surfaces. It's a very different problem, related to self-assembling systems. The point is that in nanotechnology now, when they produce drugs, they use the following trick: they try to create small boxes with holes, into which they put the drugs, and things of that kind. So what they do, in abstract terms, is they simply cut.
They prepare a flat cut of the shell, and then they heat it in some way, or apply some chemical reaction, and it folds itself into the box. Unfortunately, I forgot to put here a very impressive picture from the cover of one of the recent Nature issues; they produce beautiful ones. So there is a practical problem of this kind. And it turned out that for this self-folding it is very important how you prepare the flat cut; in their terms the flat cut is called a net, and the result is called a shell. It's important to prepare this net in such a form that you get the maximal yield of your technological process. And it was found that the maximal yield occurs in the following situation, if you apply two design rules. The first rule is simply a geometrical one: you demand that the number of vertices with a single edge cut be maximal. Here is just a demonstration of what a single edge cut means in their terms: these are vertices with a single edge cut, so the number of faces around them in the net is the same as in the shell. So it's clear. And so our aim will be to find the full set of nets with this maximal number of such vertices. Then, after we have this set built using this constraint, we can apply any other rule, which depends on the technology. It turns out that in technology, for some reason, they like to demand, for example, the minimal radius of gyration of the net, but that is not necessary at this stage. So we were interested in the first design rule. And this step is NP-hard. Up to now, what people did was simply generate all possible nets at random and select from them the nets maximizing these single-edge-cut vertices. At first they started with very small shells, and it worked; and they proceeded, up to very recently; before us they had reached already rather large shells.
But it turned out that the number of nets is huge; I will show that it can be 10^8 for rather compact shells. So you cannot solve this problem in that way. So we developed a very simple algorithm which allows you to find the complete set of these nets for any shell of reasonable size; and if you want it to be huge, then our algorithm can very easily be modified so that it finds these optimal nets with very high probability, which goes to zero... which goes to one, I wanted to say; apparently I wanted to deceive you. So the idea is very simple. The first desire, when you have this graph, is to treat each of its faces as a vertex and each adjacency between faces as an edge. But it turned out that this doesn't work, and it is better simply to consider the shell graph consisting of the original vertices and the original edges. And it turned out that this design rule simply corresponds to finding the set of maximum leaf spanning trees of this graph. So the idea is almost trivial, and using it we can find the complete set for shells up to rather large sizes. And this, for example, is our result for rather small shells: it shows the fraction of spanning trees which are maximum leaf spanning trees, out of the complete number of spanning trees, and you see that this fraction can be extremely small. So what people normally did up to now, searching at random, choosing spanning trees one by one to find the maximum leaf spanning trees, was very strange. And you see that for different shapes, for different shells, the dots lie around this line, and the line is obtained analytically, as a kind of estimate.
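The correspondence just described, where the geometric design rule becomes a maximum leaf spanning tree problem on the shell graph, can be illustrated by brute force on the smallest interesting shell, a cube. This is my own toy demonstration; the problem is NP-hard in general, which is exactly why exhaustive enumeration only works for tiny instances and a better algorithm matters.

```python
# Brute-force illustration of the design rule: enumerate all spanning
# trees of a small shell graph (the cube) and keep those with the
# maximum number of leaves. Toy demonstration only; my own code.

from itertools import combinations

def is_spanning_tree(n, edge_subset):
    """n-1 edges form a spanning tree iff they are acyclic (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False              # this edge would close a cycle
        parent[ru] = rv
    return True

def leaf_count(n, tree_edges):
    """Leaves of the tree, i.e. vertices of degree one; these correspond
    to the single-edge-cut vertices of the net."""
    deg = [0] * n
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    return sum(d == 1 for d in deg)

# Cube graph: vertices are 3-bit labels, edges join labels differing
# in exactly one bit.
n = 8
cube_edges = [(u, u ^ (1 << b)) for u in range(n) for b in range(3)
              if u < u ^ (1 << b)]
trees = [es for es in combinations(cube_edges, n - 1)
         if is_spanning_tree(n, es)]
best = max(leaf_count(n, t) for t in trees)
```

From the enumerated trees one can then compute directly the fraction of maximum-leaf trees among all spanning trees, in the spirit of the plot just discussed.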
Here is another case, with even better agreement between the dots and our estimate. Yes, and because we submitted it just before the World Soccer Cup, our example was a soccer ball, and here we already applied the second part of the algorithm, minimizing the gyration radius. You see that even here the situation is not quite trivial: very close to zero there is a kind of derivative you can treat as infinite, so the optimal net at this point is the most compact in space; this is the second one, this is one in the middle, and this is the worst one in the complete set of nets. We have a very large number of nets, and many of them are isomorphic to each other; there are about four thousand with the maximum. So, yes, I have completed the three parts of my talk, and I'm a bit puzzled about what to say in conclusion, because they are kind of orthogonal to each other. But because of these nice words, I would conclude by saying that these words from 1998 and 1999 are now completely absurd; it's a joke. There are thousands, tens of thousands, or even more completely academic works about this topic, and one of them was presented in the first third of my talk. As for this leaf algorithm, we think it's the most efficient at the moment. No, it's not simply efficient: it allows you to find the set completely, for any net of reasonable size, for any net which they use in technology. At first this algorithm ran on a cluster, but it was improved and improved, and now it runs on this kind of laptop. I can only repeat that it is possible to treat very large nets by using an approximate version of this algorithm; we actually have this version, but there is simply no need for it just now, because we have solved this problem for any practical needs. We even tried to get a patent for this algorithm, but it's still in process. Question: So, what was your
second part? One of the ensembles you were showing had a large plateau; what is this ensemble, how did you grow it, with which of the rules? Answer: This is a growing one. One second... this is the growing model, and the growth was based on selection: first you select a random vertex, then you select one of its edges at random, and then you split it in two; it grows in this way. But the point is that this large plateau shows that you can see this already in rather small graphs, because, for example, this plot is for finite-size graphs, all of the same size. If you increase the size of the graph, the plateau simply becomes bigger, and for an infinite graph the plateau is infinite. But here, for example, you cannot yet say whether there will be this plateau; its growth is perfect. It's simply one of the models you can generate. Question from a data scientist, about the second part of the talk: Can we imagine applications of these evolving manifolds for learning data manifolds, constructing and growing approximators? Maybe you know of such work? Answer: I don't know, I don't know. And one point which I didn't mention: when I speak about these triangulations, I ignored the lengths of the edges. In principle, when you want to triangulate a real body, of course all these edge lengths must be different. I ignored this in these models, but in principle we know how to take it into account. In your case, I think you would also ignore it. Yes, but I don't know; I have no idea.