Well, thank you to Enzo and all four organisers for convening this meeting, it looks like a great line-up and I'm looking forward to hearing the talks this week, and thanks especially to Christian for inviting me. So, moving to computational chemistry now, I'd like to present a computational study of knot formation in clusters, and I think the topic of clusters makes this a unique talk this week. I don't think anybody else is talking about clusters, and the particles in the cluster are not permanently bonded to each other. So this example falls into the category of cases where the chain through space, the curve through space that hosts the knot, has to form at the same time as the topology itself, unlike in a polymeric molecule, where one already has a chain. The techniques that we're using to study these clusters are drawn from energy landscape theory. Some of the terminology of energy landscapes is commonly used to understand various phenomena in chemistry, physics and related fields, but I think relatively few people are using the detailed tools of the theory as their primary line of attack on a problem. So I'll take a somewhat didactic approach in the hope of explaining how energy landscape theory can be used to tackle some things that would otherwise be very difficult to do for this sort of system. The methods are more generally applicable, so they're not confined to this particular topic. I've given my talk a little subtitle, which you can take as my contents slide: I'll be telling you about the structures that we observe, but also the pathways for their formation and the methods used to study them. Here are my collaborators on this project. Chief amongst them is David Wales. David, back in the day, was my PhD supervisor, but we've continued collaborating from time to time on projects like this one. Special credit goes to James Farrell, because he produced some of the graphics that I'll be using in this talk.
So this all started when I was thinking about dipolar fluids, fluids of dipolar particles, in particular the phase diagram of dipolar particles, which is still a controversial topic, and also gels that form from branched networks of dipolar particles. These are rather difficult things to deal with, but I grew up studying clusters, and one of the motivations for studying clusters is that by understanding how a small group of particles interact and behave, you can start understanding a bit more about bulk phases, which generally involve many more particles and can be harder to deal with. So I decided to look at clusters of dipolar particles as well. For my potential, I took this thing here, the Stockmayer potential, which is probably the simplest model of dipolar particles with isotropic attraction that you can have. It consists of a Lennard-Jones part, the standard Lennard-Jones potential, which is isotropic and encourages the formation of compact, highly coordinated structures, often with icosahedral symmetry. Added to that is this term, the interaction between two point dipoles, and dipoles, as we know, like to form chains head to tail. Those two things are not very compatible. The strength of the anisotropic part is controlled by the scalar parameter mu: when mu is zero, we just have a plain Lennard-Jones potential, and increasing mu causes the dipolar term to dominate. So each particle has five degrees of freedom: three translational and two orientational. This potential, I think, was introduced as a model of dipolar molecules. However, like many simple potentials, it's becoming important again in a different context, the context of colloidal materials. This is a snapshot of some colloidal particles that interact by magnetic dipoles. The formula for the interaction of magnetic dipoles is just the same as for electric dipoles; they like to form chains.
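For anyone who wants to experiment, the pair energy just described can be sketched in a few lines. This is a minimal illustration in reduced units, not the code used in this work; the function name and the absence of any cutoff are my own choices.

```python
import numpy as np

def stockmayer_pair(r1, r2, u1, u2, mu=1.0, eps=1.0, sigma=1.0):
    """Lennard-Jones plus point-dipole pair energy in reduced units.

    r1, r2 : particle positions; u1, u2 : unit dipole vectors.
    The dipolar term favours head-to-tail chains, while the isotropic
    Lennard-Jones term favours compact packing: the competition at the
    heart of this talk.
    """
    rvec = r2 - r1
    r = np.linalg.norm(rvec)
    rhat = rvec / r
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6 ** 2 - sr6)
    dd = (mu ** 2 / r ** 3) * (
        np.dot(u1, u2) - 3.0 * np.dot(u1, rhat) * np.dot(u2, rhat)
    )
    return lj + dd
```

At the Lennard-Jones minimum separation, a head-to-tail pair gains the full dipolar bonus (this is the epsilon-star idea that comes up shortly), while a side-by-side parallel pair is dipolar-repulsive.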
One of the nice things about working with colloids is that you can control the interactions between the particles much more finely than you can with molecules, where you would have to change the molecule itself. For example, for colloids you can tune the strength and range of attraction using depletion effects, via the size and concentration of a polymer, or the solvent quality, or by grafting polymers onto the surface of the colloids themselves. Last week I was at a meeting in Vienna on structure formation in soft colloids, where I spoke to Albert Philipse, who's from Utrecht. Albert is a very distinguished colloid experimentalist, and he said that it shouldn't be too hard to create colloidal particles with something reasonably closely approaching this sort of potential by using grafted polymers. He pointed out that the experimentalists actually spend quite a lot of time trying to suppress the isotropic attraction in their colloids, whereas the prediction of this work is that the isotropic attraction is essential for some of the interesting effects. For Lennard-Jones systems we normally work in reduced units where the pair well depth is the unit of energy. The dipolar term here modifies that well depth considerably, so throughout I'll be using a quantity I'm going to call epsilon star, which is the modified pair well depth in the most favourable, head-to-tail configuration. So we've got a cluster of these Stockmayer particles. The first question to ask is what structures they like to form, and generally that question refers to the lowest energy structure. We need to globally optimise this rather frustrated system, which is trying to do two things at once.
For a cluster of n particles, which I will denote ST subscript n, there are 5n degrees of freedom, so we have a very high-dimensional function, and in general the number of metastable states on a potential energy surface increases exponentially with the number of particles, so this rapidly becomes astronomical. Finding the lowest energy structure is a difficult problem. A common approach is simulated annealing, gradually cooling down a system: if you have a cartoon energy landscape like this, you hope that by cooling gradually it will find its way to the lowest energy structure. But if the landscape looks a bit like this, of course, one could end up over there, and then the system would be too cold to overcome the barrier to the correct structure. We can anticipate that this is going to be the situation in the system I've just described, because of the competing effects, so simulated annealing is very likely to fail. A big step forward in global optimisation was made with the introduction of the basin-hopping algorithm by David Wales nearly 20 years ago, and the idea is quite simple. It's a bit like doing an ordinary Monte Carlo simulation, where a small random perturbation is made to a structure, and then the change in energy that it causes is compared to a thermal energy to decide whether to accept or reject the step. The only difference in basin-hopping is to insert an additional step, a local minimisation, and it's the locally minimised energies that are compared when deciding whether to accept or reject a step. This has the effect of transforming the potential energy surface into a set of plateaus, because all points within the basin of attraction of a given minimum are mapped onto the energy of that minimum. This has the obvious advantage of removing the local barriers between adjacent minima, which should help.
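The basin-hopping loop itself is only a few lines. Here is a toy one-dimensional sketch, nothing like the production codes: the double-well function, the crude gradient-descent minimiser and all parameters are invented purely to show the structure of the algorithm (perturb, minimise, Metropolis-compare minimised energies).

```python
import math
import random

def f(x):
    # toy double well: global minimum near x = -1.02, local minimum near x = +0.94
    return (x * x - 1.0) ** 2 + 0.3 * x

def local_minimise(x, step=5e-3, n=2000):
    # crude gradient descent with a central-difference derivative
    for _ in range(n):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * g
    return x

def basin_hop(x0, n_steps=100, hop=1.5, temperature=1.0, seed=0):
    rng = random.Random(seed)
    x = local_minimise(x0)
    e = f(x)
    best_x, best_e = x, e
    for _ in range(n_steps):
        # random perturbation, then the crucial extra step: local minimisation
        x_new = local_minimise(x + rng.uniform(-hop, hop))
        e_new = f(x_new)
        # Metropolis accept/reject on the *minimised* energies
        if e_new < e or rng.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e
```

Started in the wrong well, the transformed, plateau-like surface lets the walk step across to the global minimum, which is exactly the point made above.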
But it still doesn't explain how you'd cope with a surface like this, because if you want to get there from here, you still have to overcome a barrier. So although this isn't often covered, I think it's useful to understand why basin-hopping really is successful. We can do that using a test case, a 38-atom cluster of pure Lennard-Jones particles. We'll come back to this in the Stockmayer case to see what happens when we add the dipoles. It turns out that for this particular cluster, the lowest energy structure is a chunk of face-centred cubic lattice, which looks like this, but there is a vast number of competing low energy structures packed in an icosahedral motif. The potential energy surface is rather like this: there are different parts of it which can't be reached very easily from each other. To see one reason why basin-hopping works, we need to take a higher-dimensional view. If we go now to a two-dimensional cartoon and imagine trying to get from one minimum to another, typically we'd expect that to go via the saddle point here. But by mapping all points within the basin of attraction onto a plateau like this (hopefully, yes, you can't see the colours from here), all points within that basin have the same energy, and the same here. That means that the interface between those two basins of attraction is now actually a hyperplane, and in the general case, the dimensionality of that plane is just one less than the dimensionality of the full space. So this makes moving between minima somewhat easier. But there's also a thermodynamic reason. If we look at the population of different families of structures as a function of temperature for the original cluster, at low temperatures the face-centred cubic structures are most probable, being by definition the global minimum; as we increase the temperature, icosahedral structures take over, and then disordered liquid-like structures after them.
The problem is, you can see that there's a very narrow region where the liquid-like and the FCC structures have a significant probability at the same temperature. So as we cool down from high temperature, there's only a very small chance we'll actually find ourselves in the right place for global optimisation. That shows up as a solid-solid transition in the heat capacity before the main melting transition. So this is why this is, or used to be regarded as, a challenging case for global optimisation. We can look at the thermodynamics of the transformed surface. The effect is to broaden these probability peaks, so that the switch-over from FCC to liquid-like has a large region here where both sets of structures have a reasonable probability. That means it's possible to find the way to the global minimum much more easily, and the heat capacity has been broadened into a very, very wide peak without the low-temperature transition. So let's take a look at applying that to the Stockmayer clusters with the dipoles, taking the case of 13 particles as an example. At a very low dipole moment, all that happens is that the dipoles have to decorate the Lennard-Jones structure. So if we start off with the icosahedron, and now the colours here correspond to the directions of the dipoles, you can see they just form some loops. The symmetry has been slightly distorted from Ih to D3. At the opposite extreme, for a very strong dipole moment, of course we get a ring: a chain forms, and that chain closes up to get the last pairwise energy back. In between, we see global minima that are intermediate, like these two stacked six-membered rings. The particle in the middle is unhappy with regard to its dipole: it doesn't know which way to point, but it gets a lot of Lennard-Jones energy back because it has high coordination. Eventually it pops out to make two staggered rings.
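The competition between a low-energy family and an entropy-rich family described above can be illustrated with a toy two-level superposition model. The energies and degeneracies here are made up purely for illustration, and vibrational factors are ignored; it just shows why the high-degeneracy (icosahedral-like) family overtakes the low-energy (FCC-like) one on heating, over a narrow temperature window when the degeneracy ratio is large.

```python
import math

def populations(energies, degeneracies, T):
    """Equilibrium occupation probabilities of families of minima in a
    simple superposition approximation: p_i proportional to
    g_i * exp(-E_i / T), in reduced units with k_B = 1."""
    w = [g * math.exp(-e / T) for e, g in zip(energies, degeneracies)]
    z = sum(w)
    return [x / z for x in w]

# toy model: family 0 is FCC-like (low energy, few states),
# family 1 is icosahedral-like (higher energy, many states)
E = [0.0, 1.0]
g = [1, 100]
```

The crossover sits at T* = dE / ln(g1/g0), here about 0.22, and the larger the degeneracy ratio, the sharper the switch, which is why the original landscape leaves only a narrow window for finding the FCC structure.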
That sort of sequence from compact to ring-like is quite characteristic. Here's a structural map for the structures that we see, for clusters up to 55 atoms (55 is a magic number for the Lennard-Jones system), and this axis is the strength of the dipole moment. As expected, for a low dipole moment we see relaxed Lennard-Jones structures. At sufficiently large dipole moments we see rings, and beneath that, stacked rings. The stacked rings are more curved, which is less good for the dipoles, but the stacking gives us back some of the Lennard-Jones energy. The interesting stuff really happens in between, where neither the isotropic part of the potential nor the highly directional part can dominate the other. That's where we start seeing knots and links. I hadn't anticipated this in advance, but of course it does make intuitive sense in retrospect, because by forming a chain, the dipolar part of the potential is at least partially satisfied, but by ravelling that chain up into a compact structure, some of the isotropic potential is also satisfied. So it's a highly frustrated system, but in the end it turns out that a knot can be energetically optimal. Here's a gallery of some of the structures that we've seen in this system. Here's an example of the simplest knotted topology, the trefoil knot. Coming back to the 38-atom cluster, the one that had the challenging behaviour even without the dipoles: that cluster can actually form two different kinds of knots. At a relatively high dipole moment it will form a trefoil, because it can do that with quite an open structure where the chain doesn't have to bend too much, but at a weaker dipole moment it will actually form a more complicated 8_19 knot. This is more tightly curved, which is less good for the dipoles, but if the dipole is weaker, then that can be traded off against having a more compact structure, which is favoured by the Lennard-Jones potential. You can even see how the Lennard-Jones potential gets its way.
For example, in this structure the colour is now changing smoothly along the chain to highlight it, and you can see that this particle here is surrounded by a very neat hexagon of particles, which is exactly the sort of structure that the Lennard-Jones potential tries to encourage. Here in this cluster you can see a little tetrahedron, another very strong Lennard-Jones motif. This structure, although it's actually an unknot, is really a set of face-sharing tetrahedra twisting round in a wreath. We also see some linked structures: the stacked rings are just a trivial unlink, but we even see some three-component links. Again, you can see how simple approaches like simulated annealing might struggle with this sort of thing. I should say a word about how we identify the structures. We do so using the Jones polynomial, which was adequate for our level of complexity. It's important to be able to define the chain, though, because remember these particles are not permanently bonded, so how do we extract a definition of the chain unambiguously? We're quite strict about this, and it can be done with just one parameter defining the nearest neighbours of a particle. If we take a given particle and ask which of its neighbours in its northern hemisphere has the most aligned dipole, using the dot product, and take that to be the next particle in the chain, and keep going like that, we can define a chain. There's no guarantee that doing that in the reverse direction would reproduce the same chain; in general, some arrangement of dipolar particles will not produce a self-consistent chain if you go backwards and forwards. But by the time that does work, that is, going forwards and backwards produces the same result, the chain is very well defined, is intuitively what you would pick out if you looked at it, and can be identified in an automated way unambiguously. It's interesting to look at what sort of knots we can form.
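The chain-extraction rule just described can be sketched as follows. This is a schematic reconstruction from the description above, not the actual analysis code; the single neighbour cutoff and the function names are illustrative.

```python
import numpy as np

def successor(i, pos, dip, cutoff=1.5):
    """Next particle after i: among neighbours within `cutoff` lying in
    i's 'northern hemisphere' (ahead along its dipole), pick the one
    whose dipole is most aligned with i's (largest dot product)."""
    best, best_dot = None, -np.inf
    for j in range(len(pos)):
        if j == i:
            continue
        rij = pos[j] - pos[i]
        if np.linalg.norm(rij) > cutoff or np.dot(dip[i], rij) <= 0.0:
            continue
        d = float(np.dot(dip[i], dip[j]))
        if d > best_dot:
            best, best_dot = j, d
    return best

def predecessor(i, pos, dip, cutoff=1.5):
    """The same rule applied in i's 'southern hemisphere'."""
    best, best_dot = None, -np.inf
    for j in range(len(pos)):
        if j == i:
            continue
        rij = pos[j] - pos[i]
        if np.linalg.norm(rij) > cutoff or np.dot(dip[i], rij) >= 0.0:
            continue
        d = float(np.dot(dip[i], dip[j]))
        if d > best_dot:
            best, best_dot = j, d
    return best

def extract_chain(pos, dip, start=0, cutoff=1.5):
    """Follow successors from `start`; accept the chain only if walking
    it backwards link by link reproduces it (the consistency test)."""
    chain = [start]
    while True:
        nxt = successor(chain[-1], pos, dip, cutoff)
        if nxt is None or nxt in chain:
            break
        chain.append(nxt)
    for a, b in zip(chain, chain[1:]):
        if predecessor(b, pos, dip, cutoff) != a:
            return None  # not self-consistent: no well-defined chain
    return chain
```

Once a self-consistent closed chain is in hand, a knot invariant such as the Jones polynomial can be evaluated on it to assign the topology.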
We've seen some rather complicated ones, up to ten crossings. Certainly the torus knots are common amongst the ones that we observe. Here's the 8_19 knot drawn as an idealised torus knot, with and without the doughnut, and here's one of the clusters that we observe with that topology. You can see that it has actually adopted the same sort of configuration as the torus rendition of the 8_19 knot, where it's very clear that the torus is there. This one actually has point group C2, a real, rigorous symmetry. As David has pointed out, this is the same topology that Cristian and his student Guido Polles observed and published last year, from rather different fragments: helical fragments which self-assembled into the same topology. It's interesting to dwell on which structures are favoured and which aren't. Out of the torus knots that are out there, we've seen the trefoil, 5_1, 7_1 and 8_19, but not the 9_1 for some reason. They also don't appear in order of complexity: for example, the 7_1 knot is relatively rare, and we saw the 10-crossing knot before the 8-crossing knot. It is obviously to do with packing. Here's a knot that isn't a torus knot, but it has good stability because it lends itself to a regular packing; you can see effectively two interlocking helices here. That suggests that not all knots are equally designable, and it depends on how the topology lends itself to a structural manifestation. We also see some links. One of them is a torus link, this three-component link. This one looks like it might be a torus link, but in fact it isn't: it's the Solomon link, and it's actually quite common; we see that one a lot. This example has a strict C2 symmetry, a C2 point group. I think the Borromean links, although they would be nice to see, are probably a bit unlikely because of the less compact way in which they would have to form, but there are always surprises.
I think we're forced to the conclusion that some topologies are more amenable to symmetric packings than others, and the connection between symmetry and low energy is another intriguing one, which maybe we can take up over a beer one evening. That's the structural side of things. What about going beyond that? What if we want to ask questions like: can those low energy structures actually be reached, and are there other competing metastable structures out there that are important? This is where energy landscape ideas really come to the fore. I think when most people talk about energy landscapes, what they're really referring to is a free energy landscape. Free energy intrinsically involves integrating over some coordinates, effectively to get a partition function which is parametric in some variables, and the logarithm of that is a free energy. Here's just one example of a free energy landscape, taken from my own work with Ivan Coluzza a few years ago, where we were looking at the cooperative folding and binding of a protein. There we knew that we were interested in how far the protein was from the surface and how many native contacts it had, as a measure of how folded it was. We knew in advance what to look for, and we were able to average over all other coordinates to get a two-dimensional free energy landscape with our two order parameters, like this, with the colour representing the free energy. That obviously reduces the dimensionality of the problem and makes it more plottable, but what if you don't know what the relevant structural parameters are? That's one advantage of working with the potential energy landscape rather than the free energy landscape. There we deal with the full, in this case 5n-dimensional, surface, which can be characterised by focusing on the stationary points. The most important stationary points are the minima, which are metastable structures.
After that come the first-order saddle points, which in a chemist's language are the transition states: the saddles with exactly one negative Hessian eigenvalue, and they connect the minima. You could imagine plotting out a potential energy surface as a graph where the minima are the vertices and the saddle points are the edges. This requires building up a database of transition states and minima, which is a difficult task. Normally the number of such structures is astronomical; we can't hope to get them all, so we need a way of focusing on the important ones, which are generally the lower energy ones. Once we have that, we can try to see how the landscape is organised at a global level, and we can also use the database to generate approximate thermodynamics and dynamics if we have a simple enough model for the density of states at the stationary points. First of all, how, in principle, can we visualise this very high-dimensional object? That can be done using disconnectivity graphs. The idea of disconnectivity graphs is to group minima together into sets that can be mutually interconverted by pathways that lie entirely below a certain energy threshold. I think the lines are a little bit faint on this diagram; maybe you can just about make them out. If I've got a sketch potential energy surface that looks like this, and I choose an energy threshold up here, all these minima are mutually accessible by paths that never exceed the threshold that I've chosen, so they get represented by a single vertex on the disconnectivity graph. If I choose a lower threshold, you can see that these minima start getting cut off: this one can't be converted to that one without exceeding my threshold, so I'll get several vertices at that level. They're joined by edges to the parent vertex from which they came.
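The grouping step behind a disconnectivity graph is a straightforward union-find over the transition state list at each threshold. A minimal sketch under assumed data structures (minima as a name-to-energy dictionary, transition states as triples), not any particular production code:

```python
def superbasins(minima, transition_states, threshold):
    """Group minima that can interconvert via pathways lying entirely
    below `threshold`.  minima: {name: energy};
    transition_states: [(a, b, ts_energy), ...].
    Returns the mutually accessible sets, one graph vertex each."""
    # only minima below the threshold appear at this level of the graph
    parent = {m: m for m, e in minima.items() if e <= threshold}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # union the two minima of every transition state below the threshold
    for a, b, e_ts in transition_states:
        if e_ts <= threshold and a in parent and b in parent:
            parent[find(a)] = find(b)

    groups = {}
    for m in parent:
        groups.setdefault(find(m), set()).add(m)
    return list(groups.values())
```

Sweeping the threshold downwards and recording how the groups split reproduces the branching of the tree described above.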
If we have a potential energy surface that figuratively looks like this, like a funnel, a set of convergent pathways down to the minimum, then the disconnectivity graph will come out looking like this tree, with short branches coming off at the sides and leading down to the minimum. An example of a system that behaves like that is simple Lennard-Jones clusters with complete outer shells, which readily find their structure. It's also possible to have a set of convergent pathways to the global minimum but with bigger barriers; in this case, the branches will just be longer and droop down alongside the main stem. A good example of that is the buckminsterfullerene molecule, where you can move hexagons and pentagons around the surface by twisting carbon-carbon bonds, but that's energetically very expensive. Or you might find yourself with a situation where there are many low-lying minima which are competing energetically, separated by a hierarchy of barriers, and then the disconnectivity graph looks qualitatively different. An example of a system like that is water clusters, which have very rough energy landscapes. The advantage of this representation is that you can always draw it in two dimensions, because the only continuous variable is the energy. To be able to plot these graphs, we need to build up a database of minima and transition states, which is a difficult job, so I'll explain how we do that. Again, these techniques are applicable to lots of other systems, not just knotted clusters. Now, although this is a technical point, it turns out to be absolutely crucial, and it held us up for a long time: having an appropriate coordinate system. You might think that for a dipolar molecule, spherical polar coordinates would be ideal, with just the right number of variables. But there are the usual problems with spherical polars: the redundancy of phi when theta is 0 or pi, and the fact that theta really shouldn't go beyond pi.
The breakthrough here was made by Dwaipayan Chakrabarti, one of the co-authors on our paper on the knots, who worked out a general way of doing this using an angle-axis framework. You can describe the orientation of any rigid body by taking a reference orientation, defining a rotation axis with a vector, and using the magnitude of the vector to tell you how large the angle of rotation should be. Although this introduces an additional redundant variable, it turns out that you can calculate the zero-eigenvalue modes analytically and project them out during optimisations. Furthermore, you can deal with all the nasty derivatives from the coordinate system separately, and use this for any rigid body with site-site interactions, even if they're anisotropic sites like the dipole. That was a really important and very general breakthrough. To build up the database, remember where we are: we start off with maybe just a few very different structures that we found from the global optimisation, some low-lying structures. We need to connect them up, and they may be very far apart in configuration space. We do this using a family of methods called nudged elastic band methods. The idea is to take the two points in configuration space, now the two dimensions of the board, and draw, first of all, a straight line between them. You've got to imagine that this is a high-dimensional space: all the particles are just going to move in straight lines between where they were and where they should be in the two structures, which obviously involves some very high energies. We place a set of replicas of the system, each one a copy of the entire system, at regular points along this path, smoothly interpolating, and connect these replicas by artificial springs. Then we allow the band to relax on the energy landscape, going downhill towards minima and transition states. You've got to imagine some complex set of contours on this plot.
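As a brief aside, the angle-axis description of orientation can be sketched with Rodrigues' rotation formula. This standalone snippet shows only the coordinate convention; the hard part of the breakthrough, projecting out the redundant zero mode analytically during optimisation, is not shown.

```python
import numpy as np

def angle_axis_to_matrix(p):
    """Rodrigues' formula: the rotation matrix for angle-axis vector p,
    whose direction is the rotation axis and whose magnitude is the
    rotation angle applied to the reference orientation."""
    theta = np.linalg.norm(p)
    if theta < 1e-12:
        return np.eye(3)
    k = p / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```

Unlike spherical polars, this parameterisation has no special orientation where a coordinate becomes redundant, which is what makes it so well suited to optimisation.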
The springs help to stop the replicas all just piling into the nearest minima at the bottom, and give us an approximate pathway between these two very distant minima that is forced to go over at least some of the saddle points. It won't go neatly through the exact path, but it gives us a starting point: it should go close to a sequence of minima and transition states that connect the structures we're interested in. This will only be approximate. The "nudging" in nudged elastic band refers to the fact that you have to be quite careful about which components of the force you allow to take effect. The next thing to do is to tighten up the transition states. If we're passing near a transition state, we need to locally optimise it: we need to walk uphill in exactly one direction, along a soft mode, and downhill in all other directions. This can be done without explicit calculation of the Hessian, using hybrid methods. From the transition state, there is exactly one unstable mode, which we can follow downhill to the two connected minima. Having moved slightly from our original path, we might find that the connected minima are not the ones we expected. That's okay; we're building up a database of stationary points. We then need to reconnect these structures by reapplying the elastic band methods, and because the gap is getting smaller and smaller, this eventually converges. We end up with at least one path that connects the two different structures. It's important, given the very large number of structures out there, that we have the relevant minima and transition states, which are generally going to be the low-lying ones: those are the ones that dominate both kinetically and thermodynamically. We need to augment the database by trying to bias the search towards low-lying structures.
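The interpolation, springs and nudging described above can be sketched on a toy two-dimensional surface. The surface, the minima at (-1, 1) and (1, 1) with a saddle at the origin, and all parameters here are invented for illustration; real calculations act on the full 5n-dimensional landscape.

```python
import numpy as np

def V(r):
    # toy surface: minima at (-1, 1) and (1, 1), saddle at (0, 0), V = 1
    x, y = r
    return (x**2 - 1.0)**2 + 2.0 * (y - x**2)**2

def gradV(r):
    x, y = r
    return np.array([4.0 * x * (x**2 - 1.0) - 8.0 * x * (y - x**2),
                     4.0 * (y - x**2)])

def neb(a, b, n_images=9, k=1.0, step=0.01, n_iter=3000):
    """Nudged elastic band between fixed endpoints a and b.  Only the
    perpendicular part of the true force and the parallel part of the
    spring force act on each replica: this selective projection is
    the 'nudging'."""
    path = np.array([a + (b - a) * t for t in np.linspace(0.0, 1.0, n_images)])
    for _ in range(n_iter):
        new = path.copy()
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)           # local tangent estimate
            f_true = -gradV(path[i])
            f_perp = f_true - np.dot(f_true, tau) * tau
            f_spring = k * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * tau
            new[i] = path[i] + step * (f_perp + f_spring)
        path = new
    return path
```

The straight-line guess between the two minima starts at a much higher energy than the saddle; after relaxation the highest replica sits close to the saddle, which is the starting point for the transition state tightening described next.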
This can be done with another general method called discrete path sampling, which is the energy landscape equivalent of a method you might be more familiar with, the transition path sampling method of David Chandler. Chandler's method involves shooting a trajectory from one structure to another, perturbing it slightly, and then exploring the ensemble of trajectories. Discrete path sampling, the energy landscape equivalent, is to take a sequence of minima and transition states that connect an initial structure with a final structure, and try to find pathways that are as fast as possible. This can be done, first of all, by assessing the flux between the two states by placing the intervening states in steady state. It turns out that the equations can be solved for that if you have at least an approximate formula for the rate of overcoming a single barrier. We use a stripped-back version of transition state theory, which involves the exponential of the barrier height and the ratio of the orders of the point groups of the transition state and the minimum. That's very important; it can be a big factor in the flux. We assume that the dynamics are Markovian, in the sense that the system spends long enough in any one minimum that it can forget where it was before deciding where to go next. Using that, we can compare different pathways. For example, where a barrier is high, we can try to find lower transition states by doing localised searches. We can also try to shortcut the pathway by finding a single connection that bypasses a two-step connection or longer. By biasing the search towards paths with high rates, we can build up a database of the relevant structures. Here's an example. What I'm going to show you is the evolution of the potential energy landscape with increasing dipole moment for the 13-particle cluster.
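The two ingredients just mentioned, a stripped-back rate for a single barrier and the steady-state treatment of an intervening minimum, can be sketched as follows. The attempt frequency nu is a placeholder, and the convention that the point-group factor counts equivalent versions of the pathway is one common choice; this is an illustration, not the production rate expression.

```python
import math

def tst_rate(barrier, T, o_min=1, o_ts=1, nu=1.0):
    """Stripped-back transition state theory rate for escape over one
    barrier (reduced units, k_B = 1): an attempt frequency, an
    Arrhenius factor, and the point-group symmetry factor, which can
    be a big contribution to the flux."""
    return nu * (o_min / o_ts) * math.exp(-barrier / T)

def steady_state_rate(k12, k21, k23):
    """Effective rate for the two-step path 1 -> 2 -> 3 with the
    intermediate minimum 2 in steady state:
    d[2]/dt = 0  =>  flux = k12 * k23 / (k21 + k23)."""
    return k12 * k23 / (k21 + k23)
```

Comparing such effective rates is what lets the sampling decide whether a newly found single connection genuinely shortcuts an existing two-step path.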
Starting with the case where the dipole moment is zero, which is just the pure Lennard-Jones case, we see a classic funnel-shaped energy landscape, with these short branches coming off the side and leading down to the global minimum. This structure readily relaxes. If we just turn on the dipole slightly, all that happens is that each of these local minima has to be decorated with a set of dipoles, which in some cases can be done in more than one way. What we see is a slight fraying of the ends of the branches, but it's basically the same graph; nothing has happened. Here's a bit further on, when the dipole moment has been increased. This is close to a crossover point between global minima: this structure, the centred hexagonal antiprism, has taken over from the icosahedron. Although those look like quite different structures, you can see that the barrier between them on the shortest path is not very high in comparison to the pairwise energy. In fact, the conversion can be done in one step: there's just one transition state that connects these structures, and the path is illustrated here. If we start off with the rings, you can think of the ring structure as a set of edge-sharing triangles like this. If I twist the two halves of the structure with respect to each other, they can clasp to make the icosahedron. On the right-hand side, we can see what's happening to the dipoles. Initially, we have two closed rings of dipoles. In the process of rearranging, those connect up into a single open-ended chain, but then split up again into three loops to make the icosahedron. Here's the energy profile for that rearrangement. The coordinate here is the integrated path length along the very curved path through the Euclidean space of the centres of mass of the particles, not including the orientational coordinates. You can see there is indeed one transition state, and a bit of a shoulder here, but there's only one true barrier.
That's a surprisingly cooperative conversion between two quite different structures. Further along, increasing the dipole moment further, you can begin to see how the energy landscape can have different regions of configuration space with different structures. At some points, each of these will be the global minimum, but they exist on the energy landscape as metastable states even when they're not the lowest structure. You can see that they're clustered together in regions connected by relatively large barriers, but each one has a set of defective structures which resemble it more closely. You can imagine this tree evolving as we tune the one parameter that we've got, the dipole moment strength. As the landscape evolves, minima may appear and disappear via catastrophes in the surface; in the meantime, the metastable structures can be traced. In the process of doing this, we find a set of characteristic rearrangement mechanisms which are common. They're concerted, even when they're quite local. For example, there's the diamond-square-diamond rearrangement, where you have to imagine four particles at the vertices: the diamond squishes into a square and then squishes out the other way. This is a very common rearrangement mechanism in all sorts of other molecular clusters, for example boranes and carboranes. In this context, it has an additional importance, because it allows the network, the topology, to be rewired. The cyan lines here show aligned chains; you have to imagine the rest of the structure continuing like this and like this. What this rearrangement has done is to connect the particles that were previously on the long diagonal of the diamond. That mechanism is the one at the heart of the rearrangement that I showed you in the previous example, the icosahedron: the twisting involves changing diamonds into squares and back into diamonds.
Here's another common mechanism, a butterfly that can fold up into a tetrahedron, again rewiring the topology from these edges to those edges. It's also possible for particles to be exchanged between rings in a concerted budding mechanism: this particle is going to be transferred into the yellow ring. Those all occur frequently. Here's an example. This is the smallest cluster that exhibits a knot, the 21-particle cluster, and here's a trefoil. I've chosen a dipole moment where it's competing very closely with the stacked rings, the unlink structure. You can see very clearly here the two regions of configuration space: we've got a double-funnel energy landscape, explicitly shown here. Again, you can see why simulated annealing would really run aground with this sort of system. The pathway between these two structures is a multi-step process. Here are some snapshots along the way. Starting from the stacked rings, the unlink, the first thing that happens is that they join up to make an unknot. That then rearranges to make a Hopf link. Particles are exchanged, it reconfigures, and finally we get the trefoil. Here's an animation of that process along the minimum-energy path that we found. Sorry about the changing colour scheme: when there are two colours, there are two components; when the colour changes smoothly, it's one component. Here we go. First of all the unlink, then the unknot, the Hopf link, an exchange of particles, a bit of reconfiguration to get everything lined up, and finally the trefoil. Here it is again with the particles shown in a slightly more space-filling way, so you can see the dipoles moving around: unlink, unknot, Hopf link, exchange of particles. Now, here's the energy profile for that reaction. It's quite bumpy. The reaction coordinate is the integrated path length along the very curvilinear path through configuration space; I haven't included the orientational degrees of freedom in the definition of the path length.
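Distinguishing the unlink from the Hopf link along such a pathway can be done with the Gauss linking integral, evaluated numerically over the two closed chains. A self-contained sketch (my own illustration, not the analysis code used in the study), tested on two ideal circles:

```python
import numpy as np

def linking_number(curve_a, curve_b):
    """Discrete Gauss linking integral for two closed polygonal curves.

    Each curve is an (N, 3) array of vertices; the integral
    (1/4pi) * oint oint (r1 - r2).(dr1 x dr2) / |r1 - r2|^3
    is approximated with segment vectors and midpoints.  Returns a value
    near an integer: 0 for an unlink, +/-1 for a Hopf link, etc.
    """
    da = np.roll(curve_a, -1, axis=0) - curve_a  # segment vectors of curve a
    db = np.roll(curve_b, -1, axis=0) - curve_b  # segment vectors of curve b
    ma = curve_a + 0.5 * da                      # segment midpoints
    mb = curve_b + 0.5 * db
    total = 0.0
    for i in range(len(curve_a)):
        diff = ma[i] - mb                        # r1 - r2 for all segments of b
        cross = np.cross(da[i], db)              # dr1 x dr2
        total += np.sum(np.einsum('ij,ij->i', cross, diff)
                        / np.linalg.norm(diff, axis=1) ** 3)
    return total / (4.0 * np.pi)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
ring1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
# Hopf link: second ring in a perpendicular plane, threaded through the first
ring2 = np.stack([1 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
# Unlink: same ring displaced so the two are separate
ring3 = ring2 + np.array([3.0, 0.0, 0.0])

lk_hopf = linking_number(ring1, ring2)    # magnitude close to 1
lk_unlink = linking_number(ring1, ring3)  # close to 0
```

Identifying the trefoil itself needs a genuine knot invariant (e.g. the Alexander polynomial) rather than a linking number, but the same closed-chain extraction applies.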
I think it's fair to say that probably even advanced techniques like transition path sampling would really struggle with a rough path like this. I have to emphasise that we did not know the reaction coordinate in advance; we didn't have to know what to plot here. The energy landscape method bypasses that whole problem. That was Lennard-Jones plus point dipole. Of course, you can go to higher multipoles: you can do Lennard-Jones plus quadrupole. There are various ways to draw a quadrupole, but they all like to form two-dimensional sheets. Now, instead of a competition between compact structures and linear structures, we have a competition between compact, three-dimensional structures, if you like, and two-dimensional structures. Of course, we don't see knots, but we do see some other exotic structures with some really weird point groups, like S6, which is pretty rare as a point group. The equivalent of closing up a chain in the two-dimensional system is to close up a sheet, either into a tube or into a ball. Other topological questions become interesting because, of course, you can't tile a sphere just with squares: there have to be topological defects. That kind of topological consideration comes into play there. Let me mention a couple of other projects that are under way in Durham. We are lucky enough to have a Leverhulme Trust-funded programme grant on knots, so there are a lot of people thinking about knots in Durham. I joined this as a sort of imposter from outside after arriving in Durham. I'm working with Chris Pryor in the maths department, who is developing methods to extract information on protein structure from X-ray crystallography and is using knot fingerprints as one ingredient for doing that. Then, with a biophysicist and a mathematician, I'm also looking at bonding networks where it's possible to have vertices at which more than two edges meet.
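The fact that a sphere cannot be tiled with squares alone follows from Euler's formula. A small check (my own illustration, with a hypothetical function name): for any closed quadrilateral tiling of a sphere, the total deficit of vertex degrees below four must be exactly 8, which the cube realises as its eight three-fold corners.

```python
def quad_tiling_defect(V, E, F):
    """Total vertex-degree deficit, sum over vertices of (4 - degree),
    for a closed tiling by quadrilaterals.

    Since the sum of all vertex degrees is 2E, the deficit is 4V - 2E.
    With Euler's formula V - E + F = 2 and 2E = 4F (every face has four
    edges), this is forced to equal 8 on a sphere, so a defect-free
    'square lattice' sphere is impossible.  (Illustration only.)
    """
    assert V - E + F == 2   # sphere topology
    assert 2 * E == 4 * F   # every face is a quadrilateral
    return 4 * V - 2 * E

# The cube: 8 vertices, 12 edges, 6 square faces
cube_defect = quad_tiling_defect(V=8, E=12, F=6)  # eight unit defects
```

The analogous argument with triangles gives the familiar twelve five-fold disclinations of icosahedral shells.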
You can't use the ordinary Jones polynomial and so on, but there is a whole, reasonably well-developed mathematics of topological invariants for graphs as well, and we're exploring that in the context of chemical bonds. Unfortunately, neither of those projects is sufficiently advanced to talk about here, but I hope at some point in the future I'll have the chance to present them. One thing we have managed to get out is a tutorial review on the application of knot theory to chemistry, which has just come out in Chemical Society Reviews. We've taken a rather different cross-section of knot-related topics from the reviews that are already out there, so if you have new students starting in this area, they might find it a useful introduction. We were restricted to 50 references, which makes life pretty difficult, but actually a very large number of people in this audience are mentioned in this short review. Just quickly to summarise: I've shown you a system where, perhaps surprisingly, a knot can be the energetically optimal state of a cluster; this arises from a compromise between competing effects, and the chain that hosts the knot has to form at the same time as the topology itself. We see some pretty complex knots. Torus knots and other high-symmetry structures prevail, but there's a great variety of knots out there, as David was alluding to, that we don't see. We've only got one parameter here, and we're doing pretty well by tuning just the dipole moment strength: we see a great variety of topological structures. But I think it's a very interesting question which knots are designable and which are not, and maybe we'll hear a bit more about that this afternoon. We've taken a global view of the energy landscape using a toolbox that we had to develop for the purpose, and we've identified some elemental rearrangement mechanisms. I suppose the whole thing is only possible because of the directionality of the interactions.
You can think of a dipolar particle as a sphere with two patches, a plus patch and a minus patch, and it just goes to show the complexity of structures that can emerge from anisotropic interactions of that sort. Let me also point towards Ivan's work: I think he'll tell us a bit about patchy particles and knot formation later in the day. But that's it from me. I'll stop there and be happy to take any questions. Thank you.