[unintelligible] ... If I give you an expert in art, if I give you an expert in literature, could they tell the difference between something a computer has produced and something a human has produced? Can the computer step in and take the place of Mozart, Bach or Beethoven, or Shakespeare, or Picasso? That's the sort of goal we are working towards, and we can see that we are nowhere near it. Can computers be creative? Well, we can tell computers what to do. We can tell them to add two numbers. We can write simple computer programmes to do this. A human can do this too. Computers can also do much more complicated calculations. This is a simple C programme to calculate the prime factors of the number M. [unintelligible]
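The C programme shown on the slide isn't captured in the transcript. A minimal sketch of the same idea, prime factorisation by trial division (written here in Python rather than C, with the argument named m after the talk's "number M"):

```python
def prime_factors(m):
    """Return the prime factors of m (with multiplicity) by trial division."""
    factors = []
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:           # whatever remains is itself prime
        factors.append(m)
    return factors

print(prime_factors(60))  # → [2, 2, 3, 5]
```

This is exactly the sort of task a computer excels at: the procedure is fully describable, step by step.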
[unintelligible]
[unintelligible] Whoever makes your processor has a master blueprint from which they fabricate all of their processors. In the brain there's no such master blueprint, and the connectivity can differ from one person to the next. Computation in an electronic computer is highly deterministic, whereas in a brain it's highly randomised, and that's very important. So, why is the problem of creativity difficult? Well, one aspect is this stochasticity and probabilistic computation. Von Neumann suggested that these random processes in the brain, the random inter-arrival times of pulses on neurons, [unintelligible]. One hypothesis for what might be going on with this probabilistic computation is that humans, whenever they are faced with a new piece of data, whenever I am looking at a new image, something in my brain is generating a large number of hypotheses for what this new image could be. And then I am sort of entertaining all of these different hypotheses, generating them almost at random, exploring a very large search space of possible hypotheses and gradually narrowing down what it could be until I arrive at a conclusion. The conclusion might not always be correct. Optical illusions exist. It's easy to fool my brain into thinking that something's true that's actually not. That might be a byproduct of this large search process, perhaps. We've said that computers are very good at tasks for which we have a large degree of cognitive penetrance. If I can describe how to do a task, I can tell you exactly how to integrate a function like x squared.
Add one to the power and divide by the new power: the integral of x squared is x cubed over three. A computer can do that very efficiently. Computers are very good at evaluating large integrals, much better than humans in fact. I can't describe how I might write down a tune that comes into my head. I can't describe how I might write a poem, to carry on from the previous talk. These are tasks that the computer is much less good at. Before we go on, it's worth saying as well that maybe the human thought process isn't the only way of being creative. Maybe there are better ways of being creative than the process that humans go through. But the human creative process is kind of the goal of this area, so if we start there, look at the system that we have that does exactly what we want it to, and maybe get some inspiration from it, perhaps that's a good place to start. How are we going to start? How can we go about emulating human thought? The overarching principle here is that we want to keep things as general as possible until as late as possible. The human brain is very good at a huge variety of tasks. It's good at composing music. It's good at writing literature. It's good at painting artistic pictures. We don't want to specialise into one application domain too early. We're going to design a general algorithm that can sort inputs into classes, if you like. It can do object recognition: if I give you an image of an object, it can tell you what that object is. If I give you a piece of music, it can tell you what each note in the piece is. Then I'm going to try and invert this process. Do the opposite. If I give you a description of an object, can you paint me a picture of that object? If I give you a note in a musical piece, can you generate that note? If I give you a genre of music, can you generate a sample in that genre of music? There are a few different approaches to how to do that. It largely depends on how we designed the algorithm in the first step. Let's go on and see a few examples of how this might work.
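That power rule is exactly the kind of fully describable task a computer handles well. A tiny sketch (the function name integrate_power is mine, not from the talk):

```python
from fractions import Fraction

def integrate_power(n):
    """Integrate x**n by the power rule: add one to the power,
    then divide by the new power.  Returns (coefficient, new_power)."""
    new_power = n + 1
    return Fraction(1, new_power), new_power

coeff, power = integrate_power(2)
print(f"integral of x^2 = {coeff} * x^{power}")  # → integral of x^2 = 1/3 * x^3
```

Contrast this with "write down a tune that comes into my head": there is no such rule to write down, which is the cognitive-penetrance gap the talk is describing.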
We'll use neural networks as our starting point. Neural networks are by no means the only algorithm that we can use to be creative. People get excited about them because they've had very good results in a large number of applications, but they're by no means the only way to move forward in this area. They're just an algorithm like any other: a very good algorithm for classification and some other tasks, but not the only way of doing things. We're going to take some inspiration from how the human brain works, in that we're going to exploit the highly connected aspect of it. We've said the circuitry in computers isn't highly connected, but we can simulate this high degree of connectivity in software, at a higher level if you like, by implementing neural networks as an algorithm on top of our traditional computing hardware. Different kinds of neural networks are suited to different tasks, so we'll look at some simple neural network architectures and then we'll extend them, adding some knowledge from application areas in order to make them more effective in those applications: convolutional neural nets for image processing, recurrent neural nets for looking at sequences. Fundamentally, a neural network is a supervised learning algorithm. That means we have to give it a number of training examples: we tell it what the correct answer is to a number of classification problems, and from that, it learns how to do further classifications. We go through this training phase. We sit in the lab and wait for our neural networks to train. It might take many days or many weeks; on modern neural networks like WaveNet, many months. Once it's trained, we've got a very efficient process for classifying new inputs. The training phase might take a while, but then I can put my neural network out in the open and just give it a piece of data, and it will classify that data as needed. Neural networks are very good when there's lots of training data available.
This is a general pattern in lots of applications, and if we don't have lots of training data, then perhaps neural networks aren't the right approach, or perhaps we need to go through some process to try and get more training data, and we'll look at some ways we can do that. The simplest building block, if you like, of a neural network, and sort of a precursor on its own to neural networks, is the perceptron. People tried to build these in hardware back in the 60s and 70s. Now we tend to use more than one of them and connect them together into a large neural network. The key idea (can you see if I point with my mouse? Is that showing up on the screen?) is that I have a load of data items. This is my vector x, if you like. This describes features in my input. Then I've got some linear operation that does some preprocessing on this data input: it multiplies each input by an associated weight and adds them together. Then I've got this nonlinear decision-making operation that compares the result of this dot product to some threshold and then outputs a decision. This function might output one if the result of the linear operation is above the threshold, or zero otherwise. The goal of the perceptron during the training phase is to learn this function F. Once I've learned this function F from a series of labelled examples, I can then go and make decisions using it. We've got the two phases: training, and then application. I'll just work through a really simple example quickly so we get an intuition for how this works. My favourite example is the pet-aware household intruder detection system. We've got inputs of infrared sensors and weight pads; these give me the height, weight, speed and colour of the fur or clothes of the human or pet. The goal is that it should set off an alarm if a human is trying to enter a property, but it should ignore pets. It should output whether the intruder picked up by the detection system is a pet or a human.
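The decision rule just described, a weighted sum compared against a threshold, can be sketched in a few lines. The weights and features below are made-up illustrative values for the intruder-detection example, not trained ones:

```python
def perceptron(x, weights, threshold):
    """Linear combination of the inputs followed by a hard threshold:
    output 1 if the weighted sum exceeds the threshold, else 0."""
    weighted_sum = sum(xi * wi for xi, wi in zip(x, weights))
    return 1 if weighted_sum > threshold else 0

# Hypothetical features: height (m), weight (kg), speed (m/s)
human = [1.8, 75.0, 1.5]
cat = [0.25, 4.0, 3.0]
w = [1.0, 0.1, 0.0]               # illustrative weights, not learned
print(perceptron(human, w, 5.0))  # → 1: human, sound the alarm
print(perceptron(cat, w, 5.0))    # → 0: pet, ignore
```

In the real system those weights would come out of the training phase rather than being picked by hand.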
I need to start off with a load of labelled examples. I have a number of examples from adults and a number of examples from pets. Then I put all this into the training algorithm. In practice, if you're doing AI, this is just an API call. There are really, really good APIs for doing machine learning, TensorFlow, Keras, and we don't actually have to understand how this training algorithm works. It might be interesting to know how it works, but there are lots of people out in the real world using neural nets who just treat this training process as an API call, so that's what we'll do here. Then the output from this training algorithm is a series of weights that I can then use to classify further examples. Those weights go on the arcs here, and then I can feed in new examples, with height, weight, speed, and fur colour. Then I've got this system that will tell me whether the inputs correspond to a pet or a human. A slightly more abstract example next, just to develop the formalism. I've got a number of examples in this 2D space, and I want to separate the orange examples from the blue examples. This element here represents my perceptron. If I go through and start the training process, then you can see it's very quickly learnt to draw a line between the orange and the blue examples; it's very quickly learnt to separate them. If I make the dataset more complicated, so perhaps I add some more regions to it, then the perceptron is going to fail to learn how to separate these two classes. You can see it's really struggling to separate the orange from the blue. Let's try and develop something that will separate the orange from the blue in that example. The perceptron only works if the inputs are linearly separable, and to get around this, we'll use connectionism and combine more than one perceptron together. This is what we might start to actually call a neural network: it's the multi-layer perceptron.
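If you did want to look inside that API call, the classic perceptron learning rule is only a few lines: nudge the weights towards each misclassified example. A sketch on a made-up linearly separable dataset:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge the weights (and bias) towards
    each misclassified example.  examples = [(features, label)], label 0/1."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
            err = label - pred                    # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Made-up linearly separable data: label 1 roughly when x + y > 1
data = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.7, 0.9], 1)]
w, b = train_perceptron(data)
pred = lambda x: 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
print([pred(x) for x, _ in data])  # → [0, 1, 0, 1]
```

The learned weights and bias define exactly the separating line the demo draws between the orange and blue examples.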
Each one of these circles is itself a perceptron, so each one of these has the linear combination of inputs and the non-linear decision-making process. I've got my input layer, which is exactly the same as before, the xs in the previous example, then one or more hidden layers, and then some output layer to aggregate the results and do the classification. Let's just have a quick look at how this works if I add some more neurons to my network. I might build up something like this, and now I can train the network, and the multi-layer perceptron manages to successfully separate the two classes. If I make the data even more complicated, then the network might take longer to train, or it might fail to do the classification at all. With this very complicated spiral data set, you can see that even this larger network is struggling, with quite a long training time. It's still failing, so I might have to add even more neurons, even more layers to my network, and I might eventually get to something that works if I left this training for long enough. You can see it's very slowly working towards a reasonable result. Right, so how to apply this? Computational musicology aims to answer questions like: given a melody, can the computer generate a reasonable harmonisation for that melody? Can the computer compose new melodies from scratch? Can we actually learn about music itself from artificial intelligence? Can we analyse the chord structure in a piece in a way that would just be too tedious for a human, going through and annotating all these chords by hand? Can we then compare genres of music using these metrics that we've extracted from the piece? So let's have a look at how we might go about doing this, and we've actually got all the tools that we need already with the multi-layer perceptron that we saw before; we can do a pretty good job. So what I'm going to do is address the problem of harmonisation.
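The standard illustration of why multiple layers help is XOR, a dataset no single perceptron can separate. A sketch with hand-picked weights (a real network would learn these during training):

```python
def unit(x, w, threshold):
    """One perceptron unit: thresholded weighted sum."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) > threshold else 0

def xor_mlp(a, b):
    """Two hidden units (OR-like and AND-like) feeding one output unit."""
    h1 = unit([a, b], [1, 1], 0.5)       # fires if a OR b
    h2 = unit([a, b], [1, 1], 1.5)       # fires if a AND b
    return unit([h1, h2], [1, -1], 0.5)  # OR but not AND = XOR

print([xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

The hidden layer carves the plane into regions that the output unit can then combine, which is exactly what the multi-layer demo is doing on the more complicated datasets.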
So given a melody, given note 1, note 2, note 3, all the way through my piece, and the harmony up to the current point in the piece, can I produce the harmony for the current note, if you like? Can I produce a chord that would sound reasonable with the current note in the melody? And yes, I can train my neural net on a series of examples of notes and chords and previous harmonies, and I can slide this neural net over the piece and gradually build up the harmony note by note as I go through the piece. And this works okay, but it has a number of fundamental issues. One of them is: how big do we make the window? How much context do I need? How far back in the piece do I need to go in order to produce the current chord to go with the melody? Really we want the network itself to learn this, and we don't want to have to fix the context, because presumably at the beginning of the piece we want quite a small context (we might not have any previous notes at all), and then as we go through the piece we might want to use more context. There's also this trade-off between how much context we use from the current example, the current sequence, and knowledge gained from the entire training set. The sliding window approach also restricts us to one output per note in the input sequence. This might be fine for harmony, but if we want to compose a melody, perhaps we don't want to have to output exactly one note every beat. We might want to produce multiple notes per beat, or a variable number of notes per beat, and sure, you can hack up the sliding window approach to sort of do these things, but it's not what we're doing as humans, so it would be nice to try and find a better solution. And the answer is the recurrent neural net. This maintains some hidden state as it goes through the piece.
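The sliding-window setup itself is straightforward to sketch: pair each note with a fixed-size window of the preceding melody. A trained network would map each window to a chord; here the windows are just printed:

```python
def windows(melody, size):
    """Return one fixed-size context window per note, padding the start
    of the piece with rests (None) where there are no previous notes."""
    padded = [None] * (size - 1) + melody
    return [tuple(padded[i:i + size]) for i in range(len(melody))]

melody = ["C", "E", "G", "E", "C"]
for ctx in windows(melody, 3):
    # a trained network would map each context window to a chord here
    print(ctx)
# the first window is (None, None, 'C'): no previous notes at the start
```

The padding makes the fixed-context problem concrete: the window size is baked in up front, and there is always exactly one output slot per input note.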
This is really the structure of the recurrent neural net: we've still got an input and an output, and if you unfold this loop then you get something that looks like this, the input at the current time, the input at the previous time, the input at the next time, and the associated outputs. It can maintain this state as the network goes through time, and you can train this network through time. And it's got this unit, well, many of these units, known as long short-term memory. The short-term memory refers to the context from the current piece, and the long-term memory refers to context from the entire training set, using this sort of more complicated series of operations. So instead of just having one activation function, one nonlinear operator as in our perceptron, we've got a few of them all working together to control how much knowledge comes from the previous notes in the sequence, and how much knowledge comes from the training set. And we can use this to compose music, harmonise music, and solve pretty much all of the tasks we've talked about. Here's an example from a paper from a few years ago using a recurrent neural network to compose melody. This network is trained on a corpus of classical piano music. It uses a symbolic representation of the music: it's trained on data corresponding to notes, not audio files, not WAV files; it's trained on MIDI files. And the result sounds something like this. We can see from that that it's picked up something about what humans find interesting in music. It's learnt something about melody, it's learnt something about what we find pleasing to listen to. But it did get stuck in a repeated chord structure at the end there. It's getting stuck in loops, and it hasn't learnt enough to be able to get out of those loops. Another system is BachBot. This is trained on a corpus of Bach chorales, and we'll have a listen to this.
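The recurrence at the heart of this is simple: the new hidden state is a function of the current input and the previous hidden state. A minimal sketch with random, untrained weights (a real LSTM adds the gating described above):

```python
import math
import random

random.seed(0)
HIDDEN = 4

# Untrained illustrative weights; training would fit these to a corpus.
w_in = [random.uniform(-1, 1) for _ in range(HIDDEN)]
w_h = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]

def rnn_step(x, h):
    """h_t = tanh(W_in * x_t + W_h * h_{t-1}): the hidden state carries
    context forward from every earlier step in the sequence."""
    return [math.tanh(w_in[j] * x + sum(w_h[i][j] * h[i] for i in range(HIDDEN)))
            for j in range(HIDDEN)]

h = [0.0] * HIDDEN
for note in [60, 62, 64, 65]:       # a MIDI-style pitch sequence
    h = rnn_step(note / 127.0, h)   # state updated note by note
print(len(h))  # → 4
```

Because h is threaded through every step, the window size is no longer fixed in advance; the network decides how much of the past to carry forward.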
That's really picked up some of the key aspects of the corpus, the chord structure, the melody, and it's composed that original piece of music from scratch. And BachBot can also harmonise existing melodies, as well as doing composition. A more recent example, and there was a talk on stage B just earlier on this, is WaveNet, which is what drives Google's voice generation system. There have also been some successful examples of using WaveNet to compose music. But instead of using a symbolic MIDI representation, it uses audio files themselves. So it learns something about what we're actually hearing, rather than our symbolic representation of the music. One limitation in computational creativity for music is the quantity of training data available, and this is really the thing holding composition up at the moment. In any one corpus, you have an order of magnitude too few chord progressions, really, to extract the essence of that corpus. But another area that doesn't have this problem is computer vision and image processing. Here, we have orders of magnitude more training data, and we get a lot more successful results as a result. We do need a slightly different structure of neural net. We could use the multi-layer perceptron exactly as before, but we'd have a lot of neurons, and we wouldn't exploit anything that we know about images. For example, we'd like to be able to recognise a feature regardless of where it's positioned in an image, so we put some of this knowledge into the structure of our neural net to achieve this. One system that uses this is Google's DeepDream. The idea is they train a network that can recognise features in an image, and then they sort of use the network in reverse. They'll train the network to recognise features, then put in a new input image, and then change the input image: modify the image at the input until it gets a good response at some output class.
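The deep-dreaming loop just described is gradient ascent on the input: instead of adjusting weights, you nudge the image until a chosen output responds strongly. A toy sketch, with a made-up quadratic "neuron response" standing in for a real trained network:

```python
def response(image):
    """Stand-in for a neuron's activation: peaks when every pixel is 0.5.
    A real system would use a trained network's face-recognising neuron."""
    return -sum((p - 0.5) ** 2 for p in image)

def deep_dream(image, steps=100, lr=0.1, eps=1e-4):
    """Gradient ascent on the input image via finite differences:
    repeatedly nudge each pixel in the direction that raises the response."""
    img = list(image)
    for _ in range(steps):
        for i in range(len(img)):
            bumped = img[:i] + [img[i] + eps] + img[i + 1:]
            grad = (response(bumped) - response(img)) / eps
            img[i] += lr * grad
    return img

dreamed = deep_dream([0.1, 0.9, 0.3])
print([round(p, 2) for p in dreamed])  # → [0.5, 0.5, 0.5]
```

The trippy images come from doing exactly this against a deep convolutional network's feature-detecting neurons rather than a toy quadratic.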
So if you want the image to have more face-like features in it, for example, you'd find the neuron that recognises face-like features, and then modify the input until that neuron has a high response. And this generates these sort of trippy, psychedelic images. Here it's got a wavy texture. This is my input image, and this is the output after 20 iterations of this process of deep dreaming. So it's done something artistic to the image. Here it is where the original dreaming network was trained on a different set of images, and it's put some higher-level features into the image. Here's another image. I'm not sure if you can see this, but it's recognised a branch-like structure and replaced the branch with some snake-like feature in the output. I don't know if that shows up on the projector or not, but it's doing much more than superimposing images. It's really learning something about the structure and modifying the output in an interesting way in order to respond to that structure. And some people have left DeepDream running for much longer on the input images than I have and come up with these really quite artistically impressive images. The final thing I want to briefly touch on is style transfer, which is another very general approach which could be applied to any area, music, images, literature. It takes one example of content (an image, a poem, whatever) and sort of a style that you want to put into that content, and in quite a novel and general way it combines these features. You can try this yourself if you grab that URL from the recording; the code's all in Python and accessible. I can take my input image of a squirrel and some style that I want to try and redraw this input image in, and I get this quite neat output. Here it is with a different style. Here's evil squirrel getting back at me for using him as an example. And yeah, these results are quite artistically pleasing, quite effective.
I can do the same thing with a different content image but the same style images, and again I get quite an artistically pleasing image each time. People have also done this for text, to invert sentiment: turning "I would recommend finding another place" into "I would recommend this place again", going from a negative to a positive sentiment, and in the reverse direction, going from positive to negative, "really good food that is fast and healthy" into "really bland and bad and terrible". So again, it's not perfect English, but it's identified the key features here. In conclusion, there's still a long way to go, but we've come up with some things that we find pleasing to look at or listen to. Ultimately, is the computer being creative, or is the neural network sort of the new 21st-century paintbrush? Is this another tool for humans to express their creativity? Is the human who designs the algorithm being creative, or is the computer being creative itself? Perhaps that's an open question to discuss, but at least we've got some insights into possible algorithms and techniques to carry on from here. So thank you very much for listening. We should finish there to let them set up for the video, but I'll be around if anyone has any questions, or feel free to drop me an email. Thank you very much. So that was computational creativity with Matthew Ireland, and he'll be stepping outside for questions if you have any. And whether or not you have any questions, something that you can do that will help make EMF even better is volunteering. You can go up to the volunteer tent or you can sign up online. EMF Camp is put together entirely by volunteers, so you and me and everyone else who's here is how we make this all run. If you volunteer for three hours, you get food. If you volunteer for less than three hours, you get our infinite gratitude and appreciation.