So, really, this equation here is just these three lines of code. This is a simple neural network, done in LabVIEW; I used LabVIEW for many years, so that's what I'm used to seeing. Interestingly enough, popping it into GPT-4, it's able to explain what it is we're looking at. It says the image you have uploaded appears to be a schematic representation of a simple neural network, a neuron model used in machine learning and artificial intelligence. Yes, we know that. It shows the basic components of a neuron, including inputs and weights. It can just read the text, that's fine. Bias, activation function, in this case a sigmoid function. It must just have read that title in the output. By the way, yes, in LabVIEW, by default, a function like that will not have a title, so go figure what it actually means. That one is also not absolutely clear, but all the rest is very straightforward. Not sure why not just use a simple maths equation instead. So, it's giving a brief explanation of each part. You have the inputs to the neural network, the weights, with a general explanation of what weights are. A bias, analogous to the intercept in a linear equation, which is used to adjust the output along with the weighted sum. And then the sigmoid: this is a type of activation function that is commonly used in neural networks to introduce non-linearity into the model. It maps the input values to an output range between zero and one, and the output is the final output of the neuron after the weighted inputs have been summed with the bias and passed through the sigmoid activation function. And it does recognize that this is actually LabVIEW, because it says the layout and visual style suggest it might be created with specific software for educational or simulation purposes, possibly something like LabVIEW. Okay, it's not entirely sure, but yes, it is LabVIEW, which is often used for creating simulations of systems and control models.
Now, if you've used LabVIEW a lot, you will be familiar with this, but most people have no clue what LabVIEW even is. So, can GPT-4 translate this LabVIEW back end into maths, which is way more common? Surprisingly, or not surprisingly, let me know in the comments, but apparently, yes, it can. This seems to be a legit equation, and it looks much neater, much nicer, quicker: one line. Yes, you need to know what each component is, but there is a quick explanation. The output is represented by a sigmoid activation function, and there's a separate definition of the sigmoid activation function, which is the standard textbook equation of a sigmoid function. This bit represents the weights corresponding to each input, that bit represents the inputs themselves, plus the bias is the bias, and n is the number of inputs. So pretty straightforward; I'm not sure you can get this much information from the LabVIEW back end above, and it gives you a summary as well. It also says that the sum inside the sigmoid function is a dot product of the weights and inputs, which, when added to the bias term, yields the neuron's total input; the sigmoid function then transforms this input into an output between zero and one, which is the final output of the neuron. So that sounds great. One note, though: the sigmoid activation function has all sorts of problems with it, so there are many other activation functions that are worth looking at. GPT-4 is able to translate this into Python, and again, I used LabVIEW for many, many years, but this Python looks pretty straightforward. It has the basic equation for the sigmoid. GPT-4 is also able to convert the maths into Python, and even though I've been using LabVIEW for years, the Python code is much cleaner, more straightforward.
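In standard notation, the one-line equation being described is something like the following (reconstructed from the description above, so the exact symbols are mine, not necessarily GPT-4's):

```latex
y = \sigma\!\left(\sum_{i=1}^{n} w_i x_i + b\right),
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}}
```

Here the $w_i$ are the weights, the $x_i$ are the inputs, $b$ is the bias, $n$ is the number of inputs, and $\sigma$ is the sigmoid activation function with its separate textbook definition.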
It actually shows you what's inside this box here, what's inside the sigmoid function: it computes the dot product for the weighted sum plus the bias, and returns the sigmoid function applied to the weighted sum. Then you can have your inputs and weights defined as whatever you want, set your bias, and off you go, you have your output. So really, this equation here is just these three lines of code. This bit here doesn't even show you what's inside the sigmoid function, which could, again, be another very simple equation for all of this bit. Interestingly, when asking GPT-4 to generate LabVIEW code again, it generates this wicked image that means absolutely nothing, unless it means something to someone else; not me. Let me know in the comments. But yeah, so GPT-4 was able to translate this image into simple maths, which is the gold standard, the most common notation that most people are familiar with. There's nothing overly complicated in this equation: a sum, a dot product, and a sigmoid activation function that has a separate equation. And GPT-4 is able to translate this equation into Python in a very straightforward way; that's probably because there are many examples on GitHub and other websites. It was not able to translate back into LabVIEW, though. I was kind of surprised it was even able to recognize LabVIEW code, which was quite interesting, so this example must be sitting somewhere on the web on which the image models were trained as well. So there you go. I might eventually turn something similar into a web application.
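The "three lines of code" described above can be sketched roughly like this; a minimal sketch, where the function and variable names and the example values are mine, not taken from the video:

```python
import numpy as np

def sigmoid(z):
    # Standard logistic (sigmoid) function: maps any real input to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # Dot product of weights and inputs, plus the bias, passed through the sigmoid
    return sigmoid(np.dot(weights, inputs) + bias)

# Example: three inputs with illustrative values
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, -0.2])
b = 0.1
print(neuron(x, w, b))  # prints ≈ 0.4013
```

Define your inputs and weights as whatever you want, set your bias, and off you go: the output is a single number between zero and one.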