All right, welcome, everyone. This is something that I started to work on in the last few months, and it's really a series of lessons that I learned while trying to make my website talk in different ways, using different approaches. So today I hope I will be able to pass on the lessons I've learned. Of course, feel free to interact as much as possible. This is really an open session. I have learned things by doing experiments, and I'm sure that in the audience we have other experience worth sharing, so I'm here for that. That's usually how we get the best out of these workshops. I help publishers create sustainable organic growth using semantic web technologies. And this is a little bit of the typical lift that the technology we created brings to a website. You can see here the results of the organic traffic compared to 160 other websites in the travel industry in Austria. We started a little bit below the average, and then after three months we started to see the lift that comes from creating data that computers can use to build new services. I also love, and that's probably one of the reasons I'm here, to experiment with new ways to interact with web content, and I started to use artificial intelligence in the last five years of my experience on the web. These are some of the metrics. When I started to work on semantic web technologies back in 2008, 2009, it was still very hard to justify the results, the return on the investment of creating the infrastructure for publishing data. Now, in 2018, we finally have enough data, enough metrics, to prove that more metadata, more structured content, helps search engines, smart agents, personal assistants and, why not, chatbots bring more traffic to your site. These are some of the metrics that I was able to measure on a design blog from Poland. This is research that I published last year and presented at a conference in Amsterdam.
My name is Andrea Volpini, as I was introduced. I am the CEO of a company called WordLift that uses AI to automate structured data markup. I've been doing web work and working as an entrepreneur on the web for the last 20 years, so I've been around for quite a long time. And you can ask Google about this, and Google will respond. Actually, you can also ask Bing something more personal, like about my mom and dad. So I've also been experimenting with knowledge graphs across these many years of work in the semantic web world. A lot of things have changed in these last few years, and that's really the ground basis of what we are going to talk about today. The workshop is divided into three sections. The first section is a little bit of an introduction to linguistic AI. There's a lot of talk about artificial intelligence these days; I am focused on what is called linguistic AI, which is the area of the technology that covers the structuring of content and the organization of knowledge. Smart content and structured data are basically the by-products of these activities, of this development in the AI world. I'm particularly focused on publishers, so bloggers and news and media editors. Of course, we are also starting to work with shop owners, but this presentation is more for people that have editorial content and want to create new ways of interacting with this content. The second section is about what I call conversational design 101. I learned it the hard way by making experiments. There is a lot of literature these days which is very valuable for conversational design; I'm just going to go through the main mistakes that we made with these experiments. And then we're also going to talk a little bit about voice search, which is the way these technologies are becoming more and more common among our users. Because yes, of course, there are 500 million Google Assistant-powered devices in the world today.
But most of the traffic that we see is coming from web users using their voice to make queries to the search engine. That's really what voice search is. I'm going to introduce the metrics that we can measure when we create a conversational interface and when we start seeing traffic coming from voice devices to our website. Then I'm going to show you a little bit of the back end of a Google Action. A Google Action is an application that you can create to provide interaction through the Google Assistant. And then we're just going to wrap it up. Yes, you can ask me as many questions as you want; just raise your hands. We're going to leave it as freestyle as possible, and of course, if something is not clear, please stop me and ask. You can go to this website and use the code 174709. I have prepared just a few introductory questions so we can get to know each other a little bit. I wish we could give everyone the space to present and introduce himself or herself, but this is the way we're going to do it. So if you go there and provide the answers, then we can start looking at it, and we can come back to it later. All right, so we're ready to roll. Let me get this onto the screen. Yeah, there you go. That's interesting. Let me understand a little bit more about your background. OK: content editors and publishers, digital agencies, of course developers. That's what we expect at WordCamp Europe. Web designers, entrepreneurs, startuppers, anything else? All right. Content editors on the rise. Developers are the biggest community, of course. Wow, that's good. As people continue with this, I will put up another simple question about the background that you have, so that I can understand how deep I can get. And search engine optimization is a big topic.
It's actually what I realized back in 2011, when the search engines decided to agree on a standard called schema.org to design a system for describing content. After many years in the semantic web, I was working on my own CMS. At that time I wasn't at all involved in the WordPress community; I'm fairly new to this community. I had my own agency, and we had our own CMS, like many agencies did, especially back in the day. We're talking about 2006, 2007. Then we started to look at ways of organizing content because we were managing the website for the Italian parliament. So we had a lot of web pages, we had a lot of laws, and we had millions of users coming to this website every day. We needed to organize the content properly. So I started to investigate semantic technologies back in 2008, 2009, even a little bit before that. But then it all turned into SEO for me, when this technology became more and more connected with the way that search engines interpret human language. All right, so yes, I think we have a terrific audience with all the knowledge that we need to move forward, and there is probably a lot that I can learn from you, so I hope we can get this rolling. All right, so one very basic lesson is that any AI system needs reliable data. Whatever type of machine learning approach you're using, whatever type of neural network you're trying to configure, you are going to need data, and you're going to need a lot of it. And when this data becomes semantically structured, it gets way easier to build a system that works. That's exactly what search engines are doing these days. Machine learning has been introduced heavily with the arrival of RankBrain in Google, and there are a lot of different mechanisms at play nowadays when we run a query on a search engine like Google or Bing.
The reality is that these systems desperately need data, because the way they work is by training models using structured data. When we talk about AI, we usually talk about systems that have to replicate what the human brain does. A very simple definition of AI is a system that replicates the functions of the human brain. There are different functions and different cognitive capabilities that the brain achieves. These are the five areas where we see a lot of the research going, and they are really also five areas in which we have been studying the human brain for many, many years. Perception: understanding a visual object. Motion and manipulation: understanding where the car should go. Natural language: understanding what the content of an article is. Memory and emotion: deciding what mood a user is in when they're writing a comment. Reasoning and planning: trying to understand whether a statement is true or not, which requires reasoning. And you will see different technologies in the AI world. Right now we talk about artificial intelligence, but this is really an umbrella term for something way more diversified; it's a universe of different fields of application. Our field of application is natural language processing, which is a specific area within so-called linguistic AI. In reasoning and planning you find the big platforms like Einstein or IBM Watson, where you can have different areas combined and run queries on top of the data that you put in. You have, of course, things like Siri and the Google Assistant, and then companies like Affectiva that are starting to understand the mood of the user by looking at the context. Where is the user? Is the user in the kitchen? Then he's looking for a very quick response when he's asking how to make, I don't know, pasta all'arrabbiata; that's all I can make. Or he's in front of his laptop, running a query asking for places to go in Belgrade.
And then, of course, perception. We start to see perception being applied directly on smartphone devices that can now unlock the screen just by looking at our face. And then motion and manipulation, from a Tesla car that drives autonomously to an iRobot that can vacuum your room. These all fall into what we call AI, but they are really extremely diversified areas of technology and development. So at the real basis of any AI system there is data. There is computing power, which wasn't available many years ago. And then there is data science, because this data has to be not only curated but also organized in such a way that we can create models to classify an image, or to predict what the number of visitors on my site is going to be, or whether this is the keyword that will grow in the next three months or not. So you need to have a lot of data, you need to have a lot of experience in curating this data, and then you need enough time to create your own models. In linguistic AI, and in many of the applications we see today, including of course conversational user interfaces, we will see these three technologies combined. Sometimes you see just one; sometimes you see a combination of one plus another. Again, they are very different from each other: different algorithms, different branches, different areas of research. Natural language understanding is something that we can use, and it's a very challenging area, for instance, for creating a summary. A summary can be extractive, meaning that I can take the most relevant parts of a corpus of text and make a summary just by picking out the sentences that are most meaningful for representing the entire thing. That's natural language understanding. It's the same technology that is used for understanding the query that we trigger on the search engine.
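To make the extractive idea concrete, here is a minimal sketch of a frequency-based extractive summarizer: score each sentence by how common its words are in the whole text, then keep the top-scoring sentences in their original order. The function name, the scoring formula, and the sample text are all illustrative, not how any particular production system works.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Score each sentence by the frequency of its words and keep the top ones."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        # Sum of word frequencies, normalized by sentence length.
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)
    top = set(ranked[:n_sentences])
    # Re-emit the selected sentences in their original order.
    return ' '.join(s for s in sentences if s in top)

text = ("Structured data helps search engines. "
        "Search engines train models on structured data. "
        "My cat sleeps a lot.")
print(extractive_summary(text, 1))
```

Sentences that share the text's dominant vocabulary win, which is exactly the "pick the most representative parts" behavior described above; abstractive summarization, which generates new wording, is a much harder problem.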
Natural language processing is another set of technologies that includes things like part-of-speech tagging: algorithms that help us segment a sentence and say, OK, this is a noun, or this is an adjective related to that noun, and things like that. And then, for instance, named entity recognition: detecting a person, a brand, a company in a text. That's entity recognition. This is also, of course, part of what search engines do when we trigger a query like, who is the CEO of Microsoft? They have to understand that Microsoft is an organization and that CEO is a role in an organization, and then they can go and fetch the data. So that's natural language processing. Natural language generation is probably one of the most advanced fields, because it's when the computer has to generate, for instance, a new summary, not by reusing the words of the input text but by creating a model that generates a completely new text. There are very interesting experiments at the moment, but it's very challenging, very hard, to find abstractive text summarization that works. Very complicated. With extractive summarization, yes, we can get good results. For abstractive text summarization, there are some recent papers from Google presenting very interesting results, but it's still a cutting-edge area. Machine learning in a nutshell: what did we do when, in our case, we wanted to create a tool that would automate SEO? We started with a specific area, which is named entity recognition, and we had to create our own model. So what did we do? We had data. The data that we used in our case for creating an NLP pipeline that worked across multiple languages was Wikipedia, because it's open and it's in multiple languages. So we started to create a model by training our NLP using an openly available version of Wikipedia, which is called Airpedia.
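The named entity recognition step can be illustrated, at its very simplest, as a dictionary lookup over known entity names. A real system like the one described here is a trained statistical model, not a hand-written list; the gazetteer below and the entity types are purely illustrative.

```python
# Toy gazetteer: in a real NLP pipeline this knowledge comes from a model
# trained on data such as Wikipedia, not from a hand-written dictionary.
GAZETTEER = {
    "Microsoft": "Organization",
    "Tesla": "Organization",
    "Andrea Volpini": "Person",
}

def recognize_entities(text):
    """Return (surface form, type, character offset) for each known entity found."""
    found = []
    for name, etype in GAZETTEER.items():
        start = text.find(name)
        if start != -1:
            found.append((name, etype, start))
    return sorted(found, key=lambda e: e[2])

query = "Who is the CEO of Microsoft?"
print(recognize_entities(query))
```

Once Microsoft is typed as an organization and CEO is understood as a role, the search engine can go and fetch the answer from its graph; the lookup here only shows the detection step.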
When you create the model, you can then create APIs that a developer can use for extracting whatever the model is capable of extracting; in our case, named entities, so things like SEO, Microsoft, or Tesla as a brand, and so on and so forth. Now, when we move into the world of SEO and search engines, this is one of the ways that AI is used by search engines to assess the trustworthiness of a piece. We have all seen the effects of fake news and of what happens when people start to manipulate information by publishing statements that are not verified. So how do we verify statements? For instance: Andrea is the CEO of WordLift. That's a statement, right? Andrea, CEO, WordLift. How do we verify it? In 2015, Google published a very interesting research paper presenting the concept of knowledge-based trust, which is really a place where Google stores all the statements. So if it finds a statement that says Andrea Volpini is the CEO of WordLift on one website, and then it finds it on another website, it starts believing that this is true. It must be true; I mean, it's on two websites. And of course, the more authoritative these websites are, the better it works. That's why, when we want to create something that gets into the knowledge graph and therefore enters into voice search, we want to create it in a consistent way across multiple sites, and we want to make it easy for the crawler to find all the different co-occurrences of the statement. We do this by interlinking, by creating links in the metadata to other giant graphs like DBpedia or Wikidata or the Google Knowledge Graph itself. We want to help the search engine understand that what we say is true. In a way, it's like doing backlinks in the old days, but with data. Go for it. Say again? Yep. That's really a new frontier that is still on the way.
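The co-occurrence idea behind knowledge-based trust can be sketched as counting how many independent sites assert the same (subject, predicate, object) triple. The site names and triples below are invented for the sketch, and the real paper uses a far more sophisticated probabilistic model; this only shows the "seen on more sites, trusted more" intuition.

```python
from collections import defaultdict

# Hypothetical statements extracted from different websites,
# each as a (subject, predicate, object) triple.
extracted = {
    "site-a.example": [("Andrea Volpini", "ceoOf", "WordLift")],
    "site-b.example": [("Andrea Volpini", "ceoOf", "WordLift")],
    "site-c.example": [("Andrea Volpini", "ceoOf", "Acme")],
}

def statement_support(extractions):
    """Count, for each triple, on how many distinct sites it appears."""
    sites_per_triple = defaultdict(set)
    for site, triples in extractions.items():
        for triple in triples:
            sites_per_triple[triple].add(site)
    return {t: len(sites) for t, sites in sites_per_triple.items()}

support = statement_support(extracted)
# The triple asserted consistently across more sites earns more trust.
best = max(support, key=support.get)
print(best, support[best])
```

This is also why publishing the same statement consistently, and interlinking it with graphs like Wikidata, helps: it makes the co-occurrences easy for the crawler to find and count.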
But a lot of this knowledge, much like the crawling index, is duplicated, and if we had a blockchain for these statements, that would be a complete revolution, absolutely. Go for it. Well, if you go through this paper, you will see that they use a mathematical equation which is not so far from looking at the ledger and saying, yes, this has been shared already by so many systems. So in the linked data world there is now a lot of attention going towards blockchain, because of course it would greatly simplify the infrastructure if we could share the information that is otherwise spread across these different knowledge bases. Yes, at the moment they are very separate. But in the research field there is a tendency now to look at blockchain as a solution for deduplicating statements and also for sharing this knowledge across multiple systems. Yeah, this model? This model, yes. I mean, you could see that I could teach it about my mom and dad, and if I had put in a false statement, it would be there. So yes, like any system, there is a flaw. Of course, as we get more data, and as the data gets more structured, creating fake news gets more complicated. But as you can see, it's still fairly easy. We can still give the election to Trump, or things like that. It's still very easy to manipulate systems. One way forward, for sure, would be blockchains, because of course you have to have provenance, as we say. Who is saying that? That's the main issue. A provenance statement is always missing, or most of the time is missing, especially if you go into these messy data sets like Wikidata. Who would say that my mom is called Anna? I don't know. There's no provenance. So provenance is one of the things that, for instance, a signature from a blockchain could somehow help with. There is a lot of work, but yeah, we're still far away.
We can still say a lot of fake news. Technical SEO: a big term if you are in the SEO world. That's a little bit of a stretch, but we see Google going more and more towards structured data, linked data. As I said, when I started telling clients in 2008, I want to experiment with semantic web technologies and your ranking will go up, people looked at me like I was crazy. Some people even gave me money, but it was very hard to prove the return on the investment. These days we have initiatives like Accelerated Mobile Pages and structured data, and all the different variations that we see in the search engine results pages. There are, I think, now up to 37 different ways to display a result on a SERP, from flight information to a recipe, from a knowledge panel to a map. There are so many variations of results in the SERP that everyone now starts to understand how helpful the data behind them is. So linked data is really one of the key elements of the new technical SEO, and we can prove it by the numbers. Google has recently presented three use cases on their website showing that a website like Eventbrite, by adding structured data on their event pages, has grown the organic traffic on those pages by 100%. Google itself is presenting this data, so I don't even need to do my homework in proving that this technology works, because Google is doing it. And the reason Google does it is that AI needs it. And that's where we come in. So we're going to get into some practical aspects of structured data markup and AMP. How many of you are familiar with AMP? Wow, everyone is familiar. That's good. How many of you are against AMP? A few? OK, good. Why are you against it? Yeah. It is still a very complicated issue, I do agree with you. I always say to clients, let's think it through before we do it. It is also true that in some countries, like Italy for instance, it does bring a lift in traffic.
You move to the US, and it doesn't even work that way. So depending on the country you live in, I've seen AMP react in different ways, and it is still an investment that in some cases is justified and in some cases is not. I do recommend it, because if you want to get into this new world of voice search and actions and conversational searches, then yes, I strongly recommend it, because I can see the impact of using AMP. But yes, it's still complicated. Google Actions: is anyone familiar with them? No? By the end of the workshop you will be familiar with Google Actions. So you'll come back home and you'll say, I want to do my Google Action too. That's my goal for today. All right, so first message: focus on the data, not on the AI. One of the three messages that I'm going to leave you with today is this. There is a lot of talk about AI and AI-powered content marketing, and then people start to think about the technology, which yes, is interesting and important, and you have to familiarize yourself with the different frameworks, TensorFlow and the different areas and fields that we've seen before. But really, as an editor, as a publisher, as an SEO specialist, as an agency, you should focus on the data before even thinking about the different AI systems and so on. Everything clear so far? We're good? OK. All right, so linguistics and semantics. This is my to-do list for creating content that works with voice search. If I had to give you one slide to bring home and start experimenting with, this is the one I would recommend. Featured snippets are still strongly driving voice searches. There is still a lot of experimentation on featured snippets from Google itself, so the results are very volatile; they come and go, you get them, then you lose them. But as a lot of the SEO world knows, these things are getting more and more consistent. And optimizing for featured snippets has to do with looking at long-tail keywords.
What is a long-tail keyword? Someone give me a definition. Go for it. Good. "Queries that are more descriptive" is the part I like the most, because the other part, yes, it might be competitive or it might not, it depends. But yes: more than three or four words, and descriptive. Anything else for describing a long tail? You want to say something? That's very interesting, and very SEO focused. But yes, the traffic that you get might not come from that exact combination of keywords. It's a completely different kind of traffic, and the volume is big if you tap into something like that. Now, how do you search for these keywords? How would you do keyword research for long tails? Yes and no, not really; good start. Any other idea? Go. That's good; I'm not a big fan of that. It's good because it helps you understand the questions around the topic, but are you really going to use it and find something that creates traffic? Good suggestion, though. Yeah, that's better, that's much more practical. It always worked: Google Suggest does bring long tails. You can also look at the mobile search queries in the Search Console and start comparing the mobile search queries on your site with desktop search, and there you might find something that is long tail. You might also look at the different commands of the Google Assistant to find inspiration, because the Google Assistant now covers a lot of different intents. There are the third-party applications, like the Google Actions that we will see, but the Google Assistant itself covers a lot of different intents. So an intent from the Google Assistant might be a base for your long-tail keyword, right? Google Search Console, Google Assistant commands, Google Suggest: these are ways of looking for long-tail keywords. And the analytics, of course, absolutely. That's always helpful.
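The Search Console mining just described, looking for queries that are descriptive and more than three or four words long, can be sketched as a simple filter over an exported query list. The function name, threshold, and sample queries are made up for the illustration; in practice you would run this over your real export and also compare mobile against desktop.

```python
def long_tail_candidates(queries, min_words=4):
    """Keep queries that look long-tail: descriptive, four words or more."""
    return [q for q in queries if len(q.split()) >= min_words]

# Hypothetical queries from a Search Console export.
queries = [
    "seo",
    "structured data",
    "how to add structured data to wordpress",
    "best sushi restaurants in belgrade",
]
print(long_tail_candidates(queries))
```

From there you would sort the survivors by impressions or clicks and check which already have a featured snippet opportunity.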
At the moment we don't really have a way in the analytics to see what's coming from voice, right? That's a bit of a limitation. They say that something will come, but it's not there yet. What I also use from the Google Search Console: I filter the results by rich results for AMP. I don't know if you remember, in the Google Search Console there is a filter on the queries that you can use for getting all rich results, from AMP or not, and these rich results are sometimes interesting for finding your long-tail keywords. Second point: add structured data, and do it using linked data. We will see in a moment what this means and exactly how it can be done. Write articles. That's basic, but sometimes we forget it. When we start talking with Alexa, a lot of the usage that we make of these personal assistants is really at the beginning, at the top of the funnel. You can also make a purchase now with Alexa or with the Google Assistant, but people rarely do. The volume of the searches that people make through a personal digital assistant such as Alexa or Cortana is at the beginning of the story, at the top of the funnel. So I might start by asking, what are the restaurants in Belgrade, or where can I eat sushi in Belgrade, which is already very close to the final booking intent; but a lot of the chatbot or personal digital assistant interactions are at the beginning of the funnel. This means that you still have to create great content to bring the user to the next phase, because if you limit yourself to just creating, for instance, an answer for your chatbot, and the user cannot discover anything more, then the conversation is left at a point where it doesn't really bring the conversion. I will give you some examples later on that will maybe make this clear. But write articles, not just simple answers, for the long-tail search queries that you have found. Look at elocution.
That's one of the guidelines from Google for creating voice-ready content, or machine-ready content. Elocution means that when you read it aloud, it sounds nice, you know? And sometimes when we write, we don't read what we write aloud. We read it, but not aloud. When you start reading something aloud, you realize that maybe it's too long, or maybe it's too boring, or that you didn't need this phrase. We become more and more conversational when things are spoken aloud rather than read. I will give you plenty of examples where I made the mistake of poor elocution, so you will see what I mean. Embrace AMP, that's my suggestion. I do agree with people that still have concerns; it is a cost to embrace AMP, but it does bring value to the user experience. There is an interesting booth these days, I think. It's the first time at a WordCamp that I see Google, and that's a strong sign for the community. I don't know if it's good or bad, but go to the booth and try to learn a little bit more about how to overcome the JavaScript issues, because they are doing tremendous work in creating the new plugin that will allow us to create native AMP experiences sometime in the future. What is semantics? Semantics is about the meaning of words. So if I say, I love Belgrade, or if I do it this way, I am changing the structure, I'm changing the syntax, I'm changing the symbol, but the meaning stays the same. Semantics is about conveying a meaning with the words. Now, the way that human language works is that information is stored in symbols; it can be art, it can be a word. We share these symbols in our minds, and then we have grammatical rules that help us understand each other. That's a little bit of how human language works. But how does a computer share meaning? How can I mimic this process and bring information in a meaningful way to a machine? And that's where the semantic web comes in.
So I have some information about Gennaro, who is here with me at the conference. Gennaro has his own properties: his name, his surname, gender, fiancée, and so on, which are on website A. Then there is a property that connects Gennaro with Andrea, or Andy, which is "knows". So there is a property that connects one entity with another entity, and the information about each entity is on a different website, or can be on a different website; it can also be on the same website. A machine will look at the data on a first website and then follow a link to another website to get more information about another entity, in this case another person. And then maybe Andrea was born in Rome, in Italy, and there is maybe another website that a machine can consult to understand something about the place where Andrea was born. So each piece of information is linked through properties. And computers use unique identifiers. These are URLs; more precisely, these are URIs, Uniform Resource Identifiers. This is mine, so that's my entity, the entity that represents me. And it's a linked data persistent URI, meaning that when a crawler or a machine gets there, it gets an RDF representation of the information that describes me, or a JSON-LD that represents the information that describes me. If you go there with a browser, it will show you a page. Quite boring, but it's just a page with data inside. The most important thing is that I have my unique URI that describes me, and a machine can always go there and find more data about me and links to other data. I also have another unique URI on Wikidata. Everyone knows Wikidata? Everyone does? Good, OK. So Wikidata has another unique URI that talks about me, and this is a little bit of an overview of Wikidata for the entity that represents me.
From this graph we can see that the same person on Wikidata that has that URI is also described on another page, on another URI, which is the one that I showed you before. And then there is other information: I'm a human, yes, hopefully. So in Wikidata, using an exact match property, I am saying: this entity is equivalent to the entity over there. A machine, when it gets into a graph like Wikidata, or my own website, and finds these URIs, can get a lot more information about me. And even if the page is only talking about me speaking at the WordCamp today with you, it's also bringing this cloud of information about me being the CEO of a company called WordLift, created in Italy. All this information is made available to the machines so that they can process it, and then you can go and ask, you know, who is Andrea Volpini? And they would know, because they have enough data acquired from my own websites, from the linked data that I published on my own website, and from publicly available resources like Wikidata. So structured data, as we see, is really linked data. The foundation of structured data is this area of the semantic web which is called linked data. Now let's ask Google. OK, Google, what is schema.org? Right: web pages with structured data. This was coming from my website in the beginning, because I was able to create a featured snippet about schema; I had tagged, you know, the page with schema. But you might have noticed that the elocution is not really the best, because it's kind of long, you know. Lingua franca, yeah, it's good, but it sounds a little bit weird. So it's no longer there, but it was there. Schema.org, I think everyone in the room is familiar with it, right? It's a linked data vocabulary.
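The pattern just described, an entity on your own site declared equivalent to the entity on Wikidata, is what schema.org JSON-LD expresses with `sameAs`. Here is a minimal sketch, built as a Python dict so it can be serialized; the `@id` URL and the Wikidata identifier are placeholders, not the real ones.

```python
import json

# schema.org Person markup linking the entity on this site to equivalent
# entities elsewhere via sameAs: the "linked" step of linked data.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://example.org/entity/andrea-volpini",  # placeholder entity URI
    "name": "Andrea Volpini",
    "jobTitle": "CEO",
    "worksFor": {"@type": "Organization", "name": "WordLift"},
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",  # placeholder Wikidata URI
    ],
}
print(json.dumps(person, indent=2))
```

Embedded in a page as a `<script type="application/ld+json">` block, this tells a crawler both who the page is about and where to find more data about the same entity.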
And this linked data vocabulary can be combined with other vocabularies, because if I need to describe, for instance, I don't know, a medicine, schema might not be the best, because of course a medicine has a lot of properties that are very specific to the knowledge domain of health. So I can use schema as a kind of basic representation of my content, but then, with linked data, I can also attach way more properties than schema allows me to. And apparently search engines are also happy when we do that, when we create more resources about the things we talk about, because then they can use them for disambiguating queries and providing answers. So it's a community-driven effort; everyone can jump in on the GitHub where the vocabulary is managed. There is a lot of work behind the extensions. I've been somehow involved in contributing to the extension for the travel industry, which is work done by the Semantic Technology Institute of Innsbruck; I collaborated with them in the past, and that kind of created a new area of schema for the travel industry. Then of course you have all the e-commerce initiatives; there is a lot of activity behind schema, so I really recommend following the vocabulary and also looking at other linked data vocabularies that are interoperable with schema and that you can use for creating better metadata. Anyone familiar with five-star linked data? No? OK, go. Go for it. No, that's a little bit more on the linked data side. So you know this guy: Tim Berners-Lee invented the web, he also invented the semantic web, and he designed a five-star method for classifying data. When you create a PDF, for instance, and you put it online with a license attached that says everyone can use it, then you get one star, because it's open data, right? But the format is closed; it's a PDF, so you need something from Adobe to read it. But it's open data, so: first star.
Second star: the data starts to be structured. In a PDF we have text which is unstructured, so a machine doesn't really understand what's inside a PDF; that's what we call unstructured data, and it's in a proprietary format, so: one star. Two stars: the data is structured, there is a spreadsheet, and there is a license that says to the machine, yes, come and read, right? Two stars, because the format of an Excel file is proprietary; you need a Microsoft product to open it. Three stars: the format is open, so you have structured, licensed data in a CSV format, way better for a machine. You know, a lot of the knowledge that Google has built around the knowledge panels is derived from the open data that was available in Wikipedia, but also in Google Tables, which is kind of a forgotten project that Google initiated to start collecting structured data. So a CSV is good because it's open and it's structured and it's licensed, but it's not descriptive, because if I have a column that describes, for instance, the number of seats in a car, a computer might not understand what "number of seats" means, you know, for a car. A human can make the jump and bridge the semantic gap and understand, yeah, okay, this has got to be the number of seats inside the car, but a computer cannot make that jump so easily. Four stars: in RDF, a data model created by the W3C (often serialized as XML), every piece of information in this table is described using a vocabulary, such as schema.org. So a property like the seats in a car has a specific attribute that is described in a linked data vocabulary like schema.org, and a computer can understand specifically what this number is, because it's described. Five stars is when the data gets linked with other data. So in the example of my URI, that data was linked with the data on Wikidata.
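The "number of seats" example can be sketched like this: the same value, first as a bare CSV-style cell that carries no meaning, then expressed with the schema.org vocabulary, where `seatingCapacity` is a defined property of `Car`, so the property itself tells the machine what the number means (the car model and value here are made up for illustration):

```python
import json

# Three-star data: open, structured, licensed, but the column name "seats"
# is just a string, and a machine cannot know what it refers to.
csv_row = {"model": "Panda", "seats": 4}

# Four-star data: the same fact, described with the schema.org vocabulary.
# "seatingCapacity" is defined in the vocabulary, so any consumer can look
# up exactly what this property means for a Car.
car = {
    "@context": "https://schema.org",
    "@type": "Car",
    "name": csv_row["model"],
    "seatingCapacity": csv_row["seats"],
}

print(json.dumps(car, indent=2))
```

Adding a `sameAs` link to an external entity (as in the earlier Person example) would then lift this record to five stars.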
That's very powerful for a computer, because it can jump onto my page, and then it finds a reference to the equivalent entity on Wikidata, and then it gets more information, and then it understands: ah, that's the guy. Okay, I know a lot about him. So when the data gets linked, then we have the five stars: linked data. Now, the reason I'm talking about this at a WordPress conference is that it does have an SEO impact, because the easier you make it for the crawler to index your content, by creating entry points and by describing the data, the easier it is for search engines to feel confident when delivering the results. So linked data, I mean, this whole topic of open data was for many years a discussion only inside the academic world first, and then the public administration, because of the open data movement. But right now it's becoming relevant also for the SEO industry, and that's what we do, basically. Okay, Google, what is personal assistant search optimization? And it refers to the use of SEO techniques aimed at positioning content as the source of the answers given by personal assistants, such as Siri and Google Assistant, to their users. So that's according to WordLift, but it's actually a definition that came from another expert in the industry, and it's kind of the area where SEO meets voice search and chatbots; it's called personal assistant search optimization, and it's basically what we are talking about right now. We're done with the first block, we're moving into the second block. Take a breath. Are you still all with me? Most of you, at least. Now we will dive into the lessons learned in creating conversational experiences and the mistakes that you can make. Did anyone in the room already create a chatbot? All right, what is the chatbot doing? Okay, so what is the type of question that it can answer? Okay, okay, cool. And where was the chatbot developed? Okay, okay. Telegram, okay, okay. What about, what was the other chatbot?
I saw another chatbot, okay. Wow, that's very advanced. Oh, there's no relaxation. That's good. So is it live, the skill, or? No. No? What was the other chatbot? Go for it. I don't know if we have a microphone; maybe it's easier for everyone in the room. I can bring it myself. I can do some exercise. You can do that, all right. I always need exercise, but next time it's okay. You're hopping up as soon as I'm done, you know. I will start going out looking for BS, so that's gonna be my exercise for the evening. You'll see me around. We developed a chatbot for qualifying leads for the automotive industry. Oh, wow. So basically it asks the user if he's interested in a car, if he already has a car. What did you use for creating the chatbot? For the first draft, the Messenger API. And then you just created the logic yourself? Yes, okay. But we are developing a new chatbot based on Node.js. Okay, okay. Cool, cool. All right. There was another chatbot over here, yes. We built quite a simple sort of decision-tree Facebook chatbot that led users through a kind of recommendation engine for Lego products, which is quite cool. It was very basic; there's no natural language processing. Applied to what area? E-commerce. E-commerce. They could click through to a product or buy. Was it good? What feedback came out of these experiments in terms of usage? It's still hard; somehow it's hard. It depends on the intent. Depends on the intent. I mean, if you wanna relax, switch it off. Okay. We've integrated with Slack, and we have a chatbot, the deployer, that helps us basically get code reviews each time we deploy. Oh, wow, that's good. All right, so let's see some of the mistakes that you can make. But before that, I'm gonna kind of introduce you, again, to an SEO tactic that we're gonna see in detail. So I'm gonna ask now: okay, Google, tell me something about Andrea Volpini. Wanna give it a try? Yes.
The CEO of WordLift is a visionary entrepreneur, now focusing on the semantic web and artificial intelligence, co-founder of InsideOut10, and director of Insideout Today, an Egyptian award-winning creative digital agency focusing on the African continent. Andrea has 20 years of world-class experience in online strategies and web publishing. Would you like to hear another fact? Please, no. Okay, bye-bye. So, we have to work a little bit on the content, but definitely, you see that it's very long, it's, come on, stop it. Yes, okay. Why did the voice change? Because, if you heard, I asked the Google Assistant to tell me something about Andrea Volpini, and I specifically pronounced my name not in Italian, but as an English person would. So, "Andrea Volpini". But the interesting part is that the Google Assistant responded by asking the user: would you like me to ask Sir Jason Link? Now, Sir Jason Link is the Google Action that I have created for intercepting specific content from Google Assistant users without them knowing me. So the reason the voice changed is that Sir Jason Link is not the Google Assistant; it's an application within the Google Assistant. And this mechanism, which in Alexa is still, I would say, not as developed as in the Google Assistant environment, is called implicit discovery. That means that Google is searching for the intents that your AI is covering, and it is proposing your agent, your chatbot, your assistant, to users that don't know you. And so, in a way, in our AI-first world, that's the new SEO, because I'm using my voice to speak with a Google Home or an Android device or whatever, and I am asking a long-tail query such as "tell me something about...", which is very generic. I mean, it's not gonna convert, I'm not gonna sell subscriptions with this, but it was an interesting experiment, and Google, much like it does on a SERP, is recommending my AI to answer this question for whatever user.
And I was able to get 670 conversations in a day just using Google. No one knew my assistant app; I mean, it was just a test. But an interesting amount of traffic arrived on the AI that I had created, because Google was recommending it for specific intents. So then I started to think: how can I bring traffic back to my site? I have 670 users there, how do I drive them back? You will see, I found a way, and in this experiment I was able to resuscitate pages on my site that had zero traffic and bring them some level of traffic, which was for me a very impressive result. So, when you start creating a chatbot or a conversational experience — we didn't use NLP, or yes, we have a decision tree... Really, this work started as an experiment. I was challenged by a guy called Scott Abel, who runs an agency called The Content Wrangler in Silicon Valley, and he said: Andrea, you have developed an amazing tool, but show me what you can do. So I said, come on. I mean, look at the tool. But then he challenged me, and he said: okay, what can you do with the semantic technology that you have to engage the users, to engage the reader more? And so I started creating this experiment just to respond to a challenge that I received from a guy in the States. And so I really wanted to make my website talk somehow, and then I started thinking about examples from the past, and Pygmalion is one example. We also have, in Italy, Pinocchio, which probably some of you read, right? So, I mean, you have this inanimate thing, like a website, like your website, and you wanna make it talk, and you really have to put in all the love and the good spirit and the intention that you have to make it talk, because it's really an arduous process. So that's one thing: you have to decide that you really wanna make something inanimate, like a website, talk. But then, of course, you do need knowledge; you do need a graph.
And I don't know how many of you are familiar with the Königsberg bridge problem? No? Königsberg, yeah, what is it? Yeah: finding a path that crosses all seven bridges of the city of Königsberg without crossing the same bridge twice. That was the mayor's goal. The city is in today's Russia (it was later part of Germany), but back in 1735 it was in Prussia, and a Swiss mathematician called Euler decided that he wanted to help the mayor of the city of Königsberg answer the question. And so he created a mathematical theory, which we now call graph theory, to demonstrate that it was not possible. Wow. And so you need to organize content in a graph in order to make information accessible, and a conversation can move forward when you have more information, more data. And then ELIZA, by Joseph Weizenbaum, is the first chatbot ever, so if you are starting to develop a chatbot, start playing with ELIZA first, because a lot of the conversation dynamics used in today's chatbot frameworks are still based on what Weizenbaum created with ELIZA. And the problem of ELIZA is that it didn't have the knowledge, it didn't have the graph, so it had to take the input of the user and repurpose it. These are, a little bit, the three steps in the creation of a conversational experience. You start with an inspiration, and you look at the intents that you wanna focus on; much like we described with long-tail keywords, you wanna look at the intent. So what is the intent that your application is gonna cover? That's very important. You can cover one, two, three intents, but don't go too broad, because otherwise it's not gonna work. And then you have the design, the validation, and the creation.
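Euler's argument can be sketched in a few lines: an Eulerian path (a walk crossing every bridge exactly once) exists in a connected graph only if zero or two land masses touch an odd number of bridges. In Königsberg, all four land masses have odd degree, so no such walk exists:

```python
from collections import Counter

def has_eulerian_path(edges):
    """Return True if a connected multigraph given as (node, node) edge
    pairs admits an Eulerian path. Euler's condition: the number of
    odd-degree vertices must be 0 or 2 (connectivity is assumed here)."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven bridges of Königsberg between the four land masses A, B, C, D.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(has_eulerian_path(bridges))  # False: all four land masses have odd degree
```

The same degree-counting idea is the seed of the graph theory that knowledge graphs, and the conversational experiences built on them, rely on.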
In my examples I use Dialogflow, which is a bot framework that was acquired by Google, so it works very well with the Google Assistant, and it allows you to create Google Actions quite easily. And I use WordPress as a back end, and then I use, of course, WordLift. And I started to look at the intents that I wanted to cover with my chatbot, and I thought: when people are in front of a website, what do they ask? What is this website about? What are the main topics? Who is the publisher? So I started to think about the questions people would want to ask a website. And so that's basically Sir Jason Link in a simulator that allows you to talk to my website: "Greetings, my name is Jason Link. You can ask me facts about the upcoming events, information about the publisher of this website, or I can help you know better what this website is about." Doing very well. So you can tap into Sir Jason Link using a window on our website, Google Home devices, and direct links that we are creating from our website to Sir Jason Link. This is, a little bit, the new Google Analytics: the data that you get out of a conversation, which allows you to see the number of sessions, the number of queries per day, how long the application takes to respond. If you wanna take the analytics further, and you are creating your own Action or your own chatbot, I do recommend looking at Chatbase or Botanalytics, frameworks that allow you to measure more properly how the conversation moves forward, because there is a lot you can learn from this data. So this is, a little bit, the session flow from Sir Jason Link. You can see that a lot of the people focus their attention on the topics, which is like: what is structured data? You can ask Sir Jason Link. Or: what is semantic SEO?
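The Dialogflow-plus-WordPress setup described above can be sketched as a tiny webhook fulfillment: Dialogflow matches the user's utterance to an intent and posts a request to your webhook, which looks the answer up in content coming from the site and returns it as `fulfillmentText`. This is a minimal illustration with hypothetical intent names and canned answers, not the actual Sir Jason Link code (in the real setup the answers would be pulled from the WordPress knowledge base):

```python
# Canned answers standing in for content exported from the WordPress site.
ANSWERS = {
    "website.about": "This website is about semantic SEO and structured data.",
    "publisher.info": "The site is published by WordLift.",
}

def fulfill(webhook_request: dict) -> dict:
    """Handle a Dialogflow (v2) webhook request: read the matched intent's
    display name and reply with the corresponding text."""
    intent = webhook_request["queryResult"]["intent"]["displayName"]
    text = ANSWERS.get(intent, "Sorry, I don't know that yet.")
    # Dialogflow v2 expects the spoken/displayed reply under "fulfillmentText".
    return {"fulfillmentText": text}

reply = fulfill({"queryResult": {"intent": {"displayName": "website.about"}}})
print(reply["fulfillmentText"])
```

In production this function would sit behind an HTTPS endpoint registered as the agent's fulfillment URL; the framework handles the speech recognition and intent matching for you.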
So a lot of the flow goes there, and then you can see and debug: people are dropping here, on "next events", because it depends on the events that we mention on the site. So you can analyze the pathways of the user using a session flow, which is available in the various tools that you use either for creating the chatbot (in this case this is from Dialogflow) or for the analytics of a conversational experience. And these are the areas where you want to measure the way that the chatbot is responding. It's like a framework: is context being taken into account? Does the chatbot understand that I'm in Belgrade? Does the chatbot remember that I already introduced myself? Because you don't wanna hear, for instance, the introduction again. You know: is the response relevant? Are errors managed properly? So these are different areas that you can use for analyzing the performance of your chatbot. And think about chatbots as websites, and think about WordPress as the knowledge base that you wanna use for creating something like Jason Link. Keep the dialog short and simple. Rule number one; I never do that. I always have these long phrases, so I added a new functionality that cuts the reply after the first sentence, or after the second sentence. And then I added another intent, which is: would you like to know something more? That's good, because maybe the user does want to know something more, maybe not. Be brief; personalize to the user. If the user has logged in, you know a lot of information, either on Alexa or on the Google Assistant. Test it with real users before going out to the crowd. I didn't do that; I went out in the wild. I probably disappointed hundreds and hundreds of people in the first days, until I saw the logs and said: I'm not intercepting this intent, let's remove it. It's like the far west nowadays; it's like the web in the 1990s. There are not many people around yet, so you can still do interesting things.
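The "cut the reply after the first sentence and offer a follow-up" fix described above can be sketched like this (the function name and the follow-up phrasing are illustrative, not the actual implementation):

```python
import re

def shorten_reply(text, follow_up="Would you like to know something more?"):
    """Keep only the first sentence of a long answer and append a
    follow-up question, so the assistant stays brief by default."""
    # Split on whitespace that follows sentence-ending punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return f"{sentences[0]} {follow_up}"

long_answer = ("Structured data is a standardized format for providing "
               "information about a page. It helps search engines. "
               "It also powers voice assistants.")
print(shorten_reply(long_answer))
```

The follow-up maps naturally to a yes/no intent in the bot framework: on "yes", the next sentence of the stored answer is read; on "no", the conversation moves on.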
And then, of course, focus on making your chatbot discoverable. You can advertise your chatbot on Bing with paid advertising. You can use the Google Actions directory for presenting your chatbot. But again: focus on the quality of the user interaction, not on the AI. That's the other message for you. So, just to wrap up. We've seen, and this is again an example, that when I ask the Google Assistant something about my company, my Action pops up. That's very good for me, because then I can control the user experience. So if someone is interested in WordLift, then Sir Jason Link is called, and then I can manage the conversation; maybe I can explain what the plugin does, how you can optimize the SEO, and so on and so forth. So remember: this is a discovery technique that is very powerful, and it's gonna be like the new SEO. What you can do: first, structured data and AMP, and you will get the featured snippets, you will get the first voice search responses. Second step: claim your directory page on the Google Assistant directory. It's very important. Some of you may have received a message from the Google Search Console that tells you to claim your Action on the Google Assistant directory. This only happens if you are tagging news, podcasts, or recipes. So if you're using structured data for news articles, recipes, or podcasts, you might have the chance to claim your directory page, and Google will do the rest, without the need of even creating the Action. Third, the best option: make, possibly, fewer of the mistakes I made, and create your own custom Action for the Google Assistant. So, that's a structured data and AMP example: I asked the Google Assistant "what is semantic SEO?" and it responds with a featured snippet that comes from my website. You just have to have structured data, and using AMP also helps. Second: claim the directory page. This is a directory page I created for a client that is using news articles, and so Google is creating a nice presence in the Google Actions directory for them.
And then, of course, third step: use linked data, use natural language processing, and then create your own custom Action. I think we might have some minutes for a demo, or not? No? Five minutes? How many minutes? Yes. All right. I mean, so this is basically how Dialogflow looks. You have the different intents, which are the questions that people can ask, and then there is the logic that creates the conversation, and this is the publishing platform, the Google Actions publishing platform. So this is my buddy, Sir Jason Link; he is with me, you just don't see him. That's around 2,000 monthly conversations. I mean, just to give you numbers: our website is still fairly new, and we reach probably around 6,000 to 8,000 users with the website, so, you know, 2,000 conversations for me is a huge number. Let's go into the simulator; that's an area where you can actually test how things are working. All right, getting the test version of Sir Jason Link. "Greetings, I'm WordLift's companion, and my name is Jason Link. You can ask me facts about the upcoming events, information about my publisher, or I can help you know better what the main topics of the WordLift.io website are." So now, let's jump a little bit to the website. This is our own website, in production; I hope I don't make any mess. First thing, I have to remember how to get in. It's just the caption, that's easy. Ah, yeah, that's right, that's right. I can do that. I guess, I don't see it. I should remember it. So, this is an article about the WordCamp, and you see WordLift is running here, so the NLP is extracting the concepts from this article, and I'm talking here about different topics. One of the topics is Gutenberg, which is probably one of the things that we wanna hear about at this WordCamp. So, as an editor, I'm just highlighting Gutenberg as a topic that represents this article.
WordLift is creating an entity for Gutenberg that describes a little bit what Gutenberg is, and it's creating this entity using data that I have on my site or that is coming from Wikipedia, right? So it's creating a new entity page. Now, this new entity page has its own unique URI in the linked data world, so it's published on data.wordlift.io, and, while it loads, it's also made accessible through the chatbot, so I can go here and ask. "The new editor for WordPress. What else would you like to know?" So it's pulling the data from the entity that WordLift has created about Gutenberg, and it's also providing a link back to the page. This is the way I was able to generate traffic back to the site: some devices have a screen that I can use for creating rich cards that have a link back to my site, and you're gonna see this traffic as coming from Google, as a referral, because it's really coming from the Google Assistant platform. And we have a 500 error here, which is not good, but... So this is, a little bit, the way in which we can move from the website content into the chatbot. I can make more complicated queries on the structured data that I have behind the entity, such as: when is the WordCamp taking place? This is one of the actions that Sir Jason Link supports: he will go back and run the query on the linked data that WordLift creates, specifically on the JSON-LD of this page, and then he will give the answer back to Dialogflow, which will send it back to the chatbot. And then I can create more complicated conversational experiences by using queries on the knowledge base that I have created. So if I ask: what are the next events? That requires a level of computation, so I'm gonna run a query on the linked data and get the results, and then Sir Jason Link, from the website, is going to create cards about the upcoming events that we're going to attend. So, long story short: focus on great content, not on the AI. So create pages that people wanna read.
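The "what are the next events?" query can be sketched as a filter over the `schema:Event` items as they would appear in the site's JSON-LD. This is an illustration of the logic, not the actual linked-data query Sir Jason Link runs; the event names and dates are made up:

```python
from datetime import date

def next_events(events, today):
    """Return the schema:Event items whose startDate is today or later,
    sorted chronologically, ready to be rendered as assistant cards."""
    upcoming = [e for e in events if date.fromisoformat(e["startDate"]) >= today]
    return sorted(upcoming, key=lambda e: e["startDate"])

# Illustrative events, shaped like entries in the site's JSON-LD.
events = [
    {"@type": "Event", "name": "WordCamp", "startDate": "2018-09-22"},
    {"@type": "Event", "name": "Past Meetup", "startDate": "2018-01-10"},
]

print([e["name"] for e in next_events(events, date(2018, 6, 1))])  # ['WordCamp']
```

In the real setup the same filtering would be expressed as a query against the published linked data (e.g. a SPARQL `FILTER` on the start date), with the results handed back to Dialogflow for the card response.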
Yep, okay. We're done, so, I mean, if there are questions, I'm happy to take them, of course. Well, the data that Sir Jason Link uses is yours, because it's created from your website. You can create a Google Action not necessarily with Dialogflow; you can even write the code yourself and just tap into the Google Actions platform. Dialogflow is really a framework that allows you to train the system and design the intents in an easier way, let's say. But any other questions? Yes. Giving all the data to Google, that's my problem, versus trying to do closed versions of bots. Yep. I think it all boils down to one specific point, which is data rights. Right now Google is hungry for data, and it's not paying for data, right? So we are willing to give data away for free, because we want to earn visibility on its platforms. But we are getting to the point where I'm going to put a license on my triples, and so, if Google comes and consumes my data, then Google will have to pay me back. And that's, again, kind of a blockchain logic, because if we give all the data away for free, then I don't know if we end up in a better society or a worse one; probably, I think, we're gonna end up in favelas. So I believe that everyone should retain the value of their own data. Right now, I believe that open data is the framework, because open doesn't mean it has to be free. Open data means that you can license it, and if Google can use it, in my case it can use it, I'm fine, but I have, you know, control over my data. I can decide anytime to put a license on it: sorry, you're using it, I sue you. So yes, data ownership is important. Good point. Yep. I'm using Google Assistant, but I have also prepared a slide for you, because they also asked about Alexa. So these are two technologies and plugins that I used and would recommend for people that want to work on Alexa, and yes, this is my Alexa experiment: "Ask WordLift." So the skill is called WordLift.
Ask WordLift to read the latest articles. "First: ... posted by Scott Abel. Second: The outsider: integrating non-family executives in the family business." Read the first one. "The webinar will be on September 14th, posted by Scott Abel, with Andrea Volpini as a guest. You will learn why making your website machine-friendly is key to growing your organic traffic, and how to prepare content that works well with personal assistants for voice search." So, I'm a little bit behind on Alexa, because I focused more on the Google Assistant and the Google Actions environment. The reality is that they are very similar, and again, if you have structured data, you can create your own experience on multiple services. Tools like Dialogflow allow you to export a skill to Alexa. So: connect the website to a chatbot framework, and from there you can integrate with Alexa, Google Actions, and other channels, like Skype, for instance. That depends on the device, yes, it depends. Yeah, it is. Then, I think this is the Pixel 2; I think if you let it do it, it will do it. Yes, yeah. I don't know what you're using. Yeah, well, an entity is a specific thing. It's something that exists in the Knowledge Graph. Something like "Chinese cuisine" is an entity. Something like "a cheap laptop"? No, because "cheap" is an adjective applied to a thing. So the entity would be "laptop": I can create an article that has the entity laptop and then reference the concept of being cheap, but "cheap laptop" is not an entity, so I would not use that. But Chinese cuisine, rock and roll: go for it, it's good. So everywhere you see a Knowledge Graph panel, or there is an article on Wikipedia, you can think of it as an entity, really. That's a little bit of a summary; with that, the questions are done. That's another one that you can use on Google Actions: Doctor Search Marketing. I created it; it's like a trivia quiz for SEO experts. Go for it. If you get four out of five, I'll give you a t-shirt tomorrow. All right, thank you.