Hello to all of you. I'm Julia, Secretary General of NEMO, the Network of European Museum Organisations. I'm very happy to welcome you to this first webinar, organised in cooperation with our associate member Nemeck and the University of Florence. Today we will hear from them about new media for cultural heritage, and I'm very happy because there was a lot of interest in this seminar: it was completely booked after a short period, which means there is a big demand, so I think this is something we are going to continue. The basic idea behind offering these webinars to museum professionals all over Europe is to empower them and to support their professional development, and we want to apply a European scope to it, because that is what we do at NEMO: we believe in European values and we want museums to cooperate across borders and learn from each other. Within this year we will organise two other webinars, one on emotions in museums and exhibitions and one on museums as intercultural spaces, which relates to the migrant situation in Europe. But now I will leave the floor to my dear colleagues Alberto del Bimbo, Andrea Ferracani and Daniele Pezzatini to enlighten us about the new media and techniques for cultural heritage that they have developed. Thank you very much, and I hope you enjoy the webinar.

Good afternoon, do you hear me? This is Alberto del Bimbo speaking from the University of Florence; welcome to everybody. I will give a 15-minute presentation on new digital tools and techniques for cultural heritage and museums, and it will be followed by two other 15-minute presentations by Daniele Pezzatini and Andrea Ferracani. The goal of this presentation is to give a short introduction, with some examples, to a few technologies that are available today and can be exploited in museums to improve the quality of presentations and the offer to visitors. So let's start with these introductory issues and talk about a few technology trends; I will follow these trends with two examples we are currently working on in Florence that hopefully will give you some idea of what can be achieved today.

First of all, a few statements. The blue sentence has been taken from the literature and it is a broad definition of what museums should do: expand our knowledge, stimulate our senses, expose us to new experiences and engage the participation of visitors, sharing history and the natural world. But we believe that information and communication technologies offer a very important opportunity today: the opportunity to turn visitors from passive observers into engaged participants. That means the visitor can give a contribution within the museum and can receive personalised information from the museum and, more in general, from cultural heritage sites. One very important point of this age is that ICT today can be human-centric instead of object-centric, as it has been in the past; in other words, it can provide support for the visitor directly, and not only complement information about the objects on display. Another very important point is that today we can really start building a user-centric, personalised dialogue between the museum and the visitors, in other words addressing each individual visitor with his or her individual interests and habits. So let's see why and how information and communication technology can support this new trend today.
Here I report recent statistics on mobile and social: in other words, how far mobile devices have spread and how many people follow social platforms today. We currently have about seven and a half billion people in the world, and about two billion of them use mobile systems and social platforms on those mobile systems. That is very important: about one third of the world population is using mobile and social platforms on mobiles, sharing all the data they capture with their mobile devices. But if this is the picture of today's situation, we are going towards a very different context, which is the sensor-based context. A Gartner estimate, given in late 2015, says that in 2020 we will have about 21 billion sensors connected to the Internet, providing data, and this data can be attached to every object: we can estimate object states such as temperature or condition, or also the proximity of people to objects, and so on. This is the true revolution: moving to sensors everywhere, eventually very sophisticated sensors like cameras or voice recorders.

Another very important trend is the miniaturisation of technology. In this slide I report an announcement made in 2016 by the Heptagon company in Singapore, which announced that in a two-square-millimetre area they were able to put a camera and sensor electronics with some important features: a lot of things in a very small sensor, only 350 microns thin. That means we will be able to wear these sensors like the buttons of a jacket.

Another important point is the capability of user profiling. Already today we know that we are profiled: we are profiled by business platforms about our interests, or let's say our macro interests, and people selling objects rely greatly on these platforms to suggest new objects to buy or share. We believe that when we talk about cultural sites, and museums in particular, these kinds of macro interests are of little use. When a person visits a museum, of course he or she is attracted by objects that reflect those macro interests, but much more by artworks and objects based on temporary interests: the interests at that specific moment of the visit, or in that specific emotional condition during the visit. So we believe that the analysis obtained from the social data and big data we leave on the Internet is of little use for providing a personalised experience, and given the trend towards sensors in the very near future, we believe that the temporary interests that arise during the visit can be discovered and answered thanks to sensors more than thanks to social networks and social data.

We talked about the capability, already available today, of having sophisticated sensors, sensors with intelligence. Machine learning and artificial intelligence have made enormous progress in the last few years, let's say the last four years, and in particular deep neural network technology has made fantastic progress in the recognition of visual data and audio data. These artificial networks in some way try to reproduce the connectivity of the human brain, in a completely different way of course, and they have been applied in a lot of fields; today, especially in the field of computer vision, there is huge progress in this area, which gives machines the capability to see and understand the content of the images they see.
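To make this concrete, here is a minimal sketch of how a modern deep network "sees" an image: a few lines of Python that classify a photo with an off-the-shelf convolutional network from torchvision. The file name is hypothetical, and in a museum setting the network would be re-trained on the museum's own artworks rather than kept on the generic ImageNet classes used here.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard preprocessing expected by ImageNet-trained networks.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

image = Image.open("gallery_photo.jpg").convert("RGB")  # hypothetical photo taken in a gallery
batch = preprocess(image).unsqueeze(0)                  # add a batch dimension
with torch.no_grad():
    scores = model(batch)
probabilities = torch.softmax(scores[0], dim=0)
top5 = torch.topk(probabilities, 5)
print(top5.indices.tolist(), top5.values.tolist())      # the five most likely classes
```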
This capability in computer vision can be usefully exploited in museums and cultural sites. So what I will show you in the next few slides are two examples of the current possibilities of this technology, in museum sites in Florence, in experiments we have done, and I want to suggest that these kinds of systems are really capable of understanding the contextual behaviour and situational conditions of visitors in a museum. That means they are capable of providing the visitor with the right information, given his or her situational interest, at the right time and in the right place.

Let's see the first of the two examples I have prepared for you. This is the MNEMOSYNE project, which was funded by the regional government of Tuscany and is now under test in a museum in Florence, the Bargello Museum. This is a museum of Renaissance artworks; in particular there are important statues by Donatello, Michelangelo and other Renaissance sculptors. There is a huge room, the so-called Donatello room, with about ten important masterpieces, and visitors usually go there and are astonished by the place, but it is hard for them to understand what an artwork means, to get a complete picture of why that artwork was made, and so on. So our idea was: why not give more information to the visitor, but only for those artworks that are really interesting for that visitor? To do this we had to measure the interest of each individual visitor in the artworks on display and analyse which artworks were most interesting for them. We put four cameras in the hall, and these cameras have the task of observing the visitors and their behaviour, where they stop and what they look at, and in the end the system tries to understand which of the artworks are really interesting for each visitor. At the end of the room there is an interactive table: when a visitor comes close to the table, the system tries to recognise that visitor among the very many visitors it has observed, to understand his or her interests, and to provide more in-depth information only about the artworks of maximum interest for that visitor.

Let's see a short video that shows what happens. This is the Bargello Museum; it was formerly a prison, but now it is a fantastic place, and I suppose some of you have already visited it. This is the huge room where we installed the cameras; you can see a camera on your right side. You have four cameras, depicted in this slide, that cover almost all the area of the hall. Around each artwork there is an area of interest, the ellipses that you see depicted in the images, so that when a person enters the room the system detects the person (you see the rectangles around the persons) and assigns the person to the artwork he or she is closest to. You can see that the colours of the rectangles match the colours of the ellipses of the artworks to which each person is associated. This is done by the system in real time, that means 15 frames per second, so the system has to detect each person and associate the person with the artworks. Of course we want to provide individual answers to the persons, so we have to distinguish each person from another; to this end we compute special descriptors that give an individual representation of each visitor.
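As a rough illustration of the logic just described, here is a minimal Python sketch of how detections could be turned into per-visitor interest scores. The artwork names, coordinates and radii are invented for the example, and the person detector itself is assumed to exist upstream; the point is only the assignment of each tracked visitor to the areas of interest and the accumulation of dwell time.

```python
import math
from collections import defaultdict

# Hypothetical areas of interest: centre (x, y) on the floor plan, radius in metres.
ARTWORK_AOIS = {
    "artwork_A": ((2.0, 3.5), 1.5),
    "artwork_B": ((6.0, 1.0), 1.5),
    "artwork_C": ((9.5, 4.0), 1.5),
}

# visitor_id -> artwork -> accumulated seconds spent inside its area of interest
dwell_time = defaultdict(lambda: defaultdict(float))

def update_interest(visitor_id, position, dt):
    """Add dt seconds to every artwork whose area of interest contains the visitor."""
    x, y = position
    for artwork, ((cx, cy), radius) in ARTWORK_AOIS.items():
        if math.hypot(x - cx, y - cy) <= radius:
            dwell_time[visitor_id][artwork] += dt

def top_interests(visitor_id, k=3):
    """Rank artworks by dwell time; this is roughly what an interactive table would show."""
    ranked = sorted(dwell_time[visitor_id].items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]
```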
When the visitor gets close to the table, as you see in this slide, the system takes about ten seconds and then presents what it has understood about the interests of that visitor. You see different shades of green: the greener the marker, the more interesting the artwork was for the visitor, and you can interact and look at more in-depth information, stories about the artworks and so on. This system is currently under test, but we have a detection rate of about 70 percent, which is relatively high, and a profiling accuracy that is very high, close to 90 percent; that means we are almost always able to identify the artworks that were really interesting for the visitor.

Another project we are doing, which shows the capability of computer vision to understand what is in front of the camera, is an intelligent audio guide. In other words, we want to provide a guide that is able to detect which artwork the visitor is looking at and to provide information about that object. Differently from current audio guides, the visitor does not have to type any number to tell the guide which artwork is in front of them: the guide understands which artwork is in front of the visitor just from the camera. We have a system, which you see on the left side of the slide, based on a simple tablet, so very light to wear, and we implemented this deep network architecture on it. The system is worn by the person, so that when the person enters the room the system's camera looks at the scene and distinguishes whether what it sees is an artwork or a person (you see some persons coming in the background, for example), and as you get closer to an artwork the system is able to detect which artwork it is, recognising it among all the artworks exposed in the museum. This is done automatically, in real time. What you see now is just the prototype system, worn by one of our students, who simulates a visit in the museum: he comes close to the artwork and the system recognises which artwork it is, with no mistakes. It is not a miracle, it is simply a system that has been trained to recognise these artworks and distinguish them from people, independently of the viewpoint and the illumination conditions, which vary a lot from side to side, independently of whether the artwork is partially occluded or not, and despite the differences between artworks. This system has almost no false detections or recognitions, and we plan to build a working system from this prototype very soon.

These are the two examples I promised; they were just to show you that intelligent sensors are really at hand today and can already be implemented in interesting solutions for giving visitors individual information, the information they really want to have. Thank you very much; I will pass the speaker role to the next speaker, Daniele Pezzatini. Thank you very much for your attention.
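To illustrate the recognition loop behind the intelligent audio guide described above, here is a minimal sketch. It assumes a hypothetical classify_frame() wrapper around a network trained on the museum's artworks and a hypothetical play_audio_description() helper; neither is the project's actual code, and the threshold value is only indicative.

```python
import cv2  # OpenCV, used here only to grab frames from the tablet camera

CONFIDENCE_THRESHOLD = 0.8   # reject uncertain predictions rather than play the wrong audio

def classify_frame(frame):
    """Hypothetical wrapper around a CNN trained on the museum's artworks.
    Returns (label, confidence); 'person' and 'background' act as rejection classes."""
    raise NotImplementedError

def play_audio_description(artwork_label):
    """Hypothetical helper that plays the audio file associated with an artwork."""
    raise NotImplementedError

def audio_guide_loop(camera_index=0):
    capture = cv2.VideoCapture(camera_index)
    last_played = None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        label, confidence = classify_frame(frame)
        if confidence >= CONFIDENCE_THRESHOLD and label not in ("person", "background"):
            if label != last_played:          # do not restart the audio on every frame
                play_audio_description(label)
                last_played = label
```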
Okay, so Professor del Bimbo gave you a very broad, state-of-the-art view of what we can do with technology now and a real insight into current research. What Andrea and I will try to do now is give you some examples of applications and technologies that you can adopt and that are already available on the market, so something you can already buy and use.

Some of the opportunities these technologies offer, the ones we will discuss today, are: to understand the context and the location of a visitor inside a museum, or more generally in an indoor space, and to provide information accordingly; to allow users to interact physically with digital content, and we will see what that can mean; and to explore in an immersive way virtual environments that can be recreated from real environments, or from environments that are no longer available.

We will start with an example and then explain how the application was developed. The first one, about the use of context and location inside the museum, was developed at the Rubens House museum in Antwerp. The museum developed an application and provides visitors with a tablet; as the visitor walks through the museum, through the indoor and outdoor spaces, the tablet can detect where the user is, exactly which room, and which artworks are next to him, and can provide information accordingly. Here you see that the application detected that this visitor was in the courtyard, so it showed information about the courtyard, and then, as the visitor proceeds with the visit, there is more in-depth information about the artwork the user is currently looking at. Here, for example, there is a seek-and-find game where the visitor has to find relatives of Rubens inside the artworks.

This application is built using iBeacon technology. Some of you may have already heard of it; this technology was introduced in 2013 by Apple, and the small colourful things you see on the right side of the slide are transmitters about five centimetres big. They use Bluetooth Low Energy, as it is called, to communicate wirelessly with smartphones and tablets that are nearby. Basically, they just continuously announce their presence, so a tablet passing near one of these transmitters can understand which transmitter is in proximity and therefore understand the location of the visitor. You can buy these small transmitters from several manufacturers; I give some references here, but you can really find many now, because they are widely diffused on the market.

So what can we do with this sensor? As we saw in the example, using these beacons we can understand the location of the user inside our museum and therefore provide contextualised information, which means triggering in the app the information about the artwork, the object or the room in which the user currently is. Something similar can be done with other technologies you might have seen, such as QR codes or near-field communication, but in those cases the user is always required to actively do something, either scan the code or put the phone next to a sensor; iBeacons work without the user triggering any input. They have low energy consumption, which means the battery inside lasts around two years, so once you position them in the environment they can stay there for a long time, and they are low-cost devices: each of them now costs around 20 euros, there are also some that cost around five euros, maybe with a shorter battery life, and the price of these devices is decreasing month by month.
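As a minimal sketch of how an app can react to a ranged beacon, the snippet below maps beacon identifiers to content and shows it when the beacon is close. The identifiers and the show_content() helper are hypothetical; the proximity categories ("immediate", "near", "far", "unknown") follow the convention used by iBeacon ranging APIs.

```python
# Hypothetical mapping from a beacon's (UUID, major, minor) identifiers
# to the room or artwork it marks in the museum app.
BEACON_CONTENT = {
    ("museum-uuid", 1, 1): "courtyard_introduction",
    ("museum-uuid", 1, 2): "portrait_room_overview",
    ("museum-uuid", 2, 1): "self_portrait_deep_dive",
}

already_shown = set()   # avoid re-triggering while the visitor lingers near the same beacon

def show_content(content_id):
    """Hypothetical UI helper that opens the corresponding page in the app."""
    print(f"Showing: {content_id}")

def on_beacon_ranged(uuid, major, minor, proximity):
    """Called by the app's Bluetooth layer every time a beacon is ranged."""
    key = (uuid, major, minor)
    if proximity in ("immediate", "near") and key in BEACON_CONTENT and key not in already_shown:
        show_content(BEACON_CONTENT[key])
        already_shown.add(key)
```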
So, in order for a museum to adopt this technology, what are the requirements? On the museum side, it is required to install these iBeacons in the environment, which means the museum should buy enough beacons to cover the rooms. The range of these transmitters is up to around 50 metres, but the more beacons we have, the more precisely we can detect where the user is; we can reach a precision of a couple of centimetres, for instance. Another thing the museum should provide is an app for mobile devices: basically there should be a software house or a developer who, together with the museum, develops an application for the various platforms, an app for Android and one for iOS, and the museum also has to promote this app to the visitors, so that they know that when they enter the museum they can, if they want, install the app and live this interactive experience.

On the other side, the museum can either provide visitors with tablets, as we saw in the example where the tablet was given to the visitor by the museum, or follow one of the current trends, called "bring your own device", where the visitor is supposed to already have a smartphone, and this is in most cases a realistic scenario. The requirements on the visitor's smartphone are that it supports Bluetooth Low Energy, which is the case for almost every smartphone sold after 2013, so in the next few years we can consider every new smartphone as Bluetooth Low Energy enabled; that the Bluetooth sensor is switched on; and that the visitor installs and launches the app the museum provided. You just have to open it and leave it in the background, and whenever you approach an active spot the app will give you a notification that you are near a place with information; but the visitor does have to install it.

As I said before, the way this technology works is that these small beacons, the yellow and the blue ones you see here, continuously transmit an identifying number, and the smartphones and tablets in range of the transmitters can estimate, approximately, the distance from the sensor that is transmitting. As you can see in the small image, the phone is currently sensing the blue and the yellow beacon, and so it is able to understand that it is closer to the blue one and about three metres from the yellow one. The maximum range is around 70 metres, but beacons can also work with a less powerful transmission, so that they can only be detected within a few centimetres, or within a couple of metres, and so on. One of the main problems of this technology is that when trying to estimate these distances the signal can be noisy, because the Bluetooth wireless signal can be absorbed by liquids, such as the human body: if there is a person between the phone and the beacon the signal is weakened, and environmental elements like walls and columns can also reduce the strength of the signal, which causes problems when trying to localise. So one of the best things we can do with this technology is to understand when the user is next to a point of interest; it is much harder to understand exactly, with high accuracy, where the user is, because that may vary depending on the noise.
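The distance estimation mentioned above is usually derived from the received signal strength (RSSI) with a log-distance path-loss model. The sketch below shows that formula plus a simple exponential smoothing step to damp the noise; the calibration values are typical defaults, not figures from the projects discussed here.

```python
def estimate_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exponent=2.0):
    """Rough distance in metres from a beacon's received signal strength.
    rssi_at_1m is the calibrated power the beacon advertises; the path-loss
    exponent is ~2 in free space and higher indoors (walls, bodies)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

def smooth(previous, sample, alpha=0.2):
    """Exponential moving average: BLE readings jump around, so damp them."""
    return sample if previous is None else (1 - alpha) * previous + alpha * sample

# Example: a reading of -75 dBm corresponds to roughly 6 metres with these defaults.
print(round(estimate_distance(-75), 1))
```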
The next technology we will introduce is what we call physical interaction; I will explain some of it and then Andrea will continue on this trend. By physical interaction we mean that the user can interact physically with digital content, and the most famous example of this metaphor is the multi-touch surface. The example I have selected here is from the Smithsonian National Museum, during an exhibition about the city of Cusco in Peru. They developed a multi-user interactive application presented on a big screen, 82 inches wide, so more than one user can interact at the same time, and they can explore the city of Cusco even from the Smithsonian Museum: they can navigate through 3D models and street views, activate interviews, and study architectural elements that have been reconstructed in 3D by touching them, so interacting directly with the content.

Another example I selected for you is the Auckland War Memorial Museum, where the multi-touch experience we are mainly used to seeing in museums is enhanced with the use of interactive objects. The coloured objects you see on the surface can be manipulated by the visitors; in this case it is a game for kids to learn how to find, let's say, treasures, but mainly to explore and find historical items in the territory. Here the multi-touch experience is enriched with physical objects, which can be used directly on the interactive surface to collect digital elements, and this is of course really entertaining, especially in applications for kids, who can actually handle physical objects.

The interactive tabletop is a technology that is quite common to see in museums. The first prototypes were introduced already in the 80s, mainly in research centres, and now we are all used to multi-touch technology since the diffusion of smartphones. Basically, these are screens like normal displays, but they have means to understand where the user is touching and to track the movement of the user's fingers on the surface, and through this they can offer the user gestures such as panning across a big surface or zooming, to enlarge a picture, go into depth or get a broader view of multimedia content. The opportunity this technology offers is that interacting directly with your hands with digital content is much more natural than interacting with keyboard and mouse, as we are used to doing with our laptops, because we can directly touch the content we want to activate. Also, the idea of the table is something that already belongs to human society: the table is a common object, an object that calls for collaboration, because it is a place where several people can gather, and this creates new opportunities to build digital applications, to experiment with the interaction between people and to provide them with information.

Since the multi-touch table is a physical object, when a museum wants to adopt one of these tables there are also some physical aspects to take care of. One of these is called affordance, the property of an object that defines its possible uses. In the case of a table, the way we set up the table also defines the way people will use it: as you see on the left, if the table is placed next to a wall, then the active user will just stand on the long side and all the other visitors will stay at the sides, but if you put the table in the middle of the room, then visitors will approach from every side, and this can enhance interactivity. Another option is to put the surface in a vertical position, in which case it is called an interactive wall, so that users interact with it facing it.
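As an illustration of how the pan and zoom gestures mentioned above are typically computed from raw touch points, here is a small, self-contained sketch; it is generic gesture math, not code from any of the installations described.

```python
import math

def pinch_state(p1, p2):
    """Distance between two touch points and their midpoint, in pixels."""
    distance = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    midpoint = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    return distance, midpoint

def apply_pinch(scale, offset, touches_before, touches_after):
    """Update the view's zoom factor and pan offset between two touch frames."""
    d0, c0 = pinch_state(*touches_before)
    d1, c1 = pinch_state(*touches_after)
    new_scale = scale * (d1 / d0)                      # spreading the fingers zooms in
    new_offset = (offset[0] + (c1[0] - c0[0]),          # moving both fingers pans the view
                  offset[1] + (c1[1] - c0[1]))
    return new_scale, new_offset

# Example: fingers spread from 100 px apart to 150 px apart -> 1.5x zoom.
print(apply_pinch(1.0, (0, 0), ((0, 0), (100, 0)), ((0, 0), (150, 0))))
```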
Just to mention a few problems this technology introduces: one is the cost, because the bigger these displays are, the more they cost; for a medium to large display we are around 5,000 euros. Also, this technology is mainly used standing, so people suffer from physical fatigue after a long session; these installations are usually not meant for long content, long texts or long videos. So I think I have finished my part; I thank you and I will be available for questions.

Hi all, I'm Andrea Ferracani, researcher at MICC. Daniele has already introduced the subject, so I will talk about two widespread technologies used for building smart and engaging applications, also in cultural heritage: the Microsoft Kinect, which is a device able to capture the movement of users, and the Oculus Rift, which is an immersive display.

First of all, let's talk about the Microsoft Kinect, and I'm going to show you a video of an application realised with it. This application was developed in 2013 for the Night of Museums at the Hungarian National Gallery, and the installation was built on the Google Art Project gigapixel images, a project by Google aimed at providing very, very large images of artworks; imagine that these images are around a gigapixel in size. Let me show you this application, which uses the Microsoft Kinect. As you can see, there is a user who can zoom on the interface by moving in the space, stepping forward, backward and also to the side: stepping forward you zoom into the image, stepping backward you zoom out, and stepping aside you change the artwork. The application is a demonstration that applications can use the physical position and the physical interaction of users, and you can also point with your hand and decide which part of the artwork you want to see.

So I have given you an idea of what you can do with the Microsoft Kinect, but what is it exactly? It is a device capable of detecting the movements of users and also of tracking all their skeleton joints, so you can control an interface without using additional controllers like gamepads or remotes, if you are used to gaming consoles. The device was created in 2009 by Microsoft and integrated into the Xbox console for gaming, but since 2012 Microsoft has also distributed a development toolkit, so you can use the Kinect to build your own applications. There are two versions of the Kinect, version one and version two, which have different capabilities and different resolutions; obviously the latest release has a better resolution. What data can the Kinect capture? The first image you see in this slide shows what the Kinect sees using the RGB camera, so essentially a representation of the world like a picture you could have taken with your phone camera. The second one is a view from the depth camera: the Kinect generates a depth map of the world, where each point of this grayscale map represents the distance from the sensor. If you go back to the previous slide and look at the bottom right image, version two of the Kinect, you can see that the Kinect has an RGB camera, from which the first image was taken, and a 3D depth sensor, which gives this grayscale representation of the distance of all the objects with respect to the sensor.
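A minimal sketch of the stepping interaction described above, assuming a hypothetical skeleton dictionary with joint positions in metres in the Kinect's coordinate frame (x to the side, z away from the sensor); the thresholds are invented for illustration, not taken from the Hungarian National Gallery installation.

```python
ZOOM_IN_NEARER_THAN_M = 1.5    # stepping closer than this zooms into the gigapixel image
ZOOM_OUT_FARTHER_THAN_M = 3.0  # stepping back beyond this zooms out
STEP_ASIDE_M = 0.6             # lateral offset that switches to another artwork

def interaction_from_skeleton(skeleton):
    """Map the tracked body position to the zoom/switch commands of the installation.
    `skeleton` is a hypothetical dict of joint name -> (x, y, z) in metres."""
    x, _, z = skeleton["spine_base"]
    if z < ZOOM_IN_NEARER_THAN_M:
        zoom = "zoom_in"
    elif z > ZOOM_OUT_FARTHER_THAN_M:
        zoom = "zoom_out"
    else:
        zoom = "hold"
    if x < -STEP_ASIDE_M:
        switch = "previous_artwork"
    elif x > STEP_ASIDE_M:
        switch = "next_artwork"
    else:
        switch = None
    return zoom, switch

# Example: a visitor standing 1.2 m away and 0.8 m to the right.
print(interaction_from_skeleton({"spine_base": (0.8, 0.0, 1.2)}))
```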
How does this technology work? The Kinect has an infrared emitter and an infrared camera: it casts a pattern of infrared points onto the objects in front of the device and, from the deformation of these points on the objects, it is able to understand the distance of each point from the sensor. To give you an idea of what the Kinect sees of the world, this is the position of the sensors, and as you can see the Kinect floods the whole room with points of infrared light; from the deformation of these points it understands how far away each object is. So it is a cool technology, and it can also extract joint information. On the lower right side of the slide you can see the Vitruvian Man, the famous drawing by Leonardo from around 1490: the Kinect can track up to 25 body joints, so for each user you can understand where he is moving his right hand, his left hand, his left foot and so on, and it can track up to six skeletons, so six persons can move in front of the Kinect and it can understand the position of all the joints of each of them and use them to control applications.

This is a table that compares the capabilities of the two versions of the Kinect, and I will list them. Both versions can detect up to six persons, so the device understands that there are six people in front of it. Skeleton tracking, that is the position of all the joints of the users, works for up to two people in version 1 and up to six in version 2, with 20 joints tracked in version 1 and 25 in version 2. Both versions can understand whether you close or open your hands in front of the device; version 2 can also understand body tilt, which you can use for gaming or fun applications, and it can identify gestures, for example if you make a swipe with your hands. Face recognition is available in both versions, and version 2 can also understand emotions, whether you are laughing or angry and so on. Speech recognition is also supported, so the development kit can understand what you are saying and you can build interfaces that react to speech.

What are the opportunities of using the Microsoft Kinect for building applications? It is very useful when you need to detect people's presence: imagine that you want to start a video, launch an application or switch on the lights when someone in the museum enters a room or passes by. It is suitable for natural interaction applications, applications that do not require the users to handle or control the interface with additional devices like consoles or special controllers, only with their hands and their movements, and it is very suitable for interactive installations that provide gaming capabilities or exploit the user's physicality, where you can move and act as in the real world.
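For the presence-detection opportunity mentioned above (starting a video when someone enters the room), a minimal sketch could look like the following; the list of tracked bodies and the player object are hypothetical stand-ins for whatever the Kinect SDK and media framework in use actually provide.

```python
import time

class PresenceTrigger:
    """Play content while at least one visitor is tracked; stop a few seconds
    after the last one leaves, so brief tracking drop-outs do not cause flicker."""

    def __init__(self, player, linger_seconds=5.0):
        self.player = player              # hypothetical media player with play()/pause()/is_playing
        self.linger = linger_seconds
        self.last_seen = 0.0

    def update(self, tracked_bodies):
        """Call on every Kinect frame with the list of currently tracked skeletons (at most six)."""
        now = time.time()
        if tracked_bodies:
            self.last_seen = now
            if not self.player.is_playing:
                self.player.play()
        elif self.player.is_playing and now - self.last_seen > self.linger:
            self.player.pause()
```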
What are the limits of these kinds of applications? The Kinect has a limited detection range: it can see people only within a certain range, from 0.5 metres to 4.5 metres in depth, so it is not suited for very large spaces and you have to dedicate a special room to this kind of application. I have already said that it supports up to six persons, so for example you cannot build a football game, because you would need twelve players. Another strong limit is fatigue due to prolonged use: you have to stand up and interact with your gestures, so you get tired and cannot keep interacting with this kind of application for a long time. It does not work well in bright rooms, because, as I said before, it works by calculating the distance of objects using infrared light, so if you have a room with large windows where sunlight filters in, it cannot work well. It is also sensitive to noise and occlusion, so you have to provide a dedicated room, because if a user is interacting with the interface and someone passes by and occludes the user, the application cannot react correctly. How difficult is it to use the Microsoft Kinect? It is very easy if you already have an application: you only have to plug the device into your laptop or personal computer and install some software. But the application itself has to be developed by a developer.

Okay, now I'll talk about the Oculus Rift, the other device, which allows you to build immersive environments. Here is an example from a European TV channel where you can see a user with a display on his head playing with the Oculus; as you can see, he can move inside a reconstruction of a museum, and here you have the stereoscopic view. As you can see, you can enter the room and have a real experience inside the environment. The Oculus Rift is essentially a display that can simulate the immersion of the user in an environment. It was created by Palmer Luckey, a self-taught engineer passionate about electronics, and it was funded on Kickstarter in 2012.

What are the opportunities of this kind of technology? You can experience virtual places in first person, and these places can be inaccessible in real life for several reasons: for example, imagine that you want to experience the Mosul Museum in Iraq, which was destroyed last year by ISIS; you can do this only with these kinds of technologies. These technologies also give you the capability to interact with the objects you see, which is not always allowed in a museum, for example because the objects are in glass cases; in a virtual world you can touch the objects and get insights about the objects themselves. And you can also make pre-visits, to arrive prepared for an experience: imagine you go to the Louvre, where there are a lot of artworks you know nothing about; if you have pre-trained your ability to recognise them, or you already have information about the artworks, your experience can be more engaging.

One of the limits of this technology is so-called virtual reality sickness, a feeling of dizziness and nausea that a user can feel when using the Rift, due to the fact that what your eyes see is not in sync with what your body feels. This issue has been mitigated in the more recent versions of the Oculus Rift through head tracking: there is a camera you can put somewhere in the room that tries to follow the movement of your head and body. Another issue with the Oculus is that it does not have a very good resolution, but this has been improved in the commercial version, which is the latest one. As you can see, there are three versions of the Oculus: Development Kit 1, Development Kit 2 and the last one, the commercial version.
In the commercial version they have also introduced a controller, through which the system understands the position of your hands, so you can also grab objects and move them in the space. The Oculus is very easy to install: you just plug it in via USB and install some software, but to build the application you need a software developer.

Let me show you a demo of the Oculus. This demo is about the Museum of Modern Art in New York, and as you can see you can do cool things. This is a famous painting by Salvador Dalí; you can move around the halls of the museum, but at a certain point in this application you can also enter the painting, which Dalí painted in 1931, and get information about all the objects and the meaning of the painting and of the objects in it, the meaning of the melted clocks, the ants and so on. At a certain point there is also a button with which you can disintegrate the painting. This seems very strange, and someone could think you are violating the artwork, but, as you can see, there is a fish that passes by, the ants go away and so on, and if you know the story of this artwork it is actually a demonstration very close to reality, because after the Second World War Salvador Dalí painted another painting, called The Disintegration of the Persistence of Memory, in which he disintegrated the whole painting, because he was passionate about nuclear physics and was very impressed by the explosion of the atomic bombs in Hiroshima and Nagasaki.

I close with a little note about another technology, the Leap Motion, which is very similar to the Kinect I showed you but is dedicated to tracking the joints of the hands: not only do you have the joints of the body, but also all the joints of the hands. And this is an experiment, the last video before I close, which uses all these technologies together: you have the Oculus Rift, an immersive reality display, in a museum, you also have tracking by a Kinect sensor, and a Leap Motion to track the joints of the hands is mounted on top of the Oculus, the display you wear on your head. This is an experiment we carried out at our centre to understand the best way to move naturally in a virtual environment, in particular a museum environment. You can use your hand to give a direction, pointing your finger in the direction you want to go, and stop by opening your hand; or you can move your hand like a lever to control your movement in the space, and point at objects to get insights; you can also simulate walking by swinging your arms and moving inside the museum, or use a widespread metaphor, walking in place, to move in the environment. The aim of this demonstration and of this experiment is to understand which is the best way to let a user move in an immersive environment. Okay, I have finished, thank you.
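As a rough sketch of the point-to-move technique described above, the snippet below turns a tracked hand into a movement vector for the virtual camera; the hand data structure is a hypothetical simplification of what a Leap Motion-style tracker provides, and the speed is invented.

```python
import math

WALK_SPEED_M_PER_S = 1.2       # illustrative walking speed in the virtual museum

def movement_from_hand(hand, dt):
    """Return the (dx, dz) displacement for this frame from a tracked hand.
    `hand` is a hypothetical dict with 'is_open' (bool) and 'pointing_direction',
    a unit vector (x, y, z) in the headset's coordinate frame."""
    if hand["is_open"]:                        # an open palm is the 'stop' gesture
        return (0.0, 0.0)
    dx, _, dz = hand["pointing_direction"]
    norm = math.hypot(dx, dz) or 1.0           # ignore the vertical component
    return (dx / norm * WALK_SPEED_M_PER_S * dt,
            dz / norm * WALK_SPEED_M_PER_S * dt)

# Example: pointing slightly left and forward for one 60 Hz frame.
print(movement_from_hand({"is_open": False,
                          "pointing_direction": (-0.3, 0.1, 0.95)}, dt=1 / 60))
```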
It's time for questions. I saw a few questions regarding my talk and I want to answer at least some of them; let me find them. First of all there is a question, a comment I would say, by Rimmer Knott that I want to answer. The comment is, I suppose, about the prototype system we installed in the Bargello Museum that observes the visitors trying to understand their interests, and the question is: "A nightmare of the surveillance state: is the museum really interested in me?" Well, it looks like a nightmare, but indeed it is not, for a lot of reasons. First of all, the system is very respectful of privacy, because it does not store any image of the faces of the people, or of the people as a whole; it only translates the image of a person into a long string of numbers that is used internally by the system to make calculations and evaluations. The picture of the person is not stored and is not recoverable in any way, so it is very privacy-respectful in this sense, and the direction of the museum was very careful about this topic. But the question "is the museum really interested in me?" is, I believe, very interesting, because it is not only the museum's viewpoint that matters here. Of course the museum can get information about what the majority of visitors are interested in, for example, but the real point is the visitor's viewpoint: thanks to this technology, the system can understand the individual interests of the visitors and provide each visitor with the information he or she is really interested in. So probably a more appropriate question is: is the visitor really interested in having this in-depth information about the artworks he visited? That is probably the right viewpoint, and at least that was the goal of the application. A preliminary answer we have is that not all the visitors in the museum are interested in receiving this information: over a six-month period we measured an average rate of 35 percent of people who went to the table to receive the more in-depth information about the artworks they were interested in. How to make people more interested in receiving more information is a question we do not know how to answer; that is a question to put to the museum side.

There was also a question by Ryan Fadkul, and I apologise for the wrong pronunciation: are the artworks geolocalised in advance, or are they recognised visually by the device? I suppose this question regards the mobile wearable audio-visual guide application. Artworks are not geolocalised; they are recognised by the vision system on the wearable device, and that is really the technological part of the research.

Marty Lot asked why not use iBeacon technology. I think Daniele answered very thoroughly about the limitations of iBeacon technology: when the environment is crowded the beacons do not work well, they do not work if there are occlusions, and you have to carry a Bluetooth-enabled device, otherwise you do not receive information; while these applications simply leave the user free to move around the room without doing anything with the device except switching it on, and in our case you receive the information directly as audio from the system.

Then there is another question that is worth answering: isn't the original placement of the artworks interfering with how much popularity they receive from visitors? Well, this is a good question, and of course the answer is yes. She also asked: have you tried to switch their places? Unfortunately these are marble statues, very heavy, so they cannot be moved easily, at least not for a test; but of course the results from the system provide useful information to the museum curators to understand whether an artwork is in the right position or not, and if they want it to be more visited they have a hint to move it elsewhere.
Of course, in places where this is easier than in the Bargello hall it is very simple to move artworks from one position to another, so this is also useful information that can be obtained from these technologies. Okay, I believe these are the questions I wanted to answer; there is another one, yes. Huthana asked how many people the system can detect at one time, and whether we can know the cost of the system. I suppose you refer to the observation system with fixed cameras: the system has no limitation on the number of detections. Of course there can be missed detections, very distant objects or persons are usually not detected, but the miss rate is anyway low; I provided some figures in the slides, and the slides will be available to the people attending the webinar, by the way. So the answer is that there is no limitation on that, and also for the wearable system there is no limitation: any person in the view of the camera is detected, if present. About the cost: as I told you in my presentation, this system is just a research prototype installed for testing, so it is not on the market, and frankly we do not believe it will be on the market as such; these systems have to be personalised to the context of application.

Okay, I also saw some questions about the topics I covered, starting from the iBeacons. I saw several comments pointing out that the technology can be extremely unreliable, especially when there are many objects close together. That is true: in a context like the one we saw in the example, if in a room I have many, many paintings, many objects of interest, so the distribution is dense and there are many points I want to detect the user's proximity to, that could be problematic. These iBeacon transmitters can be configured to have a range of, say, a couple of metres, and it can also be reduced to a few centimetres; of course, the more we reduce the transmission power, the closer the user has to go in order to actually trigger the information. So the problem is always to find a good balance between the density of the spots we want to turn into interactive spots and the number and power of the beacons we put in place: basically it has to be a compromise between how many there are and how dense they are.

I also saw a comment from Arian about interactive tables. The ones I talked about are interactive tables that can be purchased and work directly, so the metaphor of the table is the one that has to be used: these devices are big screens, like the ones you can use for television, but augmented with touch and multi-touch capabilities, so you can use them either as a table or as a wall. You were asking about surfaces on the ground: you can indeed make a pavement interactive. There are many installations of interactive pavements where a projection reacts to the people passing over the surface. That can be done; basically we can turn every surface into an interactive surface, but in that case we have to use cameras, either something like the Microsoft Kinect, as we saw, or other kinds of cameras that can track people's movement and change the projection and the environment accordingly. We talked about the tables because these are the ones that can simply be purchased and used as they are.
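For the camera-plus-projection setups just mentioned, the key technical step is mapping positions seen by the camera onto the projected image. Here is a minimal sketch with OpenCV, assuming four corner correspondences measured once during calibration (the pixel values below are invented):

```python
import cv2
import numpy as np

# Where the four corners of the projected area appear in the camera image (pixels)...
camera_corners = np.float32([[102, 84], [530, 90], [522, 470], [95, 462]])
# ...and the corresponding corners in projector coordinates (pixels).
projector_corners = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

# Perspective transform (homography) from camera space to projector space.
H = cv2.getPerspectiveTransform(camera_corners, projector_corners)

def to_projector(camera_point):
    """Map a tracked person's position from camera pixels to projector pixels,
    so the projected graphics can react exactly where the person is standing."""
    src = np.float32([[camera_point]])              # shape (1, 1, 2), as OpenCV expects
    dst = cv2.perspectiveTransform(src, H)
    return float(dst[0, 0, 0]), float(dst[0, 0, 1])

print(to_projector((300, 250)))
```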
Of course, it is really interesting to note that tables are not the best or the only metaphor for an object people gather around; I would use the metaphor the other way around: if it is a table, the table is also a place where people gather, but that does not mean that other metaphors cannot be used to gather people next to an interactive system. So I think I have covered the comments on my side.

I want to answer Big Johnson, and I hope I pronounce the name correctly. His question is: why would the visitor come to a museum to use the immersive natural interface MICC project? So, regarding the interface I showed, with the combination of Kinect, Oculus Rift and Leap Motion: there is no reason why a user should come to the museum to use this virtual installation, except for the fact that it is always better to see objects in real life, but in some cases that is not possible. For example, for destroyed, reconstructed or inaccessible sites, or sites that are even only temporarily closed, the museum can in any case offer an experience of the artwork or the environment. You could also use augmented reality, using mobile apps with similar capabilities to augment reality, but in that case you have to be in the place and see things through your camera, and in some cases this is not possible; and a system of this kind can also be useful for pre-visits, as I said during the presentation: when you want to go and enjoy the real thing first-hand, you can have a preview of what you will see and arrive prepared for the experience. Thank you.

There is a question by Maya Minoska: maybe all these technologies could be used more as a means of increasing accessibility for disabled visitors and enhancing their experience? Yes, I believe this is a very important point: one important application of these technologies is to improve the visitor's experience, and this improvement has particular value for disabled people. For example, you can imagine the importance of the so-called intelligent audio guide for a disabled person who is not able to use his or her hands properly: the system can help to improve their experience in a museum and make the museum accessible to everybody. So yes, the answer is yes.

Hi all, I'm happy to be here and to talk about new and innovative technologies inside museums. I think an important point is to know the interests of the users, and my colleagues have explained everything well. Now I can close the webinar, and I hope you can also watch the recorded video on the NEMO website, where there will also be some statistics about the webinar. So, see you next time, I hope that is okay for you, see you at the next webinar, and feel free to send us questions if you have any. Thank you very much.