Hello, everyone. I'm Stuart McErlain-Naylor from the University of Suffolk, and welcome back once again to the Sports Biomechanics lecture series, as always supported by the International Society of Biomechanics in Sports and kindly sponsored by Vicon. In recent weeks we've had some excellent talks on theoretical biomechanics and also applied biomechanics in a range of different sports, and there's more to come in both of those areas. We've got talks scheduled over the next couple of months, due to be announced at the end of this lecture, but today we're bringing something a little bit different, focusing more on the data collection and data analysis side of sports biomechanics. With that in mind, we've got a talk from Vicon themselves, so I'm going to hand over to Jacques Gay from Vicon, who will introduce himself and tell you a little bit more about what's to come.

Afternoon, all. My name is Jacques and I'm a senior life science application engineer at Vicon. I've been with Vicon for just over 13 years now, and part of my role involves traveling the world, installing Vicon systems and training customers like yourselves on how to use them. Before joining Vicon, I was at the University of Cape Town doing research in cricket biomechanics, and I was also managing the biomechanics laboratory on the side. Today I'm going to introduce the first of two online lectures in which we will introduce you to the motion capture pipeline, commonly referred to as the mocap pipeline. These lectures are aimed at an audience that perhaps hasn't spent any time within an optical mocap lab, so we hope we can provide you with an insight into what it is and how you can integrate this pipeline into your research projects. Before we get started, I would like to thank Stuart for organizing this lecture series, and also congratulate the previous presenters on delivering some great lectures. So let's get started.
Our aim today is to give an introductory overview of what an optical motion capture pipeline looks like, splitting it into four sections. Today we will cover sections one and two. In a few weeks' time we will be looking into sections three and four with a live session, and we will also be looking beyond the lab. There are many elements to consider within an optical mocap system. Hopefully you all saw this banner going around Twitter; well, here it is again. I know there's a lot of information on it, but this banner depicts the full life science motion capture pipeline. We will guide you through different aspects of the pipeline throughout this presentation, and especially in the next live session.

Let's start, then, by looking at section one, the hardware and software solution. In this section we will briefly touch on the history behind Vicon and how Vicon has been involved in the world of biomechanics. We will look at Vicon hardware and software solutions and how they came to be described as the gold standard of motion capture by our users and the market. Vicon is a global company serving the community. Today Vicon has four offices globally, in Oxford, Denver, Los Angeles and Auckland, housing over 120 staff, supported by 55 distributors. We have over 5,000 systems live and in use right now. In the late 70s, researchers from around the world collaborated on different methods of tracking movement and generating 3D point clouds from that movement. Following on from this research, Dr Julian Morris founded Oxford Metrics, selling Vicon systems from 1984. Using analog video cameras to create 3D reconstructions, Dr Morris developed the first optical mocap system.
The optical mocap system was needed in the clinical gait setting: to capture clinical gait data, to deliver qualitative and quantitative results, to track how a patient progresses through treatment, to build a patient-specific database and compare it against a normative database, and thereby to have increased confidence in the potential outcomes of physiotherapy or surgical interventions. This is what the first Vicon system was built for, and what has been sold as an optical mocap system since 1984. To close off the history section, Vicon is now the company name, with Oxford Metrics as the holding company, which is listed on the AIM stock market.

So here is that banner again, and we are now focusing on the hardware. With this history, experience and pedigree, Vicon is known as the gold standard, and we have continued to develop mocap for over 35 years. Let's look at the range of cameras that has been developed over time, leading to today's line-up: from analog cameras on the far left to the digital high-speed and high-resolution optical cameras of today. Cameras are reviewed against specifications such as pixel count, frame rate and field of view, which determine what size of area you can cover, what size of markers you can track during the movement, and at what speed you can capture your data. Vicon has various models of optical mocap camera available today, and they can be broken down into two distinct families: the Vantage family, containing the market-leading 16-megapixel camera, and the Vero family, small and compact but just as powerful. Here you can see that, beyond the normal specs, Vicon cameras offer a range of smarts, such as onboard inertial sensors for detecting if a camera has been bumped, and thermal sensors for ensuring a camera's calibration isn't unduly affected by ambient temperature. This table outlines the details of each Vicon camera available: on the left we have the Vantage family, and on the right the Vero family.
Cameras can be mixed and used in different applications and environments, be it an outdoor setup, a standard lab setting, or a custom setup targeting a specific application. As we said, cameras are picked based on your application and your capture environment of choice. We have briefly discussed Vicon camera hardware; now let's take a look at the capture software options provided by Vicon. Cameras are what most people see as the face of optical motion capture solutions; however, the software behind them is arguably the most crucial part. Vicon solutions operate across what we categorize as four distinct markets, shown on this image, and Vicon is unique in the industry in providing a software platform per market. Most of today's audience will be interested in the life science area, which is catered for by Vicon Nexus. The other markets have their own platforms as well, namely Shogun for entertainment, Tracker for engineering, and Evoke for location-based VR. Please do have a look at them on our website for further details; however, we are going to focus on the Nexus platform today and in our next live session. Vicon Nexus was first introduced in 2006, and we are currently on version 2.10. We have nearly 10,000 users of this platform worldwide, and we take feedback from our customers to extend its functionality and refine its capabilities. "Nexus" means a connection or series of connections linking two or more things, or a central focal point. We will see this platform in action during the live mocap session. The Nexus software platform has been designed with the following topics in mind: the output; the researcher; the integration options, be they digital or analog; the modeling options, which we will be looking at in the next section; customizable reporting options, either via Polygon, as we have seen in earlier presentations in the series, or via quick reporting within Nexus; and finally a simple, quick processing option using industry-leading algorithms.
So, to summarize what we have covered so far: Vicon has 35-plus years of experience in the life science market; we make market-leading cameras and software specific to the market's requirements; and we provide the full pipeline capability and the most flexible system available on the market today.

In the second part of this presentation we will be looking at a few modeling options available within biomechanics. We will be focusing on what a model is, why a biomechanical model is needed, planning a model, modeling platforms, what models are available, and a comparison of two methods of modeling. Firstly, what is a model? A model is a mathematical relation that allows us to predict the behavior of the system being modeled, given some known inputs and parameters. Why do we need a biomechanical model? I think the following quote from Dr Scott Delp at Stanford University perfectly sums up the need for one: a biomechanical model allows us to extract mechanistic insights from messy and heterogeneous movement data, and to make predictions in domains where data are unavailable, for example predicting the response to a future surgery. This same principle can be applied to sports biomechanics, be it in technique improvement or injury prevention. Now that we know why we need a biomechanical model, let's look at how we can apply modeling to human motion. First, we have our markers, our trajectories, our input; this information is provided by your motion capture system. Secondly, we have any inertial parameters, for example anthropometric data. Both of these inputs are applied to our biomechanical model. In this example, the output is the compression in the spine. In the next section, we'll look at how to plan your model. Here we have a golfer who is interested in improving his driving distance, so we can ask the question: what is the purpose of the model?
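To make the inputs-and-parameters idea concrete, here is a deliberately minimal sketch of a "model" in Python: three marker positions go in, one kinematic output (an included joint angle) comes out. The marker names and coordinates are purely illustrative and not taken from any particular Vicon model.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Included angle (degrees) at `joint` between the two segments
    joint->proximal and joint->distal."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical hip, knee and ankle marker positions (metres) at one frame:
hip, knee, ankle = [0.0, 0.0, 0.9], [0.05, 0.0, 0.5], [0.0, 0.0, 0.1]
print(round(joint_angle(hip, knee, ankle), 1))  # included knee angle
```

In a real pipeline the inputs would be full trajectories from the capture system, and the model would add inertial parameters; the shape of the computation, however, is the same.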
The purpose of this model would be to increase driving distance, of course. One thing that you could look at would be the X factor; if you want to know more about the X factor, have a look at presentation three in the series, by Stuart, titled Cricket Batting Biomechanics. The next few questions you will need to ask are: which segments do I need to model? What kinematic parameters do I need to calculate in the model? And what kinetic parameters do I need to calculate in my model? We will then draw a free body diagram, describe the rigid segments, describe the segment origins and orientations, and finally define our marker placement. Once the research question has been defined, the next step is to define a marker set for your model. The following steps describe considerations for creating a marker set. Decide on the number of markers to use, and ensure that markers are placed in a manner which will allow for meaningful anatomical reference frames to describe the motion. Think about degrees of freedom, and remember the following: a minimum of three markers is required to describe a segment with six degrees of freedom. In this example, we have a four-marker cluster on the sacrum, and we can see on the graph the rotations around X, Y and Z together with the translation in X. When we look at a hinge joint or a ball-and-socket joint, we require three markers on each of the segments surrounding the joint to describe the connection. The next thing to bear in mind is to avoid landmarks with excessive skin motion; in this video clip, we can see the thigh marker moving as the foot is stamped. Aim for easily palpable landmarks; in this example, the ankle marker is placed on the lateral malleolus, at the distal end of the fibula on the lateral side of the ankle. Make sure that marker positions are easily repeatable. You will also need to think about anthropometric data, and avoid collinearity, as you will need to define a plane when creating a segment.
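The points above — three non-collinear markers per segment, defining a plane — can be sketched as the construction of an orthonormal segment coordinate frame. This is a generic textbook-style construction under assumed marker roles, not the exact segment definition of any particular Vicon model.

```python
import numpy as np

def segment_frame(origin_m, long_axis_m, plane_m):
    """Right-handed orthonormal frame (columns x, y, z) from three
    non-collinear markers. Collinear markers leave the plane undefined,
    which is why collinearity must be avoided."""
    o = np.asarray(origin_m, float)
    z = np.asarray(long_axis_m, float) - o      # longitudinal axis
    z /= np.linalg.norm(z)
    w = np.asarray(plane_m, float) - o          # third marker fixes the plane
    x = np.cross(w, z)
    x /= np.linalg.norm(x)                      # normal to the marker plane
    y = np.cross(z, x)                          # completes the right-handed set
    return np.column_stack([x, y, z])

# Hypothetical marker positions for one segment:
R = segment_frame([0, 0, 0], [0, 0, 1], [1, 0, 0])
print(np.allclose(R.T @ R, np.eye(3)))  # orthonormality check
```

Tracking this frame over time yields the segment's six degrees of freedom: three rotations (the frame's orientation) and three translations (its origin).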
Now that we have completed the planning of the model, we need to decide on the application we will use to create our biomechanical model. There are various modeling options available within the motion capture community; I'll be looking at a few of the main ones. The first one is MATLAB. MATLAB is a programming platform designed specifically for engineers and scientists. The heart of MATLAB is the MATLAB language, a matrix-based language allowing the most natural expression of computational mathematics. MATLAB is powerful, popular and highly extensible, and it has a large community. It is also important to remember that MATLAB is a script-based language, and there is a reasonable learning curve involved. The next option is Python. Python is a general-purpose, high-level programming language, and you can use it to develop your own biomechanical models; PyCGM2, for example, has been developed and written in Python. Python is powerful and popular, and has a mid-sized community. It is also a script-based language. I'm sure some of you will have seen tweets related to R in the last few days; a nice option with Python is that there's a direct integration between Python and R, so you can run your R code from within Python as well. Our next option is Visual3D, created by C-Motion. Visual3D is a biomechanics analysis toolkit. It is a Microsoft Windows application, and it provides the calculations needed for kinematics and kinetics. It includes the latest mathematical techniques for optimization, signal processing and filtering, as well as inverse kinematics, complex biomechanical modeling, and forces and force structures. Next is D-Flow, by Motek, a visual programming tool for interactive and immersive application development. D-Flow powers the display of virtual scenes alongside your treadmill and motion platforms to enable rapid prototyping for your research.
Additional possibilities are created using its integrated scripting for sophisticated application development. One nice feature of D-Flow is that it also includes the Human Body Model as part of the package. Our next option is LabVIEW, by National Instruments. LabVIEW offers a graphical programming approach that helps you visualize every aspect of your application or model. It is powerful, has a mid-sized community, and has a reasonable learning curve. Next on our list is The MotionMonitor. The MotionMonitor provides real-time solutions to a wide range of applications that involve the study of human motion, and was designed to analyze all aspects of it. In sports biomechanics, The MotionMonitor will both monitor and enhance performance: it can track and animate sports objects for golf, batting, pitching, tennis, bowling and biking, and it also provides biofeedback for exercise to enhance visual responses, eye-hand coordination and movement patterns, using audio feedback. Our next option is OpenSim. OpenSim provides easy-to-use, extensible software for modeling, simulating, controlling and analyzing neuromuscular systems. OpenSim is freely available; it's a user-extensible software system that allows users to develop models of musculoskeletal structures and create dynamic simulations of movement. And finally, we have Vicon ProCalc. ProCalc is a visual application for creating custom kinematic models, variables and event calculations using a simple wizard-based system. You can load your C3D files in, create new calculations, and visualize the outputs in an integrated 3D workspace. Variables and events calculated in ProCalc can be written back into the C3D file, and they can also be exported directly to Excel. All of the modeling options mentioned have a form of integration available within Nexus, be it a native integration, like the two scripting languages Python and MATLAB, or via a pipeline operation and/or the Vicon DataStream SDK.
Now that we have looked at the different modeling applications available to us, here's a list of a few of the off-the-shelf models you can use within Vicon. It is important to note that certain models can be applied to different activities, and you can also find a library of available models on the Vicon website. We are going to look at two different approaches to calculating the hip joint center. The first is the regression method, which is used to estimate the location of a virtual joint center point. The second is the functional method, which uses multiple frames of data in which the joint of interest is moving. First, the regression method. The method we will be looking at here is the Davis method, which has been implemented in the conventional gait model. The hip joint center links the pelvis and femur, and can be considered as fixed in relation to both; thus, if we know where the pelvis is, we can estimate where the hip joint center is. First, we have the placement of the markers to define the pelvis segment. Then, from radiographic studies of healthy adults, Davis was able to estimate how far out, back and down the hip joint center lies from the midpoint of the ASIS-to-ASIS line. The posterior, lateral and distal distances are calculated on the basis of the ASIS-to-ASIS distance. Here we have the equation as published by Davis in 1991. The first functional example that we are going to look at is the Symmetrical Center of Rotation Estimation, or SCoRE, method. This is an optimization algorithm that uses functional calibration frames between a parent and a child segment to estimate the center point of rotation. In this video clip, we can see a subject performing the star-arc movement. The method is particularly valuable in providing repeatable and accurate hip joint center locations. Note that SCoRE locates the joint center only. The second example of a functional method is the Symmetrical Axis of Rotation Analysis, or SARA.
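Before looking at SARA in detail, the principle behind functional methods such as SCoRE can be illustrated with a toy example: if the parent segment is held fixed, a marker on the child segment ideally traces a sphere about the joint center, which a least-squares sphere fit can recover. SCoRE itself solves a richer, symmetric problem in which both segments move and all markers contribute, so this is a sketch of the idea, not the Ehrig et al. algorithm; the joint-center location and marker radius below are invented for the demonstration.

```python
import numpy as np

def fit_center(points):
    """Least-squares sphere fit: ||p - c||^2 = r^2 is linear in (c, k)
    with k = r^2 - ||c||^2, so solve  2 p.c + k = ||p||^2  for c."""
    p = np.asarray(points, float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p * p, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]                            # estimated center of rotation

# Synthetic functional trial: a thigh marker 0.4 m from a hip joint
# center at (0.1, 0.2, 0.9), swept over a wide range of motion.
rng = np.random.default_rng(0)
elev = rng.uniform(0.0, np.pi / 2, 200)       # elevation angles
az = rng.uniform(0.0, 2 * np.pi, 200)         # azimuth angles
pts = np.c_[0.4 * np.sin(elev) * np.cos(az) + 0.1,
            0.4 * np.sin(elev) * np.sin(az) + 0.2,
            0.9 - 0.4 * np.cos(elev)]
print(fit_center(pts).round(3))               # recovers the joint center
```

The wide range of motion matters: the star-arc movement exists precisely to spread the calibration frames over the sphere so the fit is well conditioned.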
SARA uses functional calibration frames between a parent and a child segment to estimate the axis of rotation. It is particularly valuable in providing repeatable and accurate knee joint axes. The SARA method locates only the joint axis, not the joint center. Here we have the functional outputs being displayed in the 3D workspace. The orange box highlights the hip joint center calculated by the SCoRE method; functional hip joint center estimations have been shown to be more repeatable, more accurate, and less susceptible to small marker placement errors. The second output we can see is the axis of rotation applied to the knee; functional joint axis estimations have likewise been shown to be more repeatable, more accurate, and less susceptible to small marker placement errors. The next step would be to apply the functional hip joint center and knee axis of rotation to the rest of your model. If you want more information on the functional methods, please do look at these two papers by Ehrig et al. Now, let's have a quick look at some of the output differences we can see when using functional versus regression methods. Here we have the pelvis, and in the pelvis we can see two markers; these represent the hip joint centers as calculated by the Davis method. Next, we have two new markers, and you can see how they have moved upwards; these represent an RSA, or X-ray, method. Next, we have the Bell method; you can see the markers have moved even further up in relation to the Davis and RSA methods. And then finally, we have a functional method based on a range of motion. Two other methods commonly used to calculate the hip joint center are the Harrington method and the Hara method; they are not displayed in this image. The Hara method is used by CGM2, created by Fabien Leboeuf. And finally, we can see all four hip joint center calculations displayed at the same time.
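The regression family of estimates just described can be sketched as offsets expressed as fractions of the inter-ASIS distance. The fractions below are the commonly cited Bell et al. values (19% posterior, 30% distal, 36% lateral); Davis (1991) uses different coefficients that also involve leg length. For simplicity the pelvis is assumed to be aligned with the lab axes (x forward, y to the subject's left, z up) — a real implementation expresses the offsets in the pelvis anatomical frame — and the marker coordinates are invented for illustration.

```python
import numpy as np

def hip_joint_center(lasis, rasis, side="right"):
    """Regression-style hip joint center from the two ASIS markers,
    using Bell et al. fractions of the inter-ASIS distance.
    Pelvis assumed lab-aligned (a simplifying assumption)."""
    lasis, rasis = np.asarray(lasis, float), np.asarray(rasis, float)
    w = np.linalg.norm(rasis - lasis)        # inter-ASIS distance
    mid = 0.5 * (lasis + rasis)              # midpoint of the ASIS-to-ASIS line
    lateral = -0.36 * w if side == "right" else 0.36 * w  # +y = subject's left
    return mid + np.array([-0.19 * w, lateral, -0.30 * w])

# Hypothetical ASIS marker positions (metres):
print(hip_joint_center([0.0, 0.13, 1.0], [0.0, -0.13, 1.0], "right").round(3))
```

Swapping in a different set of regression coefficients moves the estimated center, which is exactly the offset between methods visible on the slide.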
And you can clearly see that there's a big offset between them. Seeing the hip joint center calculated in different locations should always make you stop and question: am I using the correct modeling method? The next question we will ask is: how will this affect our model outputs? In this comparison, we are comparing the right hip angles and the right knee angles during a walking trial. The orange line represents the functional method, calculating the hip joint center using SCoRE and the knee axis of rotation using SARA. The white line represents the conventional gait model, using a regression equation for the hip joint center and the coronal-plane marker placement for the thigh, which will affect the knee outputs. You can clearly see on these two graphs that there are distinct differences between the two methods.

Let's recap today's session. We looked at the hardware and software used in motion capture. We looked at the range of cameras that has been developed over time, leading to today's range of Vantage and Vero cameras, and at the fact that cameras are picked based on your application and your environment. We looked at the fact that software is arguably the most crucial part of your motion capture system; Vicon Nexus has been developed to streamline data processing down to a single click, with customizable pipelines which include auto-labeling, gap-filling, filtering and modeling tools. We also explored biomechanical modeling: what is a model, why do we need one, how can we create one, and how can different methods affect our model outputs? This brings us to the end of today's session. Please do send any questions to support@vicon.com, and thank you for joining me today.

Thank you very much, Jacques. And a huge thank you also to everybody at Vicon for their support throughout this series, particularly in putting this lecture together, and a special shout-out on that note to Ray at Vicon, whose support throughout this has been fantastic. So, thank you. If anybody has any questions for Jacques, then either send him an email at the address provided during the talk, or leave a comment in the comment section below the video, where you'll also find links to all of the papers mentioned during the lecture, as well as various other papers relevant to motion capture and modeling. That just leaves me to quickly present the coming schedule of talks over the next couple of months. As you can see on screen, we've got one lined up every Thursday, with some fantastic speakers. If you want to stay updated, then please subscribe to the channel, and if you click the bell next to the subscribe button you should get notifications whenever things are updated as well. So thank you very much, and hopefully see you soon.