All right, before I get going: who here has heard of SaltStack before? Okay, most. Who here has deployed Salt before? So no one in here is managing a production deployment of Salt? I know, I know, you're involved in one. Okay, all right.

Now, the primary point of this presentation is to go over how a lot of these large web-scale companies are using Salt, to talk specifically about a number of the points that are in there, how they tie into a number of other technologies that are out there, and how they come together to create complete infrastructures.

Now, the problem here that we face, and this is being cut off on the left, one of the problems that we face in modern infrastructures is the fact that we've been blasting around these *aaS phrases like mad over the last few years, and the problem is that we need to move well beyond what they offer. What we need to do is start talking about, or talk more about, what goes on top of those platforms, not just how to build them. And so a lot of what I'm going to be going over today has to do with a number of different infrastructures, how they work, and, well, how they're built. Because what it boils down to is that there are a lot of infrastructures out there that are built on Infrastructure-as-a-Service clouds or Platform-as-a-Service clouds, but there are also a number of hardware infrastructures which are very web-scale, or whatever you want to call it.

Now, I'm going to go over just a few examples in here that have to do with a couple of cloud-like companies. The first one has to do with how Wikimedia uses Salt and a number of other tools for continuous deployment of code, the continuous deployment of Wikipedia itself. Then I'm going to talk a little bit about some of the techniques used by LinkedIn.
They are a bare-metal shop, but they have just shy of 30,000 servers right now. And then lastly I'm going to talk about something that's a little more next-generation. SaltStack, the company behind Salt, recently got a contract working on some really interesting virtual machine auto-scaling applications, and I'd like to talk a little bit about what we're doing with regard to that, and also some of the new stuff that we are in the process of developing, delivering, and of course making open source.

So, if we start up here, I'm going to start by talking about the problem of continuous code delivery. Now, there are a lot of ways to tackle the problem of continuous code delivery, as I imagine many of you are well aware, but fundamentally there are a number of core things that need to be considered. Among those is that the continuous code delivery mechanism needs to be directly tied into some sort of continuous integration system, or directly tied into revision control systems, so that code can be delivered continuously.

The next is distribution of code out to different locations, so that you can have different types of file distribution and fan-out mechanisms.

Service interruption is very important, obviously, because what you end up looking at is that for extremely large websites and very professional deployments, they go to great lengths to make sure that you never hit the website during one of these code deployments, because they can be very frequent. You can have many code deployments in a day, and you don't want users to hit that website in the middle of a deployment and have something go wrong because files were in an inconsistent state.
So we'll talk a little bit about that, and then of course the idea of taking your tools, or your mindsets, or your methodologies, and turning them into a culture inside of an organization, which is where a lot of these concepts of DevOps, for instance, really come into their own, because it has more to do with communication inside of an organization.

So, I really like the Wikimedia solution for continuous code deployment. The person who put this together, Ryan Lane, is a very intelligent fellow, so I'm going to do a quick overview here and then come back and readdress some of these issues.

We start with the fact that Wikimedia is using Jenkins for testing of their code bases. So they've got tests and builds being executed on Jenkins, which is probably something extremely common in this room. Who here is using Jenkins? Okay, okay, that's 62%.

Now, if we look at a Salt topology: Salt is made so that you have a master, or, as we were discussing earlier, you can have multiple masters, and those masters are controlling minions. All of those minions are able to either communicate back up to that master or receive information down from it. And by information, I mean they can receive instructions, or they can download code, or get everything that they need for, say, configuration management routines that are highly complex.

Now, in the Wikimedia situation, what they're doing is that they have a Salt master and minions, and among their minions sits the Jenkins server. Okay. Every time that Jenkins server executes or finishes a build which has been tagged in such a way as to invoke a code deployment, they use Salt's peer system.
Now, this is the ability for minions to send a message up to the master, request that somebody else in the infrastructure execute on said message, and then return the information back to that minion. So the peer system allows them to have a plugin to Jenkins that fires off a command to the Salt master to say: hey, let the web tier know that we have fresh code for them. That information then subsequently goes down to the web tier and tells them, hey, update your code. They check in to Jenkins, this is of course within a second of that build completing, download that information, and deploy the code. Fairly simple and straightforward.

Now, the next cool thing here is that they want the information about that code deployment to be living somewhere outside of this simple command structure. Okay, they want it inside of a Redis database that's being used by their internal web front end, the one that tracks everything that's going on at Wikimedia on the inside. And Salt has a solution for that, called returners.

All of these commands we talk about in Salt, where we have a central unit that splays out lots of instructions down to its minions, happen asynchronously. Now, the benefit here is that we don't necessarily care about that message coming directly back up to the source; we can redirect the return information to any arbitrary location, and most commonly that ends up being a database. So in this case, Wikimedia uses returners inside of the Salt minion to say: once you're done deploying this code, go load the information about this code deployment up into a Redis database. Then that is instantly available to their internal web interface. So at that point they're able to see that deployment process from start to finish and have all of that information presented for them, and nobody has to touch the infrastructure to make it happen.
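As a rough sketch of what that plugs into: a Salt returner is just a Python module exposing a `returner(ret)` function, which the minion calls with the job's result dict after each job. Everything below the interface is an assumption for illustration; in particular, the Redis connection is stubbed out with an in-memory dict so the sketch stands alone, and the key layout is made up.

```python
# Minimal sketch of a custom Salt returner. Salt calls returner(ret) on the
# minion after a job completes, passing a dict with keys such as "jid" (job
# id), "id" (minion id), "fun" (function name), and "return" (the result).

DEPLOY_LOG = {}  # stands in for a Redis connection in this sketch


def returner(ret):
    """Record the result of a job, keyed by job id and minion id."""
    key = "{0}:{1}".format(ret["jid"], ret["id"])
    DEPLOY_LOG[key] = {
        "fun": ret["fun"],        # the function that ran, e.g. state.apply
        "result": ret["return"],  # whatever the job returned
    }
```

In the real thing, a minion invoked with `--return redis` (or configured with a default returner) would route every job's return through a module shaped like this instead of only back to the master.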
It is completely and fully automated; all they have to do is set some tag in Git that Jenkins is watching for. Okay.

Now, if we move a little forward here: Wikimedia was the first really big-scale company to deploy this sort of setup. Since then we've seen it done a number of different ways, we've worked with companies to do it a number of different ways, and a substantial number of additions have been made to Salt.

So again, we started from that original source of a build server. You've got Jenkins, Buildbot, and Bamboo, with some other options up there; I'm trying to think of some other options, but those are the ones I've used the most heavily personally. As soon as those builds complete, instead of using this peer system I was talking about, which is how Wikimedia set up their deployment, we've added something called the event bus inside of Salt. The idea is that everything in Salt is events; it's all entirely event-driven. Whenever any action completes, events get fired in specific locations. So when Jenkins finishes a build, it fires an event on Salt, on the minion, and says: hey, tell the master that this build event has completed. This is very simple to do because Salt's event system is open-ended: any external application that has sufficient privileges on that machine, so generally root access, though privileges can of course be granted in other ways, is able to fire its own events on Salt's bus.

So that event then goes back up to the Salt master, where we have the reactor. Now, the reactor is something that we're going to talk about in more depth a little later, but basically the reactor is listening to these events as they fly by, and it's able to subsequently make a decision and say: given said event, I'm going to automatically execute a specific routine. And so we are able to configure the reactor to say: if I'm given a deploy event, I'm going to fire off a deployment to the systems to which that event has requested a deployment. Okay?
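Wiring that up is roughly two config fragments on the master; the event tag, the reactor file path, the target glob, and the `webapp` state name here are all hypothetical, made up for illustration:

```yaml
# /etc/salt/master -- map an event tag to a reactor SLS file
reactor:
  - 'jenkins/build/complete':
    - /srv/reactor/deploy.sls
```

```yaml
# /srv/reactor/deploy.sls -- when the build event arrives,
# apply the (hypothetical) webapp states to the web tier
deploy_fresh_code:
  local.state.apply:
    - tgt: 'web*'
    - arg:
      - webapp
```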
Now, similarly, there is this problem of how we're going to be distributing files. When you get into really big infrastructures this becomes a problem. In smaller infrastructures, say sub a thousand servers, it really isn't that big of a deal; it can be if the files are quite large, of course, but not as regularly as when you're dealing with, say, Twitter-sized infrastructures.

And so, when it comes to code deployment and updates, Salt has been built with an asynchronous queuing file server. The idea behind this is that by using AMQP techniques, we've been able to build a system that queues multiple connections back to Salt's own file server, so that it is optimized for the distribution of files across massive numbers of systems in parallel. With that, we see many infrastructures having many, many thousands of servers requesting files from the Salt file server simultaneously, and it's able to keep up with that and subsequently deliver those files.

Also, Salt has the ability to hook directly into package management systems. So if your Jenkins system is building distribution packages, if that's the route that you've taken, the same with plugging into systems like pip, or CPAN, or Gems, or, what's the one for... Pecl? That's the PHP one, yeah. So we've got a number of systems that allow for pulling down and managing connections and code deployments when they're packaged, as well as getting them directly out of version control software.

Now, the last thing is the question of where the results go for that continuous code deployment. You've got a number of options here. One is that we recently released, open source of course, the Halite web interface, which is the UI for Salt.
It's a little new, it's a little raw, it still needs some features, all of that, but it is coming up quite quickly. So the information from those code deployments can be displayed in Halite, or you can do it the Wikimedia way: send it to a database and have some other system view it.

Now, this returner concept I was talking about earlier, the idea that the Salt minion can go attach to any external database or interface, allows you to easily configure messages to say that when this routine is done, populate a database, send an email to somebody, send an SMS, post information about the successful build on a specific chat server. All of these hooks are already in there, making it again really easy to just say: redirect that data to wherever. Okay. All right.

Any questions, comments? Yes?

Okay, the question is: if anything is down in that chain, are these messages delivered reliably? Now, inside of Salt, if a master is down, then the minion that is shooting an event up to it holds that event in a queue and waits for the master to reconnect; when the master comes back online, then it fires. Okay? If the master is sending out a publication or a command down to minions, and minions aren't online, those get discarded, primarily because you don't want a minion that's been offline for two weeks to boot back up and then replay a bunch of old messages. So, to answer your question: uh-huh, yes, correct. With that said, we do have a number of routines that can be easily configured on those minions to tell them to come back up to speed when they boot up.
Okay. So one of the next problems that I want to talk about is the classic case of extremely large infrastructures. Extremely large infrastructures are something that we're seeing pop up more and more, and a lot of Salt users, even users who are using us on a very small scale, really like the fact that we have these sorts of deployments that we work with on a regular basis. So they know that if they need to scale, and many of them hope they'll need to scale in the future, I mean, especially if you're a startup you always want to need to scale in the future, then they're assured that the deployment and system that they're working on is something that can grow with their company and with their deployments.

And so, when we have to start looking at communicating with many tens of thousands of servers, and managing tens of thousands of servers, some new problems crop up. One of my favorites is from LinkedIn. About, close to maybe a year and a half ago, there was the Java leap second bug, if you recall the leap second; well, it's a timing bug in Java. They were sitting there and they had a meeting discussing: okay, what are we going to have to do to deal with this? We're going to have to log into these servers; we're going to have to verify that they're up to date. Are we going to build shell scripts that are going to do it automatically? Do we have SSH systems that can handle this? And they were only a few minutes into the meeting when a guy in the back raised his hand and said: we're done. I ran a couple of Salt commands. We're done; we fixed everything in a few seconds.

And so Salt itself is made to have these tools whose value becomes very apparent once you get into larger deployments. Now, I've got, on my laptop here, I am simulating... let's see if... here we go.
I am simulating a, well, it's not a very large deployment, because it's just a laptop, but what it looks like with Salt to have 25 servers attached. Let me adjust this; I don't want it to get cut off. All right.

So, we run through and look at what some of the basic functionality of Salt is. It really revolves around this element of remote execution, the idea of sending out perfectly parallel commands across large volumes of systems. So if we wanted to, say, get the network information of all of our systems, we're able to get that information in just a few seconds.

Another nice thing is that Salt still keeps going quite quickly as it scales. One of my favorite quotes was from Harvard University, when they deployed Salt on an 1,800-node high-performance computer. They said that the old method that they used, which used the parallel SSH tool to fan out and execute commands on their entire supercomputer cluster and then return that information back, took them roughly 15 minutes to run. Okay? Then they installed Salt, and now it takes them less than five seconds.

So it becomes very useful. It really opens up a whole new world of what you can do, because all of a sudden you have real-time stat gathering, you've got real-time modification. You can go out and say, oh, well...
...we've got a disgruntled employee, we need to get rid of his password right now, and you can go get rid of his SSH keys everywhere. Or similarly you can say, we've got a security hole that we need to fix in our entire infrastructure right now, and you can run that routine that's going to update those packages everywhere at the same time. Similarly, it's very easy to have it stagger those updates, so you can say: okay, update five percent at a time, and roll through the entire infrastructure's deployment.

So when we come back and we start talking about larger infrastructures, this is something that becomes, well, very important.

Okay, so LinkedIn started using Salt frighteningly early. They were using it in 2011, before I had deployed it in a real environment. It almost makes me regret, you know, that whole first-commit-being-open thing; people were using it. But they recently came back and rewrote all of their Salt deployments to take advantage of a lot of the new features.

Some of the main things that I was talking about are really central to how they manage their infrastructure today. A lot of what they do today has to do with the ability to gather real-time statistics about what's going on. It's their ability to go out and see exactly what version of software exists on all of their systems, find those systems that have differences, get that information reported back in a few seconds, and subsequently go out and make modifications. So it's very easy to run a couple of Salt commands and then ensure that every bit of code across their infrastructure, of just shy of 30,000 pieces of bare metal, is consistent. Okay.

Next, we talk about triggering code deployments. Now, they don't have continuous code deployment that works quite the same way as the ones which I've already shown, but they do illustrate a point inside of Salt...
...that's really cool. So, they have code deployment processes that are very, very tightly tuned, so that they're rolling through servers in a very clean way. What they do is that they want to make sure that the load balancer has taken a web server out of the mix before they start modifying the code that's on that web server. And Salt comes with a system called prereq, which was actually designed with LinkedIn's assistance, and it's quite slick.

It allows you to define, inside of your configuration management systems in Salt, which now I'm regretting that I'm not talking about in more detail, I just like talking about it, it allows you to define something that says: only run this routine if we expect something specific to be modified in the future. So you're able to define and say: go ahead and tell the load balancer to take me out of the pool, or shut down the web server, only if I'm about to do a code deployment. And then, once the code deployment is done, restore that previous state. Which means that you can maintain very complex idempotency concepts inside of a deployment. Okay.

It also means that they can fully automate that code deployment process in such a way that, again, it completely minimizes, or they hope nullifies, the possibility of continuous code deployment causing any hiccups in service and reliability.

Now, the last thing that I want to mention with respect to LinkedIn's deployments is a concept inside of Salt called runners. One of the complaints that we get about Salt is that we have lots of terms for things. We find people come back and say: oh, you've got all these terms for things. And we reply and say: well, that's because we have lots of things that Salt can do. Now, you don't need to learn all the terms in Salt. You can be up and running hardly knowing anything at all after reading two pages of documentation; you'll know how to start using Salt for remote execution and config management. Actually, I think...
...that tutorial is, like, two and a half pages; it's not very big.

But this next concept of runners is the idea where you're able to set up complex orchestration routines on a Salt master that can deploy systems in complex ways, directly from code. So you write, in Python, a runner that is able to hook into the Salt system and say: all right, we're going to start by executing states, the config management system in Salt, on these servers; we're going to stop at certain phases. Many of the tools and features that allow you to control this from Python are built into the system. The runner is also able to aggregate the data returns that are coming back from all these minions, make decisions, and then fire events back off to this reactor system. So again, you're able to start making very intelligent automation routines if you need to. A very small subset of our users actually has to go this far, and it usually only ends up happening with very complex deployments, but the tools are there. A lot of the idea is that you don't need to deal with all of the tools in your toolbox, but if you just happen to be in a situation where you need them, we don't want that toolbox to be missing something critical.

Okay, any questions? Comments? Laughter? Like, my jokes are just... I don't have any today; I'm not doing very well there. Okay.

Now, the last thing that I want to talk about, when did we start? we've got till 5:40, so okay, 20 more minutes, the last thing that I want to talk about starts to bring a lot of these concepts together, and this is where we start to talk about more autonomous computing systems.

Now, I mentioned this reactor system. The reactor system in Salt is pretty straightforward: it listens to these events that are being fired, if we have time I'll show you what these events look like and show you a little bit more of the Salt command line, and then reacts to those events. It's fairly static.
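To make the runner idea from a moment ago concrete, here's a minimal sketch in Python. A real runner lives in a module on the master and would build its own `salt.client.LocalClient`; here the client is injected as a parameter, purely an assumption so the sketch is self-contained, and the tier names are invented.

```python
# Sketch of a runner-style orchestration routine: apply states tier by
# tier, aggregate the per-minion returns, and stop if a tier fails.
# client.cmd(target, function, args) mirrors LocalClient's call shape.

def rolling_deploy(client, tiers):
    """Apply states to each (state_name, target) pair in order."""
    results = {}
    for state_name, target in tiers:
        ret = client.cmd(target, "state.apply", [state_name])
        results[state_name] = ret
        # Aggregate the returns and decide whether to continue.
        if not all(ret.values()):
            break  # a real runner might fire a failure event here
    return results
```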
It's very two-dimensional in that regard: it gets an event and it does something with it. Now, one of the problems that we've been facing quite recently is that we've got a company that wanted to do virtual machine auto-scaling, but they wanted a level of flexibility that wasn't available in other tools. And so they came to us and asked if this was something which could be created.

So we went out to some of our contacts who had been building autonomous submarines for the Navy, because I knew that they were sitting on some Python code that, to my happiness, would work very well with Salt; it's entirely event-driven, and they had been licensed to open source it. And so what's going on is that we now have, and are bringing in and open sourcing, something called Ioflo: a logic engine that's used in robotics, and that is configured by very simple data structures. So you write some YAML and it configures a robotics engine. The entire YAML needed for a submarine mission was generally less than 40 lines, so it's extremely terse and extremely direct, and it allows us to do all the levels of tree logic and state aggregation that we need to be able to make highly complex decisions about an infrastructure, and then automate routines going forward.

So this new system is something that we're actively developing in response to this virtual machine auto-scaling problem, and in response to the ideas or concepts behind intelligent automation inside of a data center. And so what's going on here is that now we are getting to the point where we can aggregate events across a tier, so that it's very easy for us to get all of the events relative to the performance of, say, a web tier, and a database tier, and whatever else is in your infrastructure, say your AMQP servers that are passing events around, take that information back, and it creates a state inside of this logic engine. Then, when a certain state is reached, it fires an event off to the reactor to say: we're in a...
...bad state of this type. The nice thing here is that it can start making extremely complicated decisions about what's going on in an infrastructure, because we have underlying access to all of the components and a completely free-form logic engine. And so we're able to make decisions that say: all right, so the load is going up on our web tier, great. If the load is going up on our web tier, then what we want to do is see what the disk I/O load is on our web tier and on our databases, so that we can determine if this is going up because of high I/O wait, or if this is going up because of too many connections, and start to make much more informed decisions about what's going on. And then allow that logic engine to do things like say: well, given our current patterns, we're going to go ahead and start deploying virtual machines to the web tier which are of a different shape and size. We're able to communicate back to the database to say: we're going to expand the NoSQL database to be able to handle this higher load. Or we're going to be able to communicate that we want to build a failover RabbitMQ server that has higher capacity, and then migrate the load over to it. And so these complex routines are primarily what we're working towards right now.

Now, I just talked about all of those, but the simple solution today is to build that logic into a runner in Python that's going out and pinging the environment every n seconds, or minutes, or whatever you want your window to be, and then making decisions internally. But of course we want something that's substantially more powerful, and I'm really excited to say that the Ioflo libraries that I'm talking about, we did get those open sourced; that was last week, so those are up and available on GitHub now. It's really cool stuff.

Okay, so does anybody have any questions, or should I dive into a little bit of a demonstration?
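That interim runner-style approach, poll, aggregate, decide, can be sketched as a single polling pass. The `gather_load` and `spawn_vm` callables here are injected stand-ins for real Salt and cloud calls, and the threshold logic is invented for illustration:

```python
# One polling pass of a naive auto-scaling loop: read per-minion load
# from the web tier and ask for more capacity if most of the tier is hot.

def autoscale_step(gather_load, spawn_vm, threshold=0.8):
    """Return the spawned resource if we scaled up, else None."""
    loads = gather_load("web*")           # e.g. minion id -> load average
    hot = [m for m, load in loads.items() if load > threshold]
    if len(hot) > len(loads) / 2:         # more than half the tier is hot
        return spawn_vm("web")            # ask the cloud layer for capacity
    return None
```

A real runner would loop over this every n seconds; the Ioflo approach replaces the hand-written decision logic with the state-aggregating logic engine described above.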
Yes?

Okay, so the question was: in Puppet you can have configuration files that are replaced or updated from the master. Okay, so Salt is able to do everything, config-management-wise, that any of the other guys can do, so yes, you can maintain configuration on the Salt master and then communicate that configuration back out to the environment. So it really is like MCollective plus Puppet, in their entirety, together.

There are a number of things worth pointing out when we start looking at Salt's configuration management systems. I've only got some simple ones on here; I wasn't expecting to necessarily dive in here. But Salt's configuration management system is all based on YAML. Or rather, I should say it's not based on YAML, it's based on data structures, and all that matters is that we get data structures into the system.

So this is a very simple example where we're installing Apache. We're making sure that the Apache package is being installed; we're deploying a configuration file, to the wrong location, I know, I wrote this quickly. What is it? /etc/httpd/conf.d/httpd.conf? I don't do it every day anymore. And then we make sure that the service is running. So again, a very flat, simple example.

Now, if you want to have Turing-complete logic and advanced variable scoping in there, you can use Jinja templating. That means that you can do for loops in there, it means that you can do if/else statements, and it has full access, directly while rendering these files into data, to execute any of the routines on the command line, inside of the remote execution system, on the system that it's building on. So you can very easily shell out, or run built-in routines, take that data back, and then munge it into your system.
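The on-screen example was roughly of this shape; this is a hedged reconstruction, since the exact file paths vary by distribution and the `salt://` source path is invented:

```yaml
# A flat, simple Salt state: package, config file, running service
apache:
  pkg.installed: []

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - require:
      - pkg: apache

httpd:
  service.running:
    - watch:
      - file: /etc/httpd/conf/httpd.conf
```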
Okay. The other thing that I want to mention is that one of the big differences with Salt is that you may have noticed that we've got these statements like require and watch. Now, this is a more Puppet-like thing, where we're ordering who's going to do what, when, and also building dependency chains. And so we're able to do this again very simply, by saying that, well, this package is going to be required by the httpd service down here; the httpd service is watching this file, so if this file changes, then you're going to restart or reload the service. Those sorts of constructs, they're all in there.

Now, the example I gave of looking into the future, are we about to deploy code? and making a decision about something which is going to happen, is another requisite, called prereq. So we pre-require that something else is going to be modified.

The next nice thing, though, is that in the absence of requisites, or rather, I mean, it works with requisites in there, but it's a little crisper without them, everything in Salt is executed in the order in which it's defined. Which means that these runs will always execute in the exact same order, and they will always execute in a highly predictable and understandable order, because they're evaluated from top to bottom in these files.

And similarly, it's really easy to tie all of these files together by just using that include statement up there, where you come back and you say: well, I'm going to install, say, mod_ssl; I'm going to include Apache; we're going to make sure that Apache now requires mod_ssl when I'm using this formula as well. And similarly, you can extend earlier scopes and systems, so you can build hierarchical components. So all of the components are there.
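The prereq requisite just described looks roughly like this in an SLS file, with invented state names; `graceful_stop` runs first, but only if Salt expects `deploy_code` to actually change something, and the service comes back once the deployment is done:

```yaml
# Take the web server down only when a code deploy is really about to happen
graceful_stop:
  service.dead:
    - name: httpd
    - prereq:
      - file: deploy_code

deploy_code:
  file.recurse:
    - name: /var/www/html
    - source: salt://webapp/html
```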
I mean, without me blathering on about it for another 20 minutes, or, well, another two days, yeah, all the components are there, the goal of course being to make it simple, extremely predictable, high-performing, and entirely featureful. I'm sorry, that's probably far more than you were asking in your question, instead of me just going: yeah, yeah, it can do that.

Okay, any other questions? Are you tired? Have I not been very entertaining?

Okay, so part of your setup process requires user intervention. Okay, okay, you've got a couple of options here. One of them is that we've got quite a few Salt modules that specifically take care of those interactive types of scenarios. So we've got a lot of heavy wrappers around SSH routines; we have heavy wrappers around generating certificates and keys, to make that easier. We also have a system inside of Salt that makes it very easy to have a central key-signing authority, or certificate authority, on the Salt master, and have Salt generate keys, sign them, and securely distribute them for you. So I guess, in a nutshell: we've covered many cases, but we don't have, say, a generic way to work with continued standard input and output inside of commands in a YAML-like fashion. We have all the tools, code-wise, to make it very easy to build those routines, built directly into Salt. Okay? Does that answer your question? Okay.

All right, then I'm going to cover a couple of concepts really quickly, since we've got a few minutes left and you're not asking as many questions as I'm generally used to. Probably is my fault; I'm sorry. Okay.

Now, inside the Salt system: I showed you, whoop, executing a simple command. Now, this test.ping command, again, is just sending out a command that says ping to all of these systems, and all it does is return True. Very simple. But Salt ships with a very substantial library, and it is self-documenting, so we can run sys.doc.
We'll pipe this guy over to less here, and now we can come through and take a look at all of these different routines, and they all have examples of using them built directly into the remote execution system. So let's see: we've got some ACL stuff, Apache, archive routines, working with network bridges... There are more ways to shell out than you can shake a stick at. I'm trying to think if there's a Scottish phrase for that; I know there is one, but I don't. Okay, let me keep going.

cp allows you to interface with the Salt file server directly, beyond the fact that it does all that stuff automatically inside of the config management system. Which is another concept I should point out: inside of Salt, everything is built in layers. The idea being that for any component down the stack, maybe you want to know how it works, or maybe you have a legitimate need to interact with something lower on the stack of operations, everything is exposed. All of the routines that are executed by the configuration management systems inside of Salt are exposed inside of the remote execution layer underneath the hood. So if you're wondering exactly how packages are installed...
Sorry, that's how files are managed through there. If you're wondering exactly how packages are installed, you can come down here and find the package module. Which leads me to another point about how this system works: it does automatic normalization on every layer between distributions and their underlying routines. So if we come down here to pkg, there it is, all of these package routines are going to smartly know to use the routines from the underlying package manager on the system that they're targeting. So you can send a command down, and it'll know to use yum, or pacman if it's Arch Linux, or apt-get, or brew, or Chocolatey on Windows, or anything.

The other nice thing is that these commands not only execute on the right systems, with the right software on the target systems, but they normalize all the data that comes back. Which means that you can execute a single command to say: list all of the packages on all of my systems, and all of that package data is going to come back in a data structure which is completely predictable. Okay? So again, the raw and important data gets normalized.

Now, looking at this one more time, with data being normalized and managed: if I do something like pkg.list_pkgs, we'll see that almost instantly I go out and I gather that live package data about the target systems, and it comes back in a data structure. Now, this is made to be printed in a very pretty way by default, but everything inside of Salt is JSON-serializable, the idea there being that, again, you can take all of this raw data and shove it into anything, or shove raw data from anything into the Salt workflow. This is something that comes up a lot, especially with these use cases.
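For instance, with JSON output a command like `salt '*' pkg.list_pkgs` prints one object mapping minion id to a package-to-version dict, regardless of the package manager underneath. A small consumer sketch, with made-up sample data:

```python
# Consuming Salt's normalized, JSON-serializable returns.
# The payload below is an invented sample of pkg.list_pkgs output.
import json

payload = """{
  "web1": {"openssl": "1.0.1e", "httpd": "2.4.6"},
  "web2": {"openssl": "1.0.1c", "httpd": "2.4.6"}
}"""


def versions_of(raw, package):
    """Map minion id -> installed version, the same shape on every distro."""
    data = json.loads(raw)
    return {minion: pkgs.get(package) for minion, pkgs in data.items()}
```

Because the shape is predictable, spotting the minions with a stale version is a one-liner over that dict, which is exactly the LinkedIn consistency-check workflow from earlier.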
Because we talked about integrating with Jenkins: it's really easy to send data out of salt and translate it into something that Jenkins understands, because salt is always just going to speak JSON. But not only that: we can tell it to output that information to the display as raw JSON, or in any other format that we would like. YAML, for instance. You know, we don't have one for XML, because no one's ever wanted one. If someone does, it will be made, but it hasn't happened yet. And I make this joke every time, and still no one's made one, just out of spite or something.

All right. So the last thing that I want to mention in here, and then we'll be all wrapped up, is a concept inside of salt called targeting. One of the major problems we run into when managing an infrastructure is the question of who is doing what. It seems like a very simple and straightforward problem, but for those of you who have deployed clouds especially, because you're just spinning up virtual machines all over the place, you realize that it can turn into a bit of a problem. Now, many existing systems are fairly static in the way they make those determinations. They come back and say, okay, we're going to go by hostname, you're going to name everything according to a certain convention; or they're going to say that we're going to store this who-is-what
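Since everything is JSON underneath, handing a result to an external tool like Jenkins really is just serialization. This is a minimal sketch with made-up job data, not real Salt output; it only shows that the round trip through JSON is lossless.

```python
import json

# Hypothetical job result of the kind salt would hand back.
job_result = {"minion1": {"retcode": 0, "stdout": "deployed"}}

payload = json.dumps(job_result)   # the string you'd ship to Jenkins
restored = json.loads(payload)     # what the consuming tool sees

assert restored == job_result      # nothing lost in translation
print(payload)
```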
construct in some database somewhere. Now, salt leaves it very open-ended for you to choose how to go about doing that. The way that works: I've been using just globs here to specify that we're going to be talking to everybody. But similarly we can use regular expressions instead of glob matches (I know this is contrived; I'm not exactly a regex master), which, it appears, may have just uncovered a bug in the development version of salt.

Or we can use something called grains. Grains are bits of static information about a system that get generated when the system starts up. If you need information about a system that isn't static, then you run an execution to gather that information, because you need to execute a fresh routine to get it. Right? If it's static, we can store it in memory and have extremely fast access to it; salt is built from the ground up to be high performing. So we can match systems based on these grains, and we can also statically set them. We can tell this minion in its grains that its role is "web server," and then say that all of the systems with a grain of role equals webserver are going to do these things. Okay.

And so you're able to do hostname validation, or information validation. Similarly, you can target systems based on their network information, or you can use external databases to gather information about systems if you want to. Or you can use what we call compound matching. So you can say: I want to target all the systems that are running Debian, but not the 32-bit ones, and only the ones that are in this subnet, but not the ones that are running on Lenny. Put all that together, and it's going to give you just the ones that you want. So, very, very fluid and open in the way things work.

All right, I am out of time. And I'm sorry; usually I can get some chuckles out of people, but I must be jet lagged or something. But yeah, I'm out of time.
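That compound match can be modeled in a few lines. The sketch below is a toy reimplementation of the matching logic, not Salt itself; the minion ids, grain names, and addresses are all invented, and the real CLI syntax would be roughly `salt -C 'G@os:Debian and not G@osarch:i686 and S@10.0.0.0/24 and not G@oscodename:lenny' test.ping`.

```python
import ipaddress

# Hypothetical minions with hypothetical grains.
minions = {
    "web1": {"os": "Debian", "osarch": "amd64", "oscodename": "wheezy", "ip": "10.0.0.5"},
    "web2": {"os": "Debian", "osarch": "i686",  "oscodename": "wheezy", "ip": "10.0.0.6"},
    "old1": {"os": "Debian", "osarch": "amd64", "oscodename": "lenny",  "ip": "10.0.0.7"},
    "db1":  {"os": "CentOS", "osarch": "amd64", "oscodename": "",       "ip": "10.0.1.2"},
}

subnet = ipaddress.ip_network("10.0.0.0/24")

def matches(grains):
    # All four clauses of the spoken example, ANDed together.
    return (
        grains["os"] == "Debian"                              # running Debian
        and grains["osarch"] != "i686"                        # but not 32-bit
        and ipaddress.ip_address(grains["ip"]) in subnet      # only this subnet
        and grains["oscodename"] != "lenny"                   # but not Lenny
    )

targeted = sorted(m for m, g in minions.items() if matches(g))
print(targeted)  # only web1 satisfies every clause: ['web1']
```

Each of the other minions fails exactly one clause (web2 is 32-bit, old1 runs Lenny, db1 is CentOS in the wrong subnet), which is the whole appeal of compound matching: you compose simple predicates instead of maintaining a naming convention that encodes all of them.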
Does anybody have any questions before I let you go? Any comments? Any rotten fruit? Any rude Scottish remarks? Wow, I didn't get one. Yes? Sure. Okay, yeah. I mean, this stuff is all high level, because the reality is, well, there are reasons why we have training classes. I think there's a lot in there, and a lot of this is just to give an intro. But yeah, we can do that. Okay, any other... sorry, any other questions, comments, arguments, rebuttals? All right, I'll let you go then.