A viewer showed me their project, and I found a SQL bug that both beginner and expert programmers make time and time again. Let's take a look.

This is no good. It means that if a million people hit the same URL at the same time, a lot of the increments are going to be missed. The code loads the visit count, increments it, then stores it back. Say a million people all load the count at once: each one reads the same value, increments its own copy, so it goes from 1 to 2, for example, and then they all store 2. If you're super, super unlucky, which is not likely, and a million people do this in lockstep, the visit count ends up wrong. Very wrong. In the real world it'll only happen rarely, but it will still happen.

So what you should do is have SQL do the incrementing. You'd write SET visited = visited + 1, and you'd set last_visited to whichever is greater, the current time or the last visit; well, in this case that's always the current time, so just have SQL use the current time. Do the math in the database. (There's a sketch of this below, after the second bug.)

"You need to lock the process?" No, no, no. Don't use locks in your code; have the database do the locking. There will still be a lock, but it's a lock inside the database. If you write a lock in here, you can't scale horizontally, as in you can't have multiple machines running your Go process. Granted, you can't really scale the database that well either; you can shard, but that's a separate problem.

Could I explain how putting locks in code hinders horizontal scaling? Sure: those locks don't apply to all the machines. If you create a mutex in a global variable in this program, it only applies to this one instance of the program; it won't apply to the same program running on other machines. So if you scale horizontally, each machine has its own lock, but there's no global lock preventing everybody from updating the visitor-count analytics at once.

Here's another bug. A million people generate URLs at the same time, and they all generate random code sequences. Two machines could pick the same sequence, right? So there's a safeguard that says: if somebody already took it, try again, which is good. But a colliding code isn't necessarily stored by the time you get to this line; it gets stored way down here. Two people can generate the same random code, both see that it's free, and both go to insert it. Now, hopefully you have a database constraint that says the code needs to be unique. I haven't looked at your schema yet. If you don't, you're going to end up with multiple rows with the same code. That's a bug. If you do have a unique constraint, your server is going to randomly crash. Well, not crash, but it's going to return an error.

What I would do instead of the find is do the create here, in a loop: generate a random ID and create it; if the creation failed because the code was in use, try creating again. (That's the second sketch below.) But that only works if your database has a unique constraint. Let's look at your schema. Is this your schema? Yeah. You don't have a unique constraint... no, you do: it's a primary key, and a primary key is a unique constraint.
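Here's a rough sketch of the first fix, the database-side increment. I'm assuming a Postgres-flavored database and guessing the table and column names (`links`, `code`, `visited`, `last_visited`); the project's actual schema may differ:

```go
package shortener

import "database/sql"

// recordVisit does the increment inside the database, in one UPDATE.
// Concurrent visitors can't lose each other's increments, because the
// database takes its own row lock for the duration of the statement.
func recordVisit(db *sql.DB, code string) error {
	_, err := db.Exec(
		`UPDATE links
		 SET visited = visited + 1,
		     last_visited = NOW()
		 WHERE code = $1`,
		code,
	)
	return err
}
```

Since NOW() is always at least as late as the stored last_visited, plain NOW() and the GREATEST(last_visited, NOW()) form do the same thing here. Note there's no Go-side mutex anywhere: the read-modify-write happens in one statement, so this works with any number of machines.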
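And here's the create-in-a-loop idea for the collision bug. Again a sketch with guessed names, this time also assuming the lib/pq Postgres driver, whose error code 23505 means a unique constraint was violated; a different driver would report the duplicate differently:

```go
package shortener

import (
	"crypto/rand"
	"database/sql"
	"errors"
	"math/big"

	"github.com/lib/pq"
)

const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// randomCode generates a random short code of the given length.
func randomCode(length int) (string, error) {
	buf := make([]byte, length)
	for i := range buf {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		buf[i] = alphabet[n.Int64()]
	}
	return string(buf), nil
}

// createLink inserts a fresh code directly instead of checking first and
// inserting later (the check-then-insert version races). The primary key
// on code is what makes the insert fail safely on a collision.
func createLink(db *sql.DB, url string) (string, error) {
	for attempt := 0; attempt < 10; attempt++ {
		code, err := randomCode(6)
		if err != nil {
			return "", err
		}
		_, err = db.Exec(
			`INSERT INTO links (code, url) VALUES ($1, $2)`,
			code, url,
		)
		if err == nil {
			return code, nil // the insert succeeded, so we own this code
		}
		var pqErr *pq.Error
		if errors.As(err, &pqErr) && pqErr.Code == "23505" {
			continue // somebody beat us to this code; pick a new one
		}
		return "", err // a real error, not a collision: give up
	}
	return "", errors.New("could not find a free code")
}
```

Bounding the loop is a design choice: ten collisions in a row suggests the code space is nearly exhausted, and retrying forever would hang the request.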
So yeah, the insert will fail if the code is reused, which is good, but in your case the failure ends up being exposed to the user, which kind of sucks. It should just try again with a new random code, like the retry loop sketched above. Want to improve your coding skills? Come on out to twitch.tv/strager. I give free code reviews and lessons every weekday.