Welcome to Cypex: Revolutionizing PostgreSQL Application Development. We're joined by Hans-Jürgen Schönig, CEO and Technical Lead of CYBERTEC. We'll discuss Cypex, a user-friendly tool to quickly build PostgreSQL applications. Today you're going to learn about gathering specifications more quickly, building database applications more quickly, how to consolidate hundreds of, quote, "free-roaming" Excel sheets into professional database-driven applications, and moving from APEX to open source faster. My name is Lindsay Hooper. I'm one of the PostgreSQL conference organizers, and I'll be your moderator for this webinar. A little bit about your speaker: Hans has been a PostgreSQL expert and database specialist since the 90s. He's CEO and Technical Lead of CYBERTEC, which is one of the database market leaders worldwide and has served countless customers around the globe since 2000. Additionally, he regularly gives PostgreSQL training on advanced optimization and performance tuning, PostgreSQL for business intelligence and mass data analysis, PostgreSQL replication, and Linux for PostgreSQL DBAs, just to name a few. So with that, I'm going to hand it off to Hans. Take it away. So hello, everyone. Thank you for showing up, and thank you for the wonderful introduction. My name is Hans-Jürgen Schönig. Just call me Hans; it's easier for everybody. Today, the topic I've chosen is Cypex, which is about rapid Postgres application development. So thank you, everybody, for spending your time on this one. So let's get started. First, just a brief moment about us. We've been doing professional Postgres for over 20 years, so we are absolutely specialized in Postgres and database development. We've got an international team of database experts, and we're doing all kinds of data-related services in the area of Postgres. Basically, our services include 24/7 support.
We're doing high availability, consulting, performance tuning, migrations, etc. And recently, we've also expanded into the areas of data science, artificial intelligence, and data mining. We are quite an international company: our headquarters are here in Austria, but we also have offices in South Africa, Mauritius, Uruguay, Switzerland, Poland, and Estonia, with more to come very, very soon. So I would say we're quite an international company. And yeah, we're doing projects worldwide for some customers you might know, including Amazon, the United Nations, Nokia, Siemens, Audi, and Porsche. Basically, we are in many industries, and we're eager to show you more if you're interested. So let's get started with the topic: eternal challenges in software development. Let's consider a traditional software development process. You've got a customer, and the customer might have a fairly simple requirement: some simple application, some forms or something. But the way this works is that the customer talks to some salesperson or some key account guy, then specifications are written, then there's a project leader and a team leader, and finally, at the end of the food chain, there is going to be some developer. But at every step of the way, you lose a lot of information. Every time information is passed on from one person to the next, there's information loss. And at the end of the day, this is in many cases what's going to kill the project. It's not that programmers can't get it done. The question is, what is it that they should get done, right? So information loss is a major, major issue. Of course, people have invented more clever project management ideas, like Scrum, agile development, and all kinds of stuff. But the problem still remains that every time you pass information around, you lose some.
And if you're only willing to change things after they've been implemented, you've already wasted a lot of time on implementation by the time you figure out that it's actually not what you want. So it makes no difference whether you're doing agile development or traditional project management. The problem persists, because the core issue is really information loss between the customer and the developers. So the idea is: what if, just imagine for a moment, a casual conversation would already bring you very close to a prototype? What if a conversation with instant feedback, with no information loss, would already provide you with some sort of prototype of what you need? Of course, it's not going to be a 10-minute over-the-counter conversation, but some standard interview process. But what if we had this prototype almost instantly? And what if software and prototype development was so cheap that changing things doesn't matter? Because in normal software development, when you're talking to the client, you want to take a shortcut, but the client actually wants more. So there's always a conflict of interest between the guy who is doing something for a fixed price and the other guy who just wants as much as possible. But what if this whole development process were so cheap that changes are not so relevant anymore, and somehow helped to resolve this conflict of interest? So the Cypex way of development: how can we go about it? The first thing is, do it interactively. If you're talking to your client, just bring somebody along who listens in on this conversation, which is of course some standardized process, and who instantly turns it into a Cypex model, which can then be used to generate an application.
So it's not about having the thing ready instantly; it's about being fast enough to generate an application which can then be used to gain feedback, or to elaborate, to improve, et cetera. The way it works is that you will always need some sort of data model to store your data. If you have customers and products and prices and price types and discounts and discount types, whatever that might be, at some point you have to store the data. So at some point you need a data model that's going to work. And secondly, at some point you want to work with your data. So you need some sort of workflow, which means this guy is allowed to write offers, the next guy is allowed to review them, the next guy is allowed to sign them, the next guy is allowed to invoice, et cetera, et cetera. So you always need some sort of workflow, which is of course associated with permissions. Person A is allowed to sign the contract; person B is not allowed to sign the contract, but is allowed to modify it, or whatever it might be. So basically, if we take the relational model, the workflow, and some sort of security, we should be able to generate a crude application out of this information. And the key challenge is to make all those components so quick to create that you just generate the application, throw it away, generate a new one, throw it away, make some changes, et cetera, et cetera. So the core idea, as I've stated already, is to ask: does a customer have one address or many addresses, can offers be changed, who is allowed to make changes, et cetera, et cetera.
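As a rough sketch of those three ingredients (this is not Cypex's actual internal model; the table, column, and role names here are invented for illustration), a relational model plus a workflow plus permissions could look like this in plain Postgres:

```sql
-- Illustrative only: a minimal relational model, a workflow,
-- and permissions, the three things a generator needs.

CREATE TABLE customer (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE offer (
    id          serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer (id),
    amount      numeric(12, 2) NOT NULL,
    -- the workflow: each offer is in exactly one state
    state       text NOT NULL DEFAULT 'draft'
                CHECK (state IN ('draft', 'reviewed', 'signed', 'invoiced'))
);

-- the security part: who may do what with an offer
CREATE ROLE sales;     -- may write and change offers
CREATE ROLE reviewer;  -- may only move them through the workflow
GRANT SELECT, INSERT, UPDATE ON offer TO sales;
GRANT SELECT ON offer TO reviewer;
GRANT UPDATE (state) ON offer TO reviewer;  -- column-level grant
```

From a model like this, a generator can already derive forms (one per table), buttons (one per allowed state transition), and visibility rules (from the grants).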
If you're asking all those questions early on in an interactive way, it's a lot better than turning something you don't know into a specification, giving it to somebody who has no clue, who is going to turn it into code, with 10 people in between who do nothing but obfuscate what the customer really wants. At some point you have somebody writing an application about agriculture who has never been on a farm, and a project manager managing this project who has never seen a farm. All these kinds of issues are going to be there, and that's what causes software projects to fail, right? So our solution is: have some sort of interview process and create a clear picture in the client's head. What do I mean by that? The other day, I was talking to a client in Switzerland and I asked them, is the price of pork linear? And they looked at me as if a Martian had just landed from outer space and said, of course it is. It was a food company, so: the price of pork is linear. So I asked them, okay, is it true that 1,000 pigs are 1,000 times more expensive than one pig? That would be linear, right? 1,000 pigs, 1,000 dollars; one pig, one dollar. And they said, of course not. And I said, you just told me it's linear. They said, well, it is. But now you're telling me that 1,000 pigs are not 1,000 times more expensive than one pig. So what does the price of pork depend on? And then he said, well, it depends on whether you know the salesperson, how many you purchase, whatever. And if I had just trusted their specification, if I had trusted their word (and they know everything about slaughterhouses, because they're making billions slaughtering animals every year), they would have gotten one input field that says price of pork, how many do we want, right? But by having this process be interactive, by showing them directly what that could mean, they're instantly going to tell you:
What are you doing here? This is just one input field, but that's not what we want, right? That's what I mean by instant feedback: by instantly showing them the prototype, you're going to save so much money. Because if you have already implemented the linear pork-price edit form, and then you figure out later that it actually should be 15 forms because it's so complicated (which it turned out to be, by the way), then everybody's unhappy. It's not just you who is unhappy because you obviously put the wrong price tag on it; it's the customer being unhappy because he never got what he wanted. And in the meantime, everybody was fighting each other because there was no common ground. So instant feedback is only possible if the customer can really see something very early on, ideally already during this initial conversation. So what Cypex is basically doing is: out of this interview process, we create a data model, we create a little bit of workflow, then we generate the application, we turn around the laptop and say, is that what you mean? The workflow is going to translate into buttons and dropdowns, et cetera; static tables are mostly going to be tables on the other side. I mean, there's a lot more to it. We're inspecting the relations in the model, we're doing some guesswork, just to make this as quick as possible. So it's quite an intelligent process. At the end of the day, it's: take a relational model and a workflow, generate an application. As I said, it's more complicated than that, but for the sake of simplicity, that's what it is. So to wrap it up here: we basically have an interactive process which generates the application. And feedback is key. If we're really honest: if I gave you a thousand pages of perfect specification, none of us would get our heads around this stuff, even if it was perfect. Nobody is able to fully comprehend an application described in 1,000 pages. It's just not feasible.
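To make the pork-price point concrete: a non-linear price isn't one input field, it's a lookup across several dimensions. A minimal sketch of what the client's real model might look like (the table and columns are my invention, not the client's actual schema):

```sql
-- Price depends on the sales relationship and a quantity bracket,
-- not on a single linear per-unit number.
CREATE TABLE pork_price (
    salesperson_id integer NOT NULL,
    min_quantity   integer NOT NULL,       -- lower bound of the bracket
    unit_price     numeric(10, 2) NOT NULL,
    PRIMARY KEY (salesperson_id, min_quantity)
);

-- "What do 1,000 pigs cost when buying from salesperson 7?"
-- Pick the highest bracket at or below the ordered quantity.
SELECT unit_price * 1000 AS total_price
FROM pork_price
WHERE salesperson_id = 7
  AND min_quantity <= 1000
ORDER BY min_quantity DESC
LIMIT 1;
```

A single "price of pork" field can't express this; the interactive prototype is what surfaces that gap before anything is built.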
So the more natural process is, of course, the interactive one. And that's what we're trying to do, right? In reality, we're not even dealing with 1,000 perfect pages; we're dealing with 1,000 imperfect pages. And on top of that, nobody is able to wrap his head around them. So that's how it works: the whole process is to generate the application and then start with the modifications. Add some pages here, change the menu entries, turn a table into a pie chart, add a map, adjust some themes, and at some point you have the final product. And of course, everything here is interactive. So if you add a page here or just adjust the theme, no problem. That's basically what it is. So what's the end product going to look like? What we've got here is a birthday application, just something simple. You take a couple of people in your company, you want to record their birthdays, and then you draw a pie chart of how old everybody is. You might want some element that shows you the next birthday. You might have some bar chart with birthdays. And what you already see here is an interactive editor which allows you to just add markdown, text, images, buttons, and input fields. So you can already modify the application the way you want it. And I've combined some screenshots of what this might look like. If we start at the top: you might have edit forms. You might have buttons. You can change your menu entries. You can have search fields. You can add pages. You can remove pages. There can be auto-refreshing elements. Elements can reference each other. You can have dependencies. You can turn URLs into images. You can add charts. You can have expressions like: if this field is higher than the other one, make whatever change to the page, et cetera. So starting from this default rendering, it's then really time for the WYSIWYG editor to change it in order to produce something you want. Recently, meaning two weeks ago, we added support for GIS data.
So we're now able to display geodata. You can, of course, integrate maps. You can draw on those maps. You can show routes on the maps. You can show your gasoline stations, show your whatever. So you can do that with GIS data too. And the way we do it is we just treat GIS data as if it were a standard column type and have clever display logic for it. If you want to see what that means, we've created three simple examples showing basically what you could do here. This is actually where I live, around the corner here on the right-hand side. You can see here at the top, you can just draw polygons around it. You can zoom in. You can highlight stuff, et cetera. So you can have a graphical editor even for geometric data and integrate it perfectly with your forms, with your charts, et cetera. And we have a clear separation between how you display data (could be aggregations, could be some sort of time series or whatever) and the way it's really stored in the table. So you can have adjustments here. We call these object views, in case you want to know. So that's basically it for the GIS data part. Of course, we've got a couple of tutorials available on YouTube, in case you want to check them out later. If you're looking for the links and so on, we can just send them over to you later. No problem. So, the solution. What Cypex is going to do for you: you can work with smaller teams. Smaller teams are perfect because it's cheaper. It's important because you don't lose so much information, because you don't have to talk to so many people who can eventually lose it, which leads to faster development. Everything is cloud-ready. Cypex is shipped as a container. Actually, it's four containers, but it's containers. Then it's low cost, and it's solid. Solid means that it's consistent by construction. So if something is generated, as long as the generator is okay, you can be perfectly sure that your application is working.
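The "object view" idea, separating how data is displayed from how it is stored, maps naturally onto ordinary Postgres views. A sketch of what that separation could look like, including a GIS column treated as a standard column type (names are invented; the geometry type assumes the PostGIS extension, and this is not Cypex's actual implementation):

```sql
CREATE EXTENSION IF NOT EXISTS postgis;

-- Storage: a plain table with a PostGIS geometry column.
CREATE TABLE gas_station (
    id       serial PRIMARY KEY,
    name     text NOT NULL,
    location geometry(Point, 4326)  -- WGS84 coordinates
);

-- Display: a view exposing the same data the way the UI wants it,
-- e.g. GeoJSON for a map widget, without touching the storage layout.
CREATE VIEW gas_station_display AS
SELECT id,
       name,
       ST_AsGeoJSON(location) AS location_geojson
FROM gas_station;
```

The UI reads from the view, the forms write to the table, and the display format can change without any migration of the stored data.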
In other words, you're not person number 10,000 to implement a calendar. You have calendars, you use them, and that's going to be fine. It should be scalable, and it's easy to learn. That's super important. So, Cypex components. We're building up something along those lines here. As I've told you before, I've been in professional database development for over 20 years. So I've done a lot of personal database development, coding, and consultancy; I've literally visited hundreds of clients. And what always bugged me was the feeling that I was spending my life doing the same thing over and over again. Create a table to store addresses. Create a table to store currencies. Create a table to store genders. Create a table to store units and conversions. Create a table to store customers. Create tables to store products and product categories and product variations and payments, et cetera. Each of us who has done some database development has created those things over and over again. And I can tell you, I'm so fed up with it. I don't want to do it anymore. So what we came up with was the idea of readymade components. If the client says, okay guys, we need addresses: why do you want to invent a new address table? It has been done a million times before. So just take an existing one, and in case it needs variations, just modify it. That's the key idea: you basically have pre-made components for the things people really need, like currencies, genders, purchase processes, credit card logs, audit logs, GPS tracks, time series data for sensors, and stuff like that. It doesn't mean that those components are all perfect. It just means that they are readymade, and it's easier to modify them than to start from scratch all over again and then have those things end up inconsistent. So these components come with many advantages. First of all, they're tested, right? Secondly, it's a lot faster. You can get feedback a lot faster.
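A readymade component can be pictured as nothing more exotic than a pre-written, tested schema fragment that you extend instead of retyping. This sketch is my own illustration, not Cypex's actual address component:

```sql
-- A pre-made address building block, written and tested once,
-- reused in every project.
CREATE TABLE address (
    id           serial PRIMARY KEY,
    street       text NOT NULL,
    city         text NOT NULL,
    postal_code  text NOT NULL,
    country_code char(2) NOT NULL  -- ISO 3166-1 alpha-2
);

-- The client says "we need one more field": extend the component
-- instead of inventing yet another address table from scratch.
ALTER TABLE address ADD COLUMN delivery_notes text;
```

The point is that the variation is one `ALTER TABLE`, while the 90% that never varies stays consistent across every application that uses the block.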
So you ask the client, do you need addresses? And he says, yes. So you say, okay, let's add addresses as a building block. And then you ask him, is that what you want? He says, well, we need one more field for something. That's a lot easier than typing it all up from scratch. It's so much better: fewer bugs, and it's more consistent because the addresses are the same everywhere. It's really, really nice. So those are the components. But of course, when we're talking about this kind of tooling (as you can see, this really comes out of a practical need), there's also some criticism people throw at you when you start with this kind of development. And let me try to address some of it. First of all: a generator cannot do everything. And what I say to that is: of course not. What Cypex was made for is forms, dashboards, workflows, API automation, displaying stuff, rapid development. That's going to be perfect for 90% of what you're going to need. If you're building an application for, let's say, running a restaurant, 90% of it is going to be simple input forms, prices, taxes, stock, dashboards, simple stuff. And maybe 10% is super fancy, custom-layout pages, super efficient, whatever. And in this case, what we're saying is: okay, let's do the 90% with the generator. And for the remaining 10%, you just add manual pages and do whatever you want. You can have your spinning airplanes and your rotating gummy bears with video animation, or whatever you want. So it's not meant to do everything. It's just meant to help you automate 90% of what you're trying to do, which allows you to focus on what really matters: the remaining 10%, right? If there are any questions, just feel free to ask them. So if there's anything: it's really meant to be 90%, right? Next question: can it scale? Yes, we can. I think that was quite a famous slogan recently.
First of all, what Cypex does is: out of the database, we create an API, then comes a web server, and at the end of the day, it's rendered. We've been doing Postgres for 20 years. We know how to scale a Postgres database, right? We certainly know how to scale a web server. And all the rest happens on the client side. So the client gets the API exposed to it, renders the UI, and calls the database functionality; we can scale it, we can make it highly available, and everything. So this is going to be perfect, right? We can certainly scale it out to a decent size. That's not going to be a concern. We can address the deficiencies of pure generation, because we can always plug in manual code, business logic, and custom pages. We can scale it out because we can load balance and stuff like that, and we know how to scale Postgres. So basically, we can scale every layer here. We don't have to scale the client, because we don't care: it's a cross-platform browser. Nginx can certainly be scaled easily. The same is true for the Node.js backend and for the API part. And of course, Postgres is as it is anyway, so that's no problem at all. The next thing people usually bring up is security. Security is certainly important, especially in times of data protection laws and legislation: GDPR, the POPI Act in South Africa, and all those regulations. So what we try to do is make a tool that is secure by default. We're building on top of standard components and don't try to reinvent the wheel. Also, we have a consistent security model all across the application. Security actually begins at the database level. We're setting database permissions, which automatically turns into a secure API, because we're only exposing stuff which is actually visible on the database side.
The application is also considered secure, because it's talking to this API, which is only going to expose what you are allowed to see. So there's no chance for the client to get anything wrong, because you don't see it on the API side anyway by default. Then, we've designed it in a way that we can work with all kinds of authentication. We can have local database users: you log into the application as Joe, and in the database, you're going to be Joe. Then we have the concept of mapped users. Suppose you have 100 bookkeepers who are all allowed to do the same thing. So there's a database user called bookkeeper, and then you can map as many logins as you want onto that database user: you just have 100 logins which all end up as one specific database user. And finally, we have support for single sign-on, which means LDAP, Active Directory, and whatever you might need. The reason we did that in the first release is that in a big organization, there's always some sort of centralized user management, single sign-on, or something like that. So we instantly wanted to be ready to integrate with that kind of authentication tooling. It's there by default. We try to be good citizens by providing secure applications. So, security at every level: that's the gist of the next slide. On the database level, we use everything available in the Postgres universe. We support standard Postgres. We support PostgreSQL TDE, which is encrypted Postgres, which basically stores data on disk in an encrypted way. It also works with our PostgreSQL Enterprise Edition. You can use row-level security, table permissions, column permissions. You can use everything Postgres has to offer: user management, security barriers, et cetera. Then on the API level, we can again integrate with Active Directory and have all these authentication layers.
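Security starting at the database level, with many logins mapped onto one database role, can be sketched in standard Postgres. The role, table, and setting names here are invented for the example; they are not Cypex's actual objects:

```sql
-- One group role for 100 bookkeepers; individual logins inherit it.
CREATE ROLE bookkeeper NOLOGIN;
CREATE ROLE joe LOGIN PASSWORD 'secret' IN ROLE bookkeeper;

CREATE TABLE invoice (
    id         serial PRIMARY KEY,
    department text NOT NULL,
    amount     numeric(12, 2) NOT NULL
);

-- Table permissions: a generated API can only expose what is granted.
GRANT SELECT, INSERT ON invoice TO bookkeeper;

-- Row-level security: bookkeepers see only their own department's rows
-- (here driven by a hypothetical per-session setting).
ALTER TABLE invoice ENABLE ROW LEVEL SECURITY;
CREATE POLICY invoice_by_department ON invoice
    USING (department = current_setting('app.department'));
```

Because the restriction lives in the database, every layer above it (API, UI) inherits the same rules instead of re-implementing them.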
And finally, in the front end, by default, everything is SSL-encrypted. We try to go for secure design elements, guarding against cross-site scripting, et cetera. And then there are automatic library updates, so if a problem shows up in a library, we can still be sure that nothing bad can happen. We try to be very focused on this security theme. So finally: efficiency matters. Imagine you're in an organization with 1,000 employees, and every year you're able to save just 10 hours per person. Just do the math: 10,000 hours multiplied by any hourly rate (at, say, 50 dollars an hour, that's already half a million per year) is going to be a really significant number. So efficiency matters. If you can just automate a little bit more, it can really pay off big time. But our aim is not just to speed things up a bit. Our aim is to tackle those 80% or 90% of software development which are pure boilerplate: repetitive, inefficient, always the same type of development, like addresses and stuff like that. We really want to tackle those 90%, because the really hard stuff always has to be written manually. So finally, there are some key learnings here. What I want to pass on is that gathering specifications more quickly matters. And what I mean by gathering specifications more quickly is: get customer feedback as soon as you can. Because those guys are the only ones who really know what's going on. Just going back to my earlier example with pork prices for slaughterhouses in Switzerland: those guys know everything about meat management. Everything. The problem is they just don't know it the way you need to know it. They know how to cut it and slice it and whatever. They just don't have it in the form you need. So you really have to show them something they can click on or see or inspect or test, so that they can release their wisdom to you. Because at the end of the day, it's about information transfer.
It's not about writing a button. That's not the problem. It's really about which button, where, and what it is supposed to do. Let me give you an example. I was talking to a bank the other day, and the question was, how many customers do you have? And they gave me a number. And I asked them, what's a customer? How do you count them? Nobody could give me an answer. Everybody knew, okay, we've got a million customers, but nobody actually knew what a customer is. If you have two bank accounts, are you one customer? If you just walk into the bank and they have your address and you'll open up a bank account next year, are you a customer? If you have five companies, how many customers is that if they all belong to you? So they didn't even have a clue what a customer is. How can you do software development if you cannot answer those questions in a unified and standard way? The next key learning is: build database applications more quickly, which means faster time to market and faster, cheaper development. Just consider this COVID situation. I flew to South Africa on the 27th of February. I was bush flying for three weeks in Africa, flying through Botswana, Swaziland, Zimbabwe, South Africa. I came back on the 18th and went straight into lockdown: took the last airplane from Dubai and went straight into solitary confinement. So within three weeks, the whole world had changed. And imagine you are a government that has to respond with government processes within days or weeks, because there's no time. Do you really want to start with 1,000 pages of specification and blah, blah, blah? No. Just get going and modify on the fly. That's what you want. So speed matters. It's not about, oh, it's cheaper or something. Sometimes it's really about necessity: we need this thing tomorrow. So it really matters. The next thing is how to consolidate what I call free-roaming Excel sheets. Everybody knows that: something new pops up in the company.
And what they do is they just create a new Excel sheet, put in data in some strange way, and cross-reference Excel sheets. And if you move something, everything breaks, et cetera. Everybody knows that. But everybody also understands that a proper database is, of course, better than some Excel sheets floating around. So why not just consolidate it and do it in a centralized way? And finally, as Oracle seems to be ruining its own market, we want to give people an alternative to APEX, so that they can do faster development and get off Oracle better and more quickly. So that leaves me with: any questions? Anything I might be able to answer? I'm seeing one from David; I'm not sure I fully understand it. It says, can you go over the stack again? The stack is going to be Postgres with high availability, so it's basically Postgres with Patroni, virtual IPs, and stuff like that. Then we're talking about some database logic, which enforces workflows, user management, et cetera. So there's Postgres as a foundation with high availability, meaning Patroni, vip-manager, stuff like that. Then comes some server-side Postgres code, which is fully cloud-ready: there is no extension in there that is not available on the common public clouds. We just try to go as off-the-shelf as we can. Then there's automatic API generation, which comes as a container, so there's Node.js talking to Postgres. And at the end of the day, what you see in the client is basically an application that renders a configuration coming from the server, using React. And of course, there's backup tooling and Docker and stuff, just so you can put it into Kubernetes and things like that. So that's the stack. Does that sufficiently answer your question? I hope it does. Let's give everyone a minute to type if they need to type. Otherwise, this was really, really thorough. I hope it is. So as you can see here, there's a lot of GIS stuff coming along.
And of course, we are extending those components. We're improving more on the security side. We're adding more features to the state machine. We're doing more on the model builder side, et cetera. How much does it cost for four cores? Maybe you can just send me an email, and we can discuss the requirements so that I can give you a price, because I'd have to look it up myself; it's going to be a marketing question. So maybe you can just send me an email here, and I can get you a price for that. Any more questions? That is all I'm seeing come through. It was really thorough. Thank you. No, thank you. You're welcome. Everyone now has Hans's email address, so for any further questions, you can either send them straight to the Postgres Discord channel or email Hans. With that, I'm going to go ahead and wrap up. We've ended early today, giving everyone back about 15 minutes of their day. So thank you so, so much, Hans. This was fascinating. This was a joy to watch, and even I learned something. To everybody on the line, thank you so much for joining us as well and giving us a part of your day. So regardless of where you are, whether it's morning, afternoon, or evening, I hope you have a great rest of your day, and I hope to see you at future Postgres Conference webinars. So cool. Thank you. Have a nice day.