Welcome back, everyone, to theCUBE's live coverage of Teradata Possible. I'm your host, Rebecca Knight, along with my co-host and analyst, Rob Strechay. We are joined by Boris Zibitsker. He is the CEO of BEZNext, live from Chicago. Direct from Chicago, I should say. Thank you so much for coming on the show, Boris.

Thank you, thank you.

So I want to start by asking you about the typical problems that customers face on their journey to the cloud. What are some of the things that you see? What are some of the pain points?

Well, there are so many challenges, especially in the age of AI. I think most of the customers moving in this direction have two major concerns. One of them is performance, and the second one is cost control: how to optimize performance and cost, and actually control cost, in this environment. And that includes a lot of different issues. One of them is how to select the appropriate cloud platform for my workload. Then, when the platform is selected, how to migrate existing production applications to the cloud on time and within budget. This is a big challenge. When applications have migrated to the cloud, how to organize dynamic capacity management and manage all of this workload in the cloud. And typically, people start with one cloud, but very soon they discover they actually have to support a hybrid multi-cloud environment; not everything can move to the cloud. So how do you manage this hybrid multi-cloud environment effectively? And I think the most critical problem these days is new applications. If you're going to develop and deploy a new GenAI application in the cloud, how do you estimate what kind of platform and what kind of resources you will need, what minimum amount of resources should be allocated to support the performance goals, and what your cost will be?
And there are many other issues, but these are the primary concerns for people who are moving and managing their workloads in the cloud, and especially for those thinking about new applications, machine learning and AI applications. What should be done to be sure they will work well?

And multi-cloud, even under the best circumstances, can be costly and can be hard to observe. We have a lot of discussions around observability and how you understand and gather the data. Is that what you're helping these customers with: you gather the data, you help them understand the cost and performance, and really dive into that?

Yeah, I think the key question here, first of all, is to understand what lines of business are supported by the cloud and what the service-level goals are, performance goals, for example. And in order to achieve those goals, we definitely need observability, and organizing observability in a hybrid multi-cloud environment is challenging. So what do we do? We have agents extracting data from different platforms in real time. Every hour we collect measurement data, and we aggregate all of this measurement data by line of business. For each line of business, it's important to understand: what is their performance? What is the resource utilization? What is the data usage? And, by the way, how much does it cost? All of these parameters change every hour. So we have to understand the history, we understand what's going on, and this is the information for anomaly detection. It's important to discover performance anomalies, resource anomalies, and cost anomalies, find the root causes, and develop tuning recommendations.

Yeah, do you see companies really taking applications to multiple clouds, or were you more saying, hey, we help them with recommendations: this is the best cloud you should take for this business unit, and for that other business unit, we're going to take that application to this other cloud?
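The hourly aggregate-and-detect loop described above could be sketched as follows. This is a minimal illustration only; the metric names, the dictionary layout, and the z-score threshold are assumptions for the example, not BEZNext's actual implementation:

```python
from collections import defaultdict
from statistics import mean, stdev

def aggregate_by_lob(samples):
    """Group raw hourly measurement records by line of business."""
    by_lob = defaultdict(list)
    for sample in samples:
        by_lob[sample["lob"]].append(sample)
    return by_lob

def detect_anomalies(history, latest, z_threshold=3.0):
    """Flag metrics in the latest hour that deviate more than
    z_threshold standard deviations from their historical mean."""
    anomalies = {}
    for metric, value in latest.items():
        past = [hour[metric] for hour in history if metric in hour]
        if len(past) < 2:
            continue  # not enough history to judge
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            anomalies[metric] = {"value": value, "mean": mu}
    return anomalies
```

In practice each flagged metric would feed the root-cause analysis and tuning-recommendation steps Boris mentions, rather than being reported raw.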
Is that where you help these customers make those decisions?

I think most often customers would like to move a line of business to the cloud. That means we have to find what kind of applications belong to this line of business, and what kind of databases and tables, because you don't want to move part of your data to one cloud and other data to another cloud; that creates a lot of issues. So it's about understanding what's going on in the line of business, and moving workloads rather than moving specific applications.

One of the questions that keeps business leaders up at night is how do you do more with less, and that relates to economic uncertainty, to technological change, to this hybrid cloud environment. What is the best next approach, and how do you help your clients think through those challenges?

So we use pretty sophisticated models. For cloud selection, we use so-called queueing network models and gradient optimization, and these models use information from observability, so we know the profile of each application. The models and the optimization layer allow us to compare different options. Let's say, for a specific platform, should we scale out or should we scale up? What is the minimum configuration that will be sufficient to support the service-level goals? Finding the minimum configuration and making scale-out or scale-up recommendations, this is the first step. When we have the answer to this question, the next question is: what are the pricing models for this platform? And based on the pricing model, like a consumption model or other models, we try to figure out what the best pricing model is for this particular workload on this particular cloud. Then, having information about the minimum configuration and the pricing model, we go on to estimate cost. And we estimate the cost after migration to the cloud, but this is only the first step. Then we have to take into consideration that each workload will be growing: they're going to have more users, and the volume of data will be growing.
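The minimum-configuration search just described can be illustrated with a toy sketch: given a list of candidate configurations and a predictor for response time, pick the cheapest one that meets the service-level goal, then cost it out. The configuration fields, the predictor, and the flat hourly pricing are illustrative assumptions; BEZNext's actual models and pricing logic are far richer:

```python
def find_min_config(configs, predict_response_time, sla_seconds):
    """Return the cheapest configuration whose predicted response
    time meets the service-level goal, or None if none qualifies."""
    feasible = [c for c in configs
                if predict_response_time(c) <= sla_seconds]
    return min(feasible, key=lambda c: c["hourly_cost"], default=None)

def monthly_cost(config, hours_per_month=730):
    """Simple cost estimate under a flat hourly pricing model."""
    return config["hourly_cost"] * hours_per_month
```

For example, with a toy predictor where response time falls linearly with node count, the search picks the smallest configuration that still meets a 3-second goal:

```python
configs = [{"name": "2-node", "nodes": 2, "hourly_cost": 4.0},
           {"name": "4-node", "nodes": 4, "hourly_cost": 8.0},
           {"name": "8-node", "nodes": 8, "hourly_cost": 16.0}]
predict = lambda c: 10.0 / c["nodes"]  # hypothetical response-time model
best = find_min_config(configs, predict, sla_seconds=3.0)
```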
And, by the way, they're going to implement new applications, for example, GenAI or machine learning applications. All of these things we take into consideration to estimate and predict the minimum configuration and the cost we expect will be required to support the business workload on each of the cloud platforms. Then we have a mechanism to compare Teradata solutions, Snowflake, BigQuery, Databricks, et cetera. But the goal, first of all, is to use the model to evaluate all the permutations, and, by the way, workload management and other possibilities as well. So we evaluate the different options, recommend the minimum configuration, estimate the cost, and then compare costs and make a business decision. What is interesting here is that our modeling allows us to evaluate different options and set realistic expectations: if you're moving in this direction, what do you expect in terms of performance, and what is the cost? After implementing all these recommendations, we have the ability to compare actual results with expected results and continue with this process.

So how does this really work? Because, again, looking at the Teradata technology, they have a lot of instrumentation inside the platform. What are you doing to help Teradata customers really get to the cloud faster?

Right. One of the advantages of working with the Teradata environment is that Teradata has an excellent mechanism for collecting measurement data from ResUsage, DBQL, and other sources. So we automatically extract this measurement data, and we support all Teradata platforms, including on-premises, Vantage Enterprise Edition, and VantageCloud Lake, okay? By having measurement data, we know what's going on. Then our models use this data to evaluate different options related to Vantage Enterprise Edition. Or, right now, we are starting to support VantageCloud Lake, which brings a lot of new issues with the primary cluster and compute clusters, a lot of new options, exciting opportunities.
And our goal is to say: well, if you're moving from on-premises to, let's say, Vantage Enterprise Edition, what is the minimum configuration you need and what will be the cost? If you're going to move to Teradata VantageCloud Lake, it's a different ballgame: different types of instances and a different way of scaling, et cetera. So relying on actual measurement data characterizing each line of business helps us build realistic models, and helps customers estimate performance and cost and develop recommendations. By the way, one of them could be workload management recommendations: what the priorities should be, et cetera.

Can you give an example of the value of this approach to Teradata customers?

Okay, well, I think the value is that when people are moving to the cloud and managing clouds, there are so many unknowns, okay? A lot of uncertainty and risk of performance and, most importantly, cost surprises. And we're talking about a hybrid multi-cloud environment; we talk about Teradata, we talk about Snowflake, we talk about BigQuery, and each line of business potentially can go to a different cloud. So how do you really make a decision and be sure it will work well? Modeling allows us to evaluate all the different options and set realistic expectations. But what is important as the next step is validation. If we have expectations and we implement a specific solution, then we can compare actual versus expected, which will always be a little bit different, okay? And if it's different, the next question is: why is it different? What's causing that? And what can be done in order to solve this problem? So the value is in reducing uncertainty, reducing the risk of performance surprises, and controlling performance and cost. This is a very important issue and concern for many customers.

And it must be, I mean, your intellectual property is the ML under the hood, the machine learning that you've used.

So we use machine learning models for anomaly detection, okay?
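The actual-versus-expected validation step lends itself to a simple sketch: compare each measured metric against the model's expectation and surface only the deviations worth investigating. The metric names and the 10% tolerance are illustrative assumptions:

```python
def validate(expected, actual, tolerance=0.10):
    """Compare actual metrics against model expectations and report
    relative deviations whose magnitude exceeds the tolerance."""
    deviations = {}
    for metric, exp in expected.items():
        act = actual.get(metric)
        if act is None or exp == 0:
            continue  # nothing measured, or no baseline to compare
        rel = (act - exp) / exp
        if abs(rel) > tolerance:
            deviations[metric] = round(rel, 3)
    return deviations
```

A 25% cost overrun would be flagged for root-cause analysis, while a 2.5% response-time difference, as Boris notes, is the normal gap between model and reality and passes silently.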
And for root cause analysis. And we use so-called queueing network models and gradient optimization, technology which we developed to support the evaluation of different platforms and making those decisions. So we use different types of models.

Yeah, multiple different models. That's your intellectual property: the different models, applied differently.

So right now, when you're talking about GenAI, okay, the concern at the development stage of this new application is to understand exactly, okay, what do we need? People sometimes think, well, GenAI is going to solve our problems. But actually, GenAI is an interface. It's a human-machine interface and nothing more. And the results will not necessarily reflect your own legal documents or the observations in your business. So customers have to think about how to take advantage of general models, large language models, and at the same time maybe have another mechanism for building their own knowledge base, which is very different from an LLM, okay? So how do you create your own knowledge base? This is a very challenging process. They have to have people who can translate their data into certain rules and be able to build the knowledge base mechanism. Then, when people generate questions, there will be another layer: shall we route the request to the LLM, okay, or maybe go to a specific knowledge base built from the customer's legal documents or some observations? And use their data to improve the accuracy of the results of the inquiries.

When you were talking about what the models are going to say and what they are predicting for business outcomes, you said it is always a bit different from the actual business outcomes. Not vastly different, but there are some differences. Is that due to the fact that there is a human involved here, and that it's not necessarily what the machines can predict all the time, that we are having humans in this generative AI?
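The routing layer Boris describes, deciding whether a question goes to the general LLM or to the customer's own knowledge base, could be sketched with a deliberately naive heuristic. The keyword match below is purely illustrative; a real router would use embeddings or a classifier:

```python
def route(query, kb_keywords):
    """Route a query to the domain knowledge base if it mentions
    domain-specific terms, otherwise to the general LLM.
    (Illustrative keyword heuristic, not a production router.)"""
    query_lower = query.lower()
    if any(keyword in query_lower for keyword in kb_keywords):
        return "knowledge_base"
    return "general_llm"
```

The point of the layer is the one made above: answers grounded in the customer's own documents come from the curated knowledge base, while general questions fall through to the large model.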
This is a good point, because if you think about the GenAI mechanism, it will provide you one answer to your question, but in reality there can be multiple answers. It would be nice to have them all: as a result of this question, we have several possible outcomes, most likely this one, but maybe a different one. This is what GenAI is not doing, right? It will provide one answer to one question. And it definitely depends on the interpretation of the person asking this type of question. So our goal, using our technology during development and implementation of this type of application, is to collect data at a relatively small scale, when people start using GenAI on maybe a VantageCloud Lake environment with just one or several compute clusters. Get some experience, collect measurement data, and then predict: if you're going to open the GenAI application to the general public, how many people are going to generate these kinds of questions? What should the throughput of the system be? And, by the way, what is the expected response time? Should it be sub-second, a couple of seconds, a couple of minutes? This is the input. Then our goal is to evaluate what minimum configuration you will need if you do it on a Teradata environment, or maybe other environments, and what the cost will be. So we provide this information before deployment of the new application, to set realistic expectations. This is an exciting opportunity, but there's a lot of work ahead.

It makes total sense. And I think a lot of companies and organizations that we talk to definitely struggle with understanding and taking a business-first approach. And it seems like you not only have the technology, but you have the people who can come in and help them understand and apply this properly. Is that a big piece of what you're doing as well?

Yes. So with our software and our services, we show customers how to evaluate different options and develop realistic expectations before it's too late.

Boris Zibitsker, thank you so much for coming on theCUBE.
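The sizing question posed above, given an expected question arrival rate and a response-time target, how many compute clusters are needed, can be sketched with a classic M/M/c queueing model (Erlang C). This is a textbook approximation offered as an illustration of the queueing-network approach, not BEZNext's actual model; the arrival and service rates are hypothetical inputs:

```python
import math

def erlang_c(c, a):
    """Probability that a query must wait in an M/M/c queue,
    where a = arrival_rate / service_rate is the offered load."""
    if a >= c:
        return 1.0  # unstable: every query effectively waits
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) * c / (c - a)
    return top / (s + top)

def min_clusters(arrival_rate, service_rate, target_resp):
    """Smallest number of compute clusters keeping mean response
    time (queueing wait + service time) under the target,
    assuming each cluster serves service_rate queries/second."""
    a = arrival_rate / service_rate
    c = max(1, math.ceil(a))
    while True:
        if a < c:
            wait = erlang_c(c, a) / (c * service_rate - arrival_rate)
            if wait + 1 / service_rate <= target_resp:
                return c
        c += 1
```

For instance, 10 questions/second against clusters that each serve 2 queries/second (0.5 s service time) with a 0.6-second response-time goal yields a concrete minimum cluster count, which is exactly the kind of pre-deployment answer the modeling is meant to provide.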
This was a really fascinating conversation.

Thank you, thank you.

I'm Rebecca Knight, for Rob Strechay. Stay tuned for more of theCUBE's live coverage of Teradata Possible.