Hello everyone, and welcome back to theCUBE's ongoing live coverage of HPE Discover Barcelona 2023. I'm your host, Rebecca Knight, along with my co-host and analyst, Rob Strechay. This is day two, there's a lot of energy and excitement in the room, I can feel it.

Yeah, we're getting into the meat of things here, and hearing from some great partners is part of it. I'm really excited about this next guest.

Yes, yes, he was on the main stage this morning. Karl Havard, managing director at Taiga Cloud, great name for the company. Well done, congratulations on being up there with Antonio this morning.

Yeah, it was fun, it was fun.

What really struck me was how you began, with such a note of optimism about this moment in time.

Yeah, yeah, and I mentioned my gray beard at the time as well.

You did, you did. Which I related to.

Yeah, it was relatable, right? So yeah, it is, isn't it? I mean, I don't know if you caught the lady towards the end who was going to be a judge and decided not to be. She took the other side of the coin as well. I think that's absolutely right, it's a balanced thing. I think we're at a moment in time with generative AI, and AI as a whole, where technology has evolved to support human creativity, and who knows what's on the horizon, which is great. People's imagination can now be brought to life. But on the flip side of that, imagination can go anywhere. So I think we have to do this in absolutely the most ethical way we possibly can: support the right organizations to, I guess, develop what they want to develop, and identify those, call them rogue types of organizations, that want to get hold of this technology, and prevent them from doing the wrong things with it.

Right, right, so doing AI right, doing AI ethically or responsibly, is critical. Tell our viewers a little bit about Taiga Cloud.

Okay, so Taiga Cloud is Europe's first, largest, and cleanest generative AI cloud service provider. We focus on generative AI. Our infrastructure is all GPU-based. We don't do any app migration or any big data storage; it's purely about helping organizations get access to this technology to train, tune, and run inference on the large language models, or whatever models they're building, for AI purposes. The way we've gone about building our business so far, and the way we'll continue to build it, is, as well as being the first and largest in Europe, which is great, and that's with a lot of help from HPE here, we're also doing it, back to your point, in an ethical way, making sure that this compute, which is really high-density, power-hungry equipment, is housed in the right way. What I mean by that is data centers that are clean-energy driven, so hydroelectric powered, not nuclear, so I should say renewable energy. Predominantly hydroelectric powered, but also cooled in the same way, through natural sources. I think I gave the example this morning that our data center in Sweden, just outside the Arctic Circle, is completely naturally cooled. Our Lefdal data center in Norway is in a disused mine below sea level, so it uses natural fjord cooling. That means people can consciously decide how to train their generative AI, because we're transparent about it. The other thing we're transparent about is not just the source of the energy, it's the efficiency of the energy we use. I don't know if you're familiar with PUE, power usage effectiveness? Acceptable levels in the industry are around 1.4, 1.45; all our data centers are at 1.2 or below. Our Boden data center is at 1.06, which is a good number, but we know we can do better, so we want to draw that down.
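For readers less familiar with the metric Karl references here: PUE is simply the ratio of total facility power to the power actually delivered to the IT equipment, so 1.0 is the theoretical ideal. A minimal sketch of the calculation, using illustrative wattages that are not from the interview:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal, where every watt drawn goes to the IT
    load; anything above that is overhead such as cooling, power
    conversion, and lighting.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative figures only (not from the interview): a site drawing
# 1,060 kW in total to run a 1,000 kW IT load has a PUE of 1.06, the
# number quoted for the Boden facility.
print(pue(1_060, 1_000))  # 1.06
print(pue(1_450, 1_000))  # 1.45, the "acceptable" industry level mentioned
```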
But a sustainable way of creating your applications with generative AI is absolutely at the heart of what we do. There are other things I could keep going on about, if you want.

Well, I mean, today's the first day of COP28 down in Dubai, so I think it's very timely to be talking about this. And when you start to look at gen AI and all the data and all the crunching, I mean, people are talking tens of thousands of GPUs to go and train models. What are you seeing from your customers? Because you talked about customers of all different sizes and democratizing AI. Get into that a little bit.

Yeah, I mean, that's one of our core values, which I think Taiga shares with HPE and vice versa. Because if you look at the market dynamics, which clearly you have, there is big demand for this compute capability, and organizations are buying it in bulk. Some of those organizations are buying it purely for their own usage, which is fine, because they're big hyperscalers and they have to keep developing and enhancing their products. However, I think that limits access for startups and enterprises who have great ideas. They've got the right skill sets; they just don't have access to the right compute power to bring them to life. So our whole ethos is to democratize access to this. So we're buying quite a few GPUs. I mean, the announcement we made today was an additional 330 million investment with HPE. That's on top of what we previously invested, so we're now at about 800 million total. They're crazy big numbers, but it's a nice place for us to be. The whole point of us is to say: we'll buy these GPUs, we'll make them live, we'll put them in the right environments, and we'll make them open and accessible. If you're a small startup and you only need access to a couple of H100 machines, you can do that with us. If you're a bigger enterprise and you've got this great model you wish to train, but you only need the capacity for a couple of months, we'll do that as well. Organizations have knocked on our door and said, we'll take everything you've got for three years. We won't do those deals. We want to, I guess, fuel the European economy from a generative AI point of view, to be part of that ecosystem, which again was mentioned this morning, and allow people to roll on and roll off as and when they want to. And sorry, the second part of your question, I think, was the types of customers. We're not focused on specific organizations, you know, who are going to take a lot of GPUs. Startups that share the same values as us are absolutely at the heart of our target audience, call it that, because if we can help them succeed, great, we've done part of our job. Likewise, life science companies working on drug discovery, universities doing research around things like that, perfect customers as well. So yeah, we want to make sure everyone has a fair share.

Are you seeing, sorry, are you seeing that people are coming to you, especially being European based and European headquartered, and saying, hey, I want to build a European foundational model, so it comes out of Europe, versus, hey, we're taking Llama 2 or one of these other foundational models?

Yeah, yeah, we are, more so now.
You know, a few months ago, we weren't necessarily seeing that. And I think there have been a few scares from an EU compliance perspective, which affects not just European organizations but, as you know, global organizations that have European customer bases. So we've had a lot of people come to us and say, okay, we like what you're doing, you're a European organization, you can offer us true sovereignty, true compliance with GDPR, et cetera. And if anything happens inadvertently, I don't know, a diplomatic disagreement between countries on either side of the Atlantic, and it won't happen, but if it did, we can help them tick their compliance box and say they have full control over their data in the region they're within. And I think we're getting more US companies requesting this too, which is interesting.

Well, it's an uncertain geopolitical environment; I understand that. One of the things you just mentioned is having the same kind of values as the startups you work with, and how that's a real priority for you. It's also, I would imagine, something you share with HPE: this idea of shared values, shared priorities, a shared commitment to sustainability and to democratizing AI. Why is that so important for your working relationship, and what does it bring to customers? What's the value that customers feel from that?

I think it's not just what we offer with generative AI. If you've got that common purpose, that common aim or objective, and it's not a monetary objective, I mean, that is important in some instances, but you say, okay, here's this customer, they can accelerate what they're doing in genomic research in order to find a cure for cancer, or whatever it might be. Then HPE obviously is very interested in helping them. And in partnership with us, we can help that customer in a specific area, but HPE has a breadth and depth of other things it can also bring to the table. So that's great for the customer. If you take the customer-centric view, together, Taiga and HPE can offer that customer a full, rounded solution, with the intent of helping them succeed in their own objectives, ideally for the benefit of the planet or humankind or whatever it may be. But yeah, having worked with these guys for quite some time now, even on a personal level, you can sense those values are true.

How do you see the customers? Are they transient, or do they come and build and host there? How does that work for you? Because if you have a lot of GPUs and you're doing inference, for instance, maybe you train the model and then do inference, and maybe you don't need as many H100s.

Yeah, when you move from training and tuning to inference, absolutely, it's a slightly, I won't say different, but a changed config that's required to help an organization move to inference. So what we do, as Taiga, is infrastructure as a service. In simple terms, we rent the compute power out and host the customer, et cetera. We do have a sister company in our group called Ardent, who manage and build out a lot of data centers. They offer colo capabilities as well. So if organizations have access to their own compute power, we can bring them on stream too and offer them that hybrid model, should they want it. So yeah, did I answer your question?

Yeah, I guess, are people coming in, renting it for a month, two months, training? Or is it, hey, they come in, not on a three-year contract, but they're building out their apps, they're training, then they're tuning the models, then they start doing some inference and re-tuning the models? What's happening?

That's, I guess, the beauty of what we've tried to construct as a European cloud for GenAI. We have these islands of 2,032 GPUs, all configured in the same way, located in different locations. Now, if someone wants to train a model for, say, a three- or six-month period, we can do that. When they move to inference, we can give them a slightly different config and move them to a data center probably better suited to inference. Across our estate we have tier three data centers, which means high levels of security and minimal downtime, I think 1.6 hours per year of downtime is what's allowed, with round-trip times of 12 milliseconds or less. So when it comes to inferencing, even though there may be less demand on the GPUs, the CPUs come into play, and you need that balance. That's when we'd probably move them to a slightly different area of our cloud to suit them. But they can pick and mix, that's the beauty.
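As a quick sanity check on the downtime figure Karl cites: Tier III facilities are commonly rated at 99.982% availability, which works out to roughly 1.6 hours of allowable downtime per year. A back-of-the-envelope sketch:

```python
# Tier III data centers are commonly rated at 99.982% availability
# (the Uptime Institute's tier classification).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours
TIER_III_AVAILABILITY = 0.99982

allowed_downtime = HOURS_PER_YEAR * (1 - TIER_III_AVAILABILITY)
print(f"Allowed downtime: {allowed_downtime:.2f} hours/year")  # ~1.58, i.e. about 1.6
```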
And to your point about contract lengths: one, three, six months is fine; 12 months, 24 months, 36 months is fine. Our longer-term contracts tend to be with organizations that serve a number of customers themselves. They reserve a bank of compute power and serve several customers within it. So it still fits within democratizing access.

It's almost doing LLMs as a service, or something like that.

To a degree, yeah, absolutely. And we need to get better at that, because at this point in our evolution we are very much infrastructure as a service, but we know we have to enhance our managed services on top of that, and also our software skills. So we're ramping up our skill sets, bringing in people who really understand how to construct, advise on, even audit LLMs, so that they're configured in the right way, et cetera.

The skill sets, and we talk about this a lot, the skill sets around that are still so nascent for a lot of organizations. Are you finding that, and is that why you're going down that path?

Yeah, it is. I mean, when any new technology emerges, and I know gen AI is different, it emerges rapidly.

Well, it's ChatGPT's birthday today.

Oh, is it? Okay, I didn't know that, I didn't send a card. But yeah, the skills tend to follow, don't they? And therefore there's always this catch-up. So we know we need to be ahead of the game, and being an elite partner of NVIDIA helps us, because we get future sight of what's coming down the line. We can see, you know, H200, Grace Hopper, et cetera, advance notice of what that entails. We get to see what they're developing on their software stacks, same with HPE. So we're slightly ahead of the game, and we can attract the right talent in. But if we don't get all the talent in ourselves, and that's probably not our true model, we're not going to scale heavily that way, that's where the partnership with HPE comes in, because they have that talent around the world.

They've got the talent, right, exactly.

Yeah, yeah, yeah.

Excellent. Well, Karl, thank you so much for coming on theCUBE, it was a pleasure having you here.

Thank you, glad to be here. Thanks.
I'm Rebecca Knight, for Rob Strechay. Stay tuned for more of theCUBE's live coverage of HPE Discover Barcelona. You are watching theCUBE, the leader in high-tech enterprise coverage.