The webinar was organized by Supermicro in partnership with Nvidia to introduce the concept of the "AI Factory": a data center optimized for generative AI.
An AI Factory is specialized infrastructure for training, fine-tuning, and inference of large-scale AI models, such as large language models (LLMs).
Telcos (telecom operators) are positioned as providers of sovereign AI Factories for their countries thanks to advantages they already have: data centers, experience operating complex networks, and close relationships with governments.
Nvidia frames its AI strategy for telcos around four pillars: AI factories, AI for telco operations, AI-enabled RAN (radio access network), and AI network infrastructure.
Supermicro and Nvidia jointly build reference architectures for AI factory deployments that shorten "time to first training"; for example, the xAI project (Elon Musk's AI company) brought 100,000 H100 GPUs into service in 122 days.
Systems are modular, network-optimized, liquid-cooled, and shipped as pre-validated clusters to shorten time to production.
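The modular build-out can be illustrated with a back-of-envelope sizing sketch. This is illustrative only: it assumes 8-GPU HGX servers and 32-server "scalable units" as described later in the webinar, and real deployments may use different block sizes.

```python
# Back-of-envelope sizing for a modular AI factory build-out.
# Assumptions (illustrative, taken from the webinar's figures, not a spec):
#   - each HGX server carries 8 GPUs
#   - a "scalable unit" groups 32 HGX servers, pre-validated as one block

GPUS_PER_SERVER = 8
SERVERS_PER_SCALABLE_UNIT = 32
GPUS_PER_SCALABLE_UNIT = GPUS_PER_SERVER * SERVERS_PER_SCALABLE_UNIT  # 256

def scalable_units_needed(total_gpus: int) -> int:
    """Number of pre-tested blocks to ship for a target GPU count."""
    # Round up: a partial block still ships as a whole unit.
    return -(-total_gpus // GPUS_PER_SCALABLE_UNIT)

# The xAI example cited in the webinar: 100,000 H100 GPUs in 122 days.
units = scalable_units_needed(100_000)
print(units)                           # 391 blocks of 256 GPUs each
print(units * GPUS_PER_SCALABLE_UNIT)  # 100,096 GPUs when rounded to whole blocks
```

Shipping pre-validated blocks of a fixed size is what makes this arithmetic, rather than per-server integration work, the dominant planning step.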
Unlike a traditional cloud, an AI factory operates as a single unified system running a homogeneous workload, which improves performance and resource utilization.
Generative AI gives telcos the chance to move beyond pure connectivity and return to the role of service provider, for example with citizen chatbots, e-government systems, or distributed inference engines.
Telcos can offer country-specific language models that support local languages while ensuring data safety and digital sovereignty.
📌 The webinar highlights the major potential of AI Factories for the telco industry. With the example of 100,000 GPUs deployed in 122 days, Supermicro and Nvidia demonstrate they can deliver sovereign AI infrastructure that lets countries build their own language models and localized AI services, opening a new direction for the digital economy and data sovereignty.
Below are the four pillars of the AI strategy for telcos that Nvidia proposed in the webinar:
1. AI Factories: a new kind of data center purpose-built for generative AI, covering large language model training, fine-tuning, and inference.
An AI Factory works like a "factory for intelligence": it produces tokens from data the way an industrial factory produces goods.
It is the foundation for Sovereign AI, letting each country build language models suited to its own culture and language.
Telcos are positioned as national AI Factory providers thanks to existing infrastructure, data centers, and government relationships.
2. AI for telco operations: applying AI to optimize network management, technical support, and customer care.
Telcos use AI assistants, chatbots, and automation to support call-center agents or to offer self-service to users.
This cuts operating costs, improves the user experience, and speeds up resolution of network incidents.
This pillar is about applying AI inside the business to raise efficiency and lower costs.
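The cost-saving mechanism in this pillar can be sketched as a toy triage step in front of a support queue. This is a minimal illustration under stated assumptions, not a real product: the keyword rules below are a hypothetical stand-in for an actual AI assistant, and every name in the snippet is invented for the example.

```python
# Illustrative sketch only: a trivial triage step for a telco support queue.
# A real deployment would call an AI assistant / LLM endpoint; the keyword
# table here is a hypothetical stand-in for that model's intent detection.

SELF_SERVICE_INTENTS = {
    "balance": "Show account balance via the self-care app",
    "roaming": "Link to the roaming activation page",
    "top-up": "Offer a data top-up flow",
}

def triage(ticket: str) -> tuple[str, str]:
    """Route a ticket to self-service when a known intent matches,
    otherwise escalate to a human call-center agent."""
    text = ticket.lower()
    for intent, action in SELF_SERVICE_INTENTS.items():
        if intent in text:
            return ("self-service", action)
    return ("agent", "Queue for call-center operator")

print(triage("How do I check my balance?"))
print(triage("My line has been down since yesterday"))
```

Every ticket resolved by the self-service branch is one that never reaches an operator, which is the cost and experience improvement the pillar describes.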
3. AI-enabled RAN: integrating AI directly into the radio access network infrastructure, such as 5G or 6G.
Telcos deploy GPUs at base stations or at the edge to process data and run AI models locally (on-site inference).
They can then offer generative AI services directly from the network, for example inference for mobile AI applications.
Real-world example: Nvidia announced a collaboration with T-Mobile, Ericsson, and Nokia on deploying AI-enabled RAN.
4. AI network infrastructure: the hardware components that support AI, such as DPUs (data processing units), high-speed switches, and network security.
This is the core foundation that gives the network high-throughput, low-latency, secure data transport, all of which generative AI requires.
Nvidia provides a complete AI infrastructure ecosystem so telcos can deploy from the data center to the network edge.
Supermicro Webinar: AI Powered Telecom Infrastructure - YouTube
https://www.youtube.com/watch?v=LPvA3ziSVW8
Transcript:
(00:06) Hello, and welcome to our webinar. Thanks so much for listening in. I'm Bob Moore with Supermicro, and today we're going to be talking about how AI factories create new opportunities for telcos. I think you'll find this really interesting. I've got a couple of subject matter experts with me today, and they're going to help explain all of this and how AI factories are helping with data sovereignty, so that'll be quite interesting for you. Thanks for spending a little of your time with us today. We have a tight
(00:35) partnership with Nvidia, and have had for some 30 years, so I'm joined today by Nvidia as well as Supermicro, and I'm really appreciative of that. To get started, though, we're going to do a little poll up front so we have an idea of the knowledge level of the viewers watching us today. If you'll kindly take this poll: the question is, how familiar are you with AI factories? Take a minute while I do some introductions and some housekeeping, read through the A, B, C, D answers, and we'll
(01:10) see what the results look like. I'm anxious to see what everybody thinks about AI factories. While you're doing that, I'll tell you that we have more information, of course, on the tabs and on our website. There's a lot of information, including detailed white papers and data sheets, because we're only going to be able to scratch the surface in this webinar, so I'm sure you'll want more detailed information, and that's available through this webinar, on our website, and at Nvidia. So for more information,
(01:41) check there. Also, be sure to get your questions ready, because as we go through this webinar we'll take questions at the end. You can submit those online here, and I'll answer them with the experts toward the end. My guests today are Michael Clegg with Supermicro. Michael, we've done a few of these webinars; thanks again for joining me. "Good to be with you again. Looking forward to this; a very topical subject, it's going to be great." And a real subject matter expert from Nvidia, Joel.
(02:15) Thanks for being here. "Thank you, Bob, and thank you Michael and Supermicro as well for hosting this chat. I'm looking forward to the conversation." Okay, well, let's take a look at some of our results. We're getting those in, and interestingly enough it seems pretty evenly split, which is maybe a little unusual: about a quarter in each category. Some have already been implementing AI factories, about a quarter are actively exploring and planning, and, as a few more results come in, the
(02:49) largest group, about 40% it looks like, are aware of AI factories but haven't started any initiatives yet, and then 20% have little or no knowledge. So, Michael, what do you think, is this about what you expected? It's a new topic, so maybe that's not unusual. "I'm a little surprised that that many people have started already. I think we're going to focus a little on the very large training models here. We certainly see that AI is fairly well adopted among our customer base already, so that piece doesn't surprise
(03:21) me, but it's good to see that a lot of people are already engaged with AI." Any thoughts there, Joel? "Yeah, it's interesting to see people are familiar with the term, and I think we'll share a little more as we go along today." Okay, well, let's dive into it then without further ado. AI is transforming almost every industry, and telco is no different. Tell us, Joel, how is AI impacting the telco environment and reinventing telcos? "Yeah, well, I joined Nvidia in 2019 to really help telecommunication companies adopt
(04:01) and drive AI, and there is now a lot going on with telcos and AI. As you know, telcos have very complex networks and many subscribers, so they have multiple opportunities to apply AI: to drive down their costs, improve customer care, and also generate new revenue opportunities. At Nvidia, the way we think about this is through the four pillars, or four areas, highlighted here in this slide. On the left is AI factories, which is pretty much the topic of this call. This
(04:35) is about telcos creating a new business as sovereign AI factory and AI cloud providers for their market; we'll talk much more about that as we go along. The second one is AI for telco operations. As I mentioned, telcos have very complex, very large networks and many subscribers, and this is a great opportunity to apply AI. One example that is fairly common is how telcos are using AI chatbots and AI assistants to help their call center operators but
(05:10) also to create self-service capabilities. Basically, telcos can use AI for their own business, to improve their own business and their customer care; that is what this second area is about. The third one is AI-enabled RAN, AI-enabled radio access network. We actually had a key announcement yesterday in this space, together with T-Mobile, Ericsson, and Nokia, and I encourage you all to check it out. Basically, the idea behind AI-enabled RAN is that telcos can deploy accelerated compute, GPUs, at their network edge, on their sites, and that can
(05:47) be used not only to host their communication stack, 5G or 6G, but also to create new revenue opportunities, for example as providers of inference engines, inference services for the generative AI apps that mobile users are using. So this is a developing space, it's pretty fascinating, and I encourage you to check the announcement we had yesterday with T-Mobile, Ericsson, and Nokia. Finally, the one on the right is AI infrastructure. Obviously, the core business of telcos is the network,
(06:21) and we have an extensive portfolio of network solutions, including DPUs (data processing units), switches, and other security capabilities; that's what this fourth pillar is about. But today we'll really be talking about AI factories, and I'm looking forward to that." Okay, and it sounds like some of our viewers are a little familiar with that, and maybe we can do a webinar on some of these other areas, but let's dive into the AI factories. What are AI factories, Joel? "Yeah, AI Factory. I think it's a very catchy
(06:55) term, but basically this is a new type of data center that is designed and optimized for generative AI. It's basically a blueprint across data center compute, network, and storage, designed and optimized for generative AI and large language model workflows, for example foundation model training, fine-tuning, RAG (retrieval-augmented generation), and inference. So it's an end-to-end data center design optimized for those types of workflows. And why "factory", or AI factory? Because AI factories are the
(07:35) foundation of the new AI industrial revolution, and as in the previous industrial revolution, this is also based on a factory; but in this case it's an AI factory that is able to manufacture intelligence in the form of tokens. That's why we call it an AI factory." It's a cool way to portray it. Michael, anything to add to that? "So if we look at the difference between AI factories and the traditional cloud model, a traditional cloud model is much more heterogeneous, in both its workload and the number
(08:12) of server types. You really have multi-tenant systems running a variety of workloads across different time scales; it might just be doing scale-out, it might be running short-term demand. If you think of a factory, what does a factory do? It produces a single product, a single entity; that's what it does all day long. So when we think of an AI factory, we really bring in that homogeneity and scale. Doing these large AI models requires a lot of scale, you need a dedicated unit to do that, and these factories are
(08:40) designed to do training and inference very well. Everything is optimized around AI, especially the network, and you can do some network optimizations that are a little different. The entire factory, the entire entity, operates as a system, as a single entity. It may still be a little multi-tenanted, but those tenants will generally all be running some generative AI workload on the system. So the real difference today is a focus on a large installation designed to do
(09:10) one thing very well: generative AI." Okay, and Joel, back to you. Why is sovereign AI so important, why do we need sovereign AI factories to provide that, and why are telcos becoming AI factory providers? "All right. So, why do we need sovereign AI factories? We are all experiencing how generative AI is changing the way we live and work; I think we are all seeing what this new technology can do. Generative AI is really becoming the new global business platform, if you
(09:48) will, like the internet, the cloud, or even electricity before it. And every country, every nation, realizes that it is important for their national ecosystem to be able to participate in this new economy, this new platform, and in order to do that they need infrastructure. The infrastructure that enables that is AI factories. So every country wants to create AI factories to enable their national ecosystem, including the universities, the government itself, the
(10:21) national startups, and the enterprises, to really create and use this new technology and participate in this new economy. So that's pretty much what is behind this idea of sovereign AI factory providers: they are the companies enabling the country, the national ecosystem, to participate in this new economy. And I think what is fascinating, and the main topic of this webinar, is why telcos are becoming AI factory providers. Why telcos? Well, if you think about it, telcos are pretty much the trusted
(10:58) national infrastructure providers for their markets already; they provide essential communication services, so there is a natural fit there. They are the trusted national entities; that's one reason why telcos are becoming the sovereign AI factory providers in their markets. Telcos also happen to have data centers; they have lots of data centers, because that's how their core networking business is built. So we are seeing telcos retrofitting and modernizing those data centers to accommodate AI factories now, and
(11:31) that gives them an advantage, a time-to-market advantage; data centers are at a premium now. Telcos also have experience in managing large, complex infrastructures: every G, 5G, 6G, and so on, with large amounts of investment, managing and operating complex networks. So they do have all the essential capabilities, the foundations, to step into this new business. And it's pretty fascinating that in the last two years or so, the number of telcos we have seen stepping into this, investing, and
(12:07) now coming up with their production AI factory networks and infrastructure, including SoftBank in Japan, Singtel in Singapore, YTL in Malaysia, Indosat in Indonesia, Telenor in the Nordics in Europe, Swisscom: this is really happening, and happening at a fast pace, so this is pretty remarkable." Okay, got it. Michael, how is generative AI bringing telcos new services opportunities? "If you think over the past few years, telcos have sort of lamented, and in some ways accepted, the
(12:46) role of connectivity provider. But if you really think back to the old analog days of a telephone network, Ericsson described it as the most complex machine in the world, these very large installations, and the telcos were really in the business of services. They were voice service providers, connecting people to people or to services, and in effect the network was the service; or, more specifically, it was a platform over which you could run a number of related services, direct to end users or
(13:16) intelligent network services like credit card calling, voicemail, and various other features. Generative AI is an opportunity to do that again. It brings the telcos an opportunity to create another platform that can offer a host of related services, and as Joel said, telcos are well suited to execute on this. They are used to running very large installations, and this is a very network-intensive business: people will speak to their phones to get services, they will dial in to get advice. So it's one that speaks to the telcos' legacy and history,
(13:50) and one that they are in a good position to do. So I think generative AI, as that new platform and service, is a great opportunity for telcos to move up the value chain and get back into the end-user services business." Got it. Okay, and how about deploying this, Joel? What are some of the challenges that occur in building an AI factory, or that telcos might experience, and, more to the point, how can Nvidia and Supermicro help those telcos implement sovereign AI factories? "Right. We talk
(14:24) about AI factories as if this were something trivial, but it's remarkably complex technology. Just a few years ago, what we can do today was simply not possible, because of the amount of compute that is needed to do this and generate intelligence in the form of tokens. What is behind these AI factories is the ability to ingest pretty much all of the world's digital information, which measures in the hundreds of trillions of gigabytes, and then compress all that knowledge into a model, a model
(15:00) that, when presented with a question or a request, showcases intelligence and is able to react. That is pretty remarkable, and what is behind it is a complex infrastructure that needs to be designed in a certain way to make it feasible. So that is the technical and technology challenge; this is cutting-edge stuff, and obviously this is what Nvidia has been doing for the last decade or more, building the largest AI factories and AI supercomputers. And
(15:35) what we have done to help advance this is basically codify our experience, knowledge, and intellectual property in building these large AI factories into reference architectures, which become the blueprint for how our partners can go and build this infrastructure to offer AI training, AI inference, and AI fine-tuning capabilities to the market, to tenants. These reference architectures are available to our partners, of course, and because they are standardized, tested, and proven, this reduces
(16:10) what we call "time to first training": how quickly you can build this infrastructure, do the training that you want to do, and then put it to work. And obviously the way we do this is to work with partners like Supermicro, who are able to build, and have the technology to fulfill, these reference architectures across the multiple components that are needed, working very closely to basically build these reference architectures and make
(16:40) them real with Supermicro." Nice, and of course we've really prided ourselves, as I mentioned up front, on that collaboration. So Michael, can you talk a little about the collaboration with Nvidia and the implementation, maybe some of the rack-scale solutions that we've got? "If you start by going back and looking at the architecture, the architecture for an AI factory application is fairly well defined. There are a number of key subsystems: the compute block, the interconnect for that
(17:14) compute block (you're going to have thousands of servers connected together), a high-performance storage interconnect, because you're trying to access your data very quickly, and then a number of supporting networks in and around it. The key thing is that this is also designed to operate, as I said, as a single entity; it's an organism that is tightly coupled and works very well together. So, as Joel mentioned a reference architecture, Supermicro has been working very closely with Nvidia right
(17:43) from the outset in this space, and we have essentially instantiated that Nvidia reference architecture into a number of building blocks for the compute, networking, and storage that allow Supermicro to very quickly bring a complete installation to market, which we'll touch on in a few moments." Perfect, okay, we appreciate that, that's great. And how about, Joel or Michael, any technical insights on what's different from traditional cloud AI infrastructure? "Yeah, if you look at
(18:22) this: taking that architecture and mapping it into today's current-generation systems, the way Supermicro approached it is that you need to build this thing out at scale, so we subdivided it into a number of subsystems and created building blocks out of those. We create these scalable units; for example, we put 32 HGX systems together, and that becomes a self-defined block that is testable. We interface that to storage through a high-performance flash system, because there are
(18:55) some tight latency requirements Nvidia has on how quickly the GPUs must be able to access the storage, and then of course you need to bring all the data that you're constantly gathering into the network, so that's more long-term storage. A piece that is sometimes underrecognized, when people are looking at the servers, is the whole networking piece, and this is absolutely critical to the performance of the product. The east-west Ethernet flows within the data center can be optimized for AI workloads, and
(19:26) in particular you want to use a lot of RDMA. You can run that over InfiniBand as the cluster architecture, or you can run it over RoCE on an Ethernet network; but even then, the Ethernet network itself uses a well-optimized configuration. So again, unlike a traditional data center where you've got small working nodes, this whole thing operates as one big, tightly coupled, tightly interconnected system, and that's how we've instantiated the reference design into a rapidly deployable unit.
(19:59) Now, the scalable units are not only applicable to the design; you need to build these things and deploy them. If you're going to ship 100,000 GPUs to a site, one of the things our customers measure is how quickly they can get going, because they're spending a lot of capital to obtain these units, and what they're ultimately interested in is how quickly they can start to utilize them. So by building these scalable units, we can also pre-test them in the factory; they have a
(20:29) modular size so that each one can be pre-validated, pre-tested, and then shipped out and deployed as a block, and that's really what makes this happen very quickly." Okay, good. Joel, anything to add there? "No, I think Michael covered it. He talked about the scale that is needed, and the network behind it to make it happen. And that's because, if you think about the workloads, about what people are doing with this infrastructure: one is foundation model training, and in foundation model training you literally need
(21:04) thousands of GPUs, or tens of thousands of GPUs, operating together as a multi-GPU workload, because of the parallelism and all the techniques that you need to converge those training jobs in a feasible amount of time. And because of that, you need these networks, these fabrics, particularly the compute fabric, the east-west fabric that Michael highlighted, designed in a certain way: non-blocking, so that it doesn't starve the processing, the compute processing. So
(21:39) all of that is an integral part of this reference architecture; across compute, network, and storage, it is really thought through from the ground up to enable that type of workload, the most computationally intensive type of workload that you have ever seen." Okay, got it. So we're getting a little more technical, obviously, as you can see from the diagram we have here, so I'd remind our viewers that this might be the time to put some questions into the chat, or pose your questions on the webinar, as we start to get a
(22:10) little closer toward the end of this webinar. But it looks like we've got a great solution here, and you talked about relatively quick delivery, Michael. Joel, what about delivery, the challenges in the field, and that type of thing? "Yeah, you want to deliver, build, and have this infrastructure up and running as fast as you can, to make your investments productive. One of the metrics that we use to think through this is what we call TTFT, time to first training: how quickly you can
(22:45) build it and do your first training job on this infrastructure. That is one of the key KPIs here, and obviously the reference architectures that we provide are key there, giving you the recipes and the reference architecture to go do that. But then there is also a services component: how we really help build this infrastructure through the life cycle of design, implementation, operations, handover, and finally the first training. So all
(23:18) of this is tightly combined and part of the process that we drive with these partners. On this question about challenges from the field, I think we had a very good example, together with Supermicro, from xAI, Elon Musk's AI company. They built an AI factory the size of 100,000 H100 GPUs, one of the largest there is in the world: 100,000 H100 GPUs, a very large AI factory implementation, all built on a single RDMA fabric to enable
(24:01) foundation model training, the largest the world has seen. And the first training, in that case, was done 122 days after the start of the process, because of all of those things that we have: the reference architectures, the services, everything else, and the work that we do with Supermicro. So we compress that time, and that's what enables and makes that possible. It is certainly an important achievement." Okay, super, and that sounds like a pretty good delivery time; we may get some questions on that. Michael, do you
(24:35) agree with that example that Joel was just talking about? "Yeah, I fully agree. I think that project speaks to two things. One, it fully validated what we've been speaking about today. I know people get concerned about the complexity of these AI factories when they look at the scale; this really shows that reference architecture, that implementation of the architecture into physical hardware and scalable units, and then actually taking it out to the field with the model that we just showed on the previous slide;
(25:04) it all really comes together. But I think another key point to make is the close cooperation between Nvidia's and Supermicro's professional services teams out in the field. They work together throughout, and once you deploy the units you really have to commission them, bring them up, and test every network connection, because in an AI system one bad network connection can really slow your entire cluster; again, this thing works as one large single unit. So it's that teamwork
(25:32) that ran throughout, from the original design and the equipment development out to the deployment. But it shows we can turn these things up really quickly. I think when people look at this and wonder, is there something I should be concerned about, how am I going to get this done: we can bring you revenue in a very short time frame." Super, and on that, anything else unique about that project? "Joel mentioned that this was also a liquid-cooled design. One of the other aspects is the wiring interconnect. People
(26:06) don't think about networking as much in the data center; you often see pictures with a rat's nest of Ethernet cables, because, as I said, those things are constantly changing. That doesn't work in an AI factory. You really have to design your networking cabling to be highly optimized: as short as possible, routed as cleanly as possible, traceable in case something goes wrong. So we build a lot of that into the design of those cable units; they come pre-wired and pre-configured. But the other big thing is that these
(26:38) units generate a lot of heat and consume a fair amount of power, and to get to that scale there's a direct total-cost-of-ownership advantage if you can reduce the footprint inside the data center: you need a much smaller building, and we're talking about football-field-sized buildings here to hold all these systems. So you have a much smaller footprint, and bringing the servers together reduces the length of the cabling. You might think that's trivial, but these servers can have 8 or 12 cables coming out of them,
(27:08) and shortening those cables by 50% quickly adds up. It also means the complexity and cost of the transceivers you need add up, so you really want to shrink your server footprint down as small as possible. But then you have to get the heat out, and this is where liquid cooling has a great advantage: if you liquid-cool these devices, you can take that heat out and have a dense server configuration. Now, liquid cooling at this scale hasn't really been done before. Liquid cooling has obviously
(27:35) been around for a decade or more, but it has typically been in specialized installations, and now we want to make it standard practice, and Supermicro has done that. We actually designed many of the liquid-cooling components ourselves: we have our own CDUs, we have our own liquid-cooled direct-to-chip connectors, and we built our own cooling tower, which, by the way, is a non-evaporative cooling tower, so you're not constantly losing water as you're cooling. This whole system is designed by us, and because we designed it, we also take
(28:06) ownership of it. One problem with data centers today where they use liquid cooling is that they put a bunch of servers in and then some third party provides the cooling, and when things start to go wrong, it's the old question of one throat to choke. With Supermicro building this as an integral part of our AI factory design, we take ownership of all the servers as well as the cooling as well as the infrastructure, and that's a huge benefit to our customers." That is remarkable, and very much of note, that we take an end-to-
(28:36) end solution all the way from the cooling tower to the chip. You can see the Supermicro cooling towers out there, into the chiller unit that's in the rack, and right into the chip: a very unique, end-to-end, comprehensive solution that we're quite proud of. Okay, this has been fascinating for me, and educational; hopefully it has been for our viewers as well. Any concluding comments to wrap up? We'll start with Joel, and then Michael, you might add to that. "All right. Thank you, Bob and Michael, for hosting this
(29:06) conversation. As we've been discussing, AI presents a great new opportunity for telcos, both to use it to improve their operations and their customer care, and to generate new revenues and create more value. So far, if you think about the AI and generative AI space, a lot of the effort has been done by the AI pioneers, these companies that are training the very large foundational models and creating these amazing new apps that we are all using and enjoying.
(29:40) What these pioneers have done is create very popular and successful applications that are now reaching hundreds of millions of users. More and more people are using these apps, and behind them is a new infrastructure that needs to be built: an infrastructure that is able to serve those apps, that is able to basically provide tokens to those apps. Obviously, this can be a meaningful new opportunity for telcos, which have their regional data centers and edge data centers and can
(30:11) turn those into inference engines that become providers of tokens for the generative AI apps their consumers are using. This is something we are seeing, and we believe there is a bright future ahead for telcos that are able to go do that. Super. Michael? I agree. You know, generative AI, I think, is a once-in-a-generation opportunity for telecom operators to really get into a new vertical, a new class of services, at scale and with impact. Also, these operators are used to dealing with governments, and we
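To make the "providers of tokens" idea concrete (this sketch is not from the webinar; the endpoint URL and model name are hypothetical placeholders), a telco-hosted inference engine would typically expose an OpenAI-compatible HTTP API that consumer apps POST requests to and that is metered per token served:

```python
import json

# Hypothetical telco-hosted, OpenAI-compatible inference endpoint.
# URL and model name are illustrative placeholders only.
ENDPOINT = "https://inference.example-telco.local/v1/chat/completions"

def build_request(prompt: str, model: str = "sovereign-llm") -> str:
    """Build the JSON body a generative-AI app would POST to the
    telco's inference engine; billing is typically per token."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return json.dumps(body)

payload = build_request("How do I renew my driver's license?")
print(json.loads(payload)["model"])  # -> sovereign-llm
```

The point is that an existing edge data center plus GPUs plus a standard serving API is enough to turn a telco into an in-country token provider for local apps.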
(30:48) see that many of the applications around sovereign AI may have national or government opportunities related to them. And really, while the scale and scope make for complexity, with Nvidia and Colossus we have demonstrated that we have modularized and standardized these implementations, and we can quickly turn up an entire data center and start earning revenue. That's perfect. Okay, let's get to the questions, because I do see a few that have come in. From Alexander: he's asking what the cooling liquid is
(31:21) made of; apparently our direct, comprehensive cooling solution piqued some interest there. Is it tap water? I can say it's not, I know that much; it's treated water, so it's non-corrosive. Michael, anything you want to add to that? Yeah, I can't remember the specific chemical off the top of my head, but essentially it's like the antifreeze in your car: it's got a glycol mix in it for corrosion protection inside the system. So it's not straight tap water; it is treated and it is clean, but it is also not some special,
(31:52) esoteric fluid as you get in some other types of liquid cooling. Sorry, I didn't mean to cut you off. I know we're a little over time, but I wanted to catch maybe one or two more. Javier is asking, and Joel, maybe you can take this one, about the apps and use cases that telcos can sell. Any comments there? Yeah, we are seeing telcos building this infrastructure and providing these capabilities basically to the national ecosystem in the countries where they operate. In
(32:28) many cases, we're seeing telcos take a few steps further and help the nation create its own foundational model for its own language, leveraging their culture, their dialects, and all the information and data that the country has: creating a foundational model that then becomes the foundation for different services the country can create. For example, a chatbot to help citizens do whatever they need to do, renew driver's licenses, or anything for which they need to engage with the government. Beyond
(33:02) that, telcos are also providing infrastructure access, software and GPUs, to enable startups and enterprises to adopt generative AI and be part of this new ecosystem. You know, this makes me think that we need to have more webinars on this to explain some of those other parts of AI and telco that you were highlighting at the very beginning. I think we are out of time, but there was another question: what percentage of telcos do we think are deploying AI factories? Any real quick answers on that,
(33:35) Michael or Joel? We saw that in the poll early on, and it's pretty representative. Joel showed a slide where we've got a pretty high number, probably more than a dozen, that have started or have interest already, and we expect that will grow. I see there's another question: why sovereign AI? One point we didn't bring out about why telcos and nations: a lot of these LLMs have been trained in English; they're very English-centric. But one of the opportunities for governments is to offer services, for example inquiring about government
(34:07) services, your car license, your Social Security, your pension, or anything else that governments provide, and if you do that in a different country, you really need to train the model in its own local language. A lot of that data has high privacy and security requirements around it, so when we talk about sovereign, it's really about that data and that citizen information being contained in that specific country, and it needs to be delivered in a format citizens are used to, in
(34:35) their own natural language, or multiple national languages, depending on the country. That's why these factories need to be in country, and then we get to who are the best people to do that, and as you said, telcos are very well placed to be that developer. That's a key point, and thanks to Elena so much for that question; a really great question about why it's so important. Data sovereignty is key now and will be in the future. Okay, with that, I've taken us a little bit over time, but I really appreciate everybody watching and
(35:04) hanging with us. For more questions and more detail, check the website. Thanks for the questions that you submitted, thanks for watching, and from all of us now I'll say so long and sign off. Thank you again, bye-bye, thank you everybody.