OK, let's get started. I'm Zhou Jie, and my main topic today is the evolution of deep learning platforms.

First, a basic introduction to deep learning. It powers a range of new technologies: natural language processing; visual technology, like facial recognition; and voice technology, such as voice recognition. If you have a smart speaker at home, you communicate with it through voice recognition.

Baidu has its own internal deep learning platform, built around PaddlePaddle, Baidu's independently developed deep learning framework. The platform offers data training, model training and evaluation, and inference. Why do we need a platform around the training framework? Because resource utilization is important: with this platform we can improve resource utilization. It also helps to process the model pipeline and manage tasks, and it offers multi-tenancy and high security. It supports our business lines, including search, advertisement, and NLP.

Next, why does our deep learning platform need Kubernetes? First of all, in this kind of system there are many roles and many jobs involved. It involves data engineers, who collect, analyze, and store data from users; data scientists, who need to train and evaluate models; and operations engineers, who need to create the corresponding services and keep the whole process running. So how do we support all of them? We need capabilities in terms of registry publishing, image building, deployment, load balancing, and service discovery, and Kubernetes provides leading support for these.

This is what we did before, and what we have already shared publicly. It includes two parts. The first is the Paddle CRD: on top of it we support automatic fault tolerance for training jobs. For example, in a long-running training job, the Paddle trainers, the Paddle master, and the parameter servers can all be deployed through the CRD, and based on this Paddle operator we can make the job fault tolerant. Its main function is job submission; Paddle provides two submission interfaces, including this job interface.
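The fault-tolerant training job described above could be expressed as a custom resource. This is a hypothetical sketch only: the group, version, and field names below are illustrative and do not reproduce the actual Paddle operator schema.

```python
# Hypothetical sketch of a custom-resource manifest for a distributed,
# fault-tolerant Paddle training job. All field names are illustrative.
def make_training_job(name, trainers, pservers, image):
    """Build a manifest dict for a distributed training job CRD."""
    return {
        "apiVersion": "paddlepaddle.org/v1",     # assumed group/version
        "kind": "TrainingJob",
        "metadata": {"name": name},
        "spec": {
            "image": image,
            "trainer": {"replicas": trainers},   # workers running the model
            "pserver": {"replicas": pservers},   # parameter servers
            "master":  {"replicas": 1},          # coordinates fault tolerance
            "faultTolerant": True,               # restart failed trainers
        },
    }

job = make_training_job("demo", trainers=4, pservers=2, image="paddle:latest")
```

An operator watching this resource would create the trainer, pserver, and master pods and restart trainers that fail, which is the fault tolerance mentioned above.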
If you have used the submission interface, you know roughly how it works; in some cases it falls short, and that is what brought us to the community. If you are interested, you can look into it there.

Now let me talk about the problems. Maintaining these three roles is very costly, because each of them raises its own questions and concerns, and these come from ordinary users, not infrastructure experts. There is also the question of business growth: when a business line starts, people don't know how large it will become, so we don't know how much capacity it will need. If we build the cluster too big, we waste money; if we build it too small, we have to expand it later, and we cannot simply add capacity on demand. These plans also run into isolation problems: resources planned for one business line become islands that other lines cannot use.

So we looked for a new way of running the business, and we started to adopt the cloud native model in our platform. We don't only use Kubernetes itself; we also make heavy use of CRDs. If you listened to the keynotes yesterday, you heard that CRDs are now a major direction: people use the CRD pattern to organize resources and expose services. Across all our business lines we began adopting the CRD model, and on top of that we began adopting serverless, supporting the SaaS layer above.

Why do we need serverless now, and why Knative? Knative is a Kubernetes-based layer; you can find it described on its website. It unifies PaaS and serverless workloads, and it has features like automatic building of images, autoscaling, route management, canary publishing, and event-driven processing. These may be hard to appreciate right now, but once you start using Knative, you will find these designs very valuable. And the changes can take place step by step.

This is also from the website: Knative has three parts — Build, Serving, and Eventing. Build provides templates and is combined with service accounts. Serving has autoscaling, including scale to zero: sometimes your application doesn't receive any traffic or workload, and in that case you can just scale your service to zero. It also offers routing based on Istio, which helps you realize canary publishing. When it comes to Eventing, it has decoupled pipelines, it supports third-party sources such as GitHub, and it works across platforms — services such as AWS can be connected to the cloud. So that is the basic introduction to Knative.
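The Serving features just listed show up directly in the manifest. As a minimal sketch, this builds a Knative Service that opts into scale to zero via the standard autoscaling annotation (scale to zero is the default; the annotation makes it explicit):

```python
def knative_service(name, image):
    """Minimal Knative Service manifest as a plain dict.

    The min-scale annotation of "0" allows the revision to scale
    down to zero pods when it receives no traffic.
    """
    return {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "metadata": {"annotations": {
                    "autoscaling.knative.dev/min-scale": "0",
                }},
                "spec": {"containers": [{"image": image}]},
            },
        },
    }

svc = knative_service("hello", "gcr.io/demo/hello:latest")
```

Applying a manifest like this is all that is needed; the routing, revisioning, and autoscaling described above are derived from it automatically.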
And why do we need Knative? In the past, container runtimes set out around 2012; then, around 2015, Kubernetes came into play; and now Knative takes the stage. So you can see the changes don't happen overnight; it's a step-by-step process.

Knative addresses the issue of vendor lock-in. Earlier serverless projects did not define a standard API or custom resources, whereas Knative comes with a community ecosystem, its components are decoupled, and it is easy to scale. These are the three reasons we chose Knative.

I've just explained why we chose Knative; now I would like to introduce you to its three functions and how we use each of them.

First, Build. This is the old architecture, the default one; if you are interested, you can have a look at it. On the right-hand side is the build part, and Knative Build is different from ours. What are the differences? Our build lives outside the Kubernetes system: we write the code, run the project individually on our own machines, and drive everything with scripts. We don't use Kubernetes to maintain it, nor for authentication — it is an internal system, so we didn't have a very high requirement there. It is a docker build wrapped in scripts, with environment variables injected, and then we run it.

And this is an example of the new Knative Build. Have a look at the lower part: in the manifest we generate a secret, and this secret can be bound to a service account. We then use the service account name to authenticate against both the source repository and the image registry, so the build pulls the source and pushes the generated image. But we found a problem: Knative's build process in this form is a bit cumbersome.
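The secret-plus-service-account pattern just described looked roughly like the sketch below. This follows the now-deprecated Knative Build API as a sketch; the builder image and account name are illustrative assumptions, not the speaker's actual configuration.

```python
def knative_build(name, git_url, image):
    """Knative Build manifest: a service account (bound elsewhere to a
    registry secret) authenticates the git pull and the image push."""
    return {
        "apiVersion": "build.knative.dev/v1alpha1",
        "kind": "Build",
        "metadata": {"name": name},
        "spec": {
            "serviceAccountName": "build-bot",  # bound to the generated secret
            "source": {"git": {"url": git_url, "revision": "master"}},
            "steps": [{
                "name": "build-and-push",
                "image": "gcr.io/kaniko-project/executor",  # example builder
                "args": [f"--destination={image}"],
            }],
        },
    }

b = knative_build("demo-build",
                  "https://github.com/example/app.git",
                  "registry.example.com/app:v1")
```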
So later we tried Tekton. If you attended earlier sessions, you will have heard of this project: Tekton Pipelines is a new project, and in the future it will replace the Build part of Knative. The architecture is very intuitive. On the right-hand side, it uses a service account for authentication; your input is defined as pulling from GitHub or another source, and after several steps you end up with images in your own image registry. All of these steps are executed inside the Kubernetes cluster, and when running them you can bind a volume, such as NFS, to share storage across the steps. That means the build is done entirely within the Kubernetes environment. That said, we don't use Build very often, because our build requirements are really simple.

So now I would like to elaborate on Knative Serving. Some of you may be familiar with it, some of you may not. If you are, have a look at the manifest on the right-hand side: it creates a CRD, and in that CRD there is a Service definition, which is divided into a Route and a Configuration. I will come back to the Configuration later. In the Route, you realize concepts such as traffic splitting, traffic control, and rollout strategy, and it works together with the autoscaler.

If you are familiar with Kubernetes, let me tell you what kinds of resources are involved. First, the Configuration, which is the self-defined, customized CRD. When it is triggered, a Deployment is scheduled by Kubernetes, as well as the HPA — the Horizontal Pod Autoscaler, which is also one of the Kubernetes resources. There is also the Istio VirtualService, which creates the routing rules. I have just mentioned the Configuration and the Route; so now let me tell you what the Route is and what the Configuration is.
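The resource fan-out just described can be summarized in code. The mapping below is a sketch: the parent/child relationships match the description above, but the exact child object names are illustrative.

```python
def expand_service(svc_name):
    """Sketch of the child resources Knative derives from one Service.

    A Service splits into a Configuration (which owns Revisions) and a
    Route (which sends traffic to Revisions); each Revision is backed by
    a Deployment, an autoscaler, and Istio routing rules.
    """
    return {
        "Configuration": svc_name,                       # owns revisions
        "Route": svc_name,                               # traffic -> revisions
        "Revision": f"{svc_name}-00001",                 # immutable snapshot
        "Deployment": f"{svc_name}-00001-deployment",    # runs the pods
        "Autoscaler": "HPA/KPA",                         # scales the deployment
        "VirtualService": f"{svc_name}-ingress",         # Istio routing (name illustrative)
    }

children = expand_service("demo")
```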
In the picture on the right-hand side, the Configuration defines a container — actually, that part is missing from the slide — and it can also define a number of variables. Image plus configuration yields a revision: any update of the image, or any change of the configuration, generates a new revision, a new version. As for the Route, its purpose is to direct the traffic to the corresponding version. Those are the functions of the Route and the Configuration, and you can see here that 100% of the traffic is directed to one revision.

Now I would like to give you a deeper dive into Serving. At ingress, when traffic enters the system, it first lands on the Activator. The Activator notifies the autoscaler, which scales the revision up from zero, and the traffic is then forwarded to the first pod that comes up. On the right-hand side, this kind of architecture is not available in plain Kubernetes; it is an advantage of Knative. You can use revision names and percentages to define the traffic — for example, how much traffic will be directed to revision one and how much to revision two. If you already know Istio, you can realize that with Istio as well, but Knative provides native support for it, and performance is preserved at the same time.

Let's have a look at the ingress further. The ingress also has other routing features, for example load balancing, routing based on headers, and header rewriting. With the header-based model, you can do canary releases using Knative — for example, publish to selected users and collect their feedback.
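The revision-and-percentage traffic model described above is expressed as the `traffic` block of a Knative Service or Route. A minimal sketch:

```python
def traffic_split(revisions):
    """Build a Knative `traffic` block from a mapping of
    revision name -> percent. Percentages must sum to 100."""
    assert sum(revisions.values()) == 100, "traffic must total 100%"
    return [{"revisionName": name, "percent": pct}
            for name, pct in revisions.items()]

# Canary: keep 90% on the old revision, send 10% to the new one.
traffic = traffic_split({"demo-00001": 90, "demo-00002": 10})
```

Shifting the split gradually (90/10, then 50/50, then 0/100) is the canary publishing mentioned above.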
Now, how did we do autoscaling in the old way? In the past we provisioned a small, fixed number of instances, one or two replicas, and that is not what we want in the clusters: as cases were added, we scaled by hand to reach a specific level, just to allow tasks to run as soon as possible. At the very beginning we tried to do it in the Paddle operator, but it was not universal. Knative already provides a better way: it realizes concurrency-level autoscaling at the request level. If the concurrency reaches 10 or goes over 10, the revision scales up, to as many as 100 instances; if the load is reviewed and found lower than 10, it scales back down to 1. And you can build your own autoscaling on top: for example, if you are sensitive to some parameter, and that number exceeds a specific threshold, you can trigger the autoscaler with it.

Later we needed to tackle one challenge, which is cold start. Scale to zero is appealing: in our service, if there is no traffic at all, we don't need any instance at all, so we introduced the scale-to-zero solution. But it has a problem, which is the speed of the cold start: when traffic arrives and the system is not yet activated, it causes a delay in the service. This challenge has not been solved properly yet. Currently we have several ideas. The first one is to improve the instance start-up speed; the second is to pre-warm the image, so that the instance can be activated faster. There are also tactics such as having the autoscaler start from one warm instance instead of from zero.

Next, GPU resources. Our machines come in different types — 16-card machines, for example, and we add labels for the 8-card or 4-card types as well, so jobs land on the right machine type. In the old way, we bound each task to specific cards, and that leads to fragmented resources: for example, a new machine is added, a user needs 2 GPUs, and even though 2 GPUs are free in total, they are scattered, so there is no single place to schedule the task. We also ran into quota limits, so we needed more options. Our new option is to treat the cards as schedulable resources: we use Kubernetes extended resources, so that when a resource is first scheduled, the controller converts it into cards.
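Requesting whole cards as an extended resource, rather than binding tasks to specific cards, looks like this. A minimal sketch using the standard `nvidia.com/gpu` extended resource (the pod and image names are illustrative):

```python
def gpu_pod(name, image, gpus):
    """Pod manifest requesting whole GPUs via the extended resource
    `nvidia.com/gpu`. The scheduler places the pod on a node with
    enough free cards, avoiding manual card binding and fragmentation."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {"containers": [{
            "name": "train",
            "image": image,
            # GPUs are requested in limits only; they cannot be overcommitted.
            "resources": {"limits": {"nvidia.com/gpu": gpus}},
        }]},
    }

pod = gpu_pod("train-1", "paddle:latest", gpus=2)
```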
We have also done some of the conversion ourselves: we convert the resources into cards and then schedule on that basis. That covers the conversion of resources. For the network, we keep the previous approach: when a pod is rescheduled, it keeps working normally, because we pin the IP that was generated for it. So even though it is still a centralized network, it has its stability. For such a model, starting from the cloud, the resources are scheduled pod by pod.

Last but not least, I want to talk about Knative Eventing. In the interest of time, I will only give you a brief introduction. If you have learned an event-based language, you will know the mailbox model: through the mailbox, components communicate with each other. Knative Eventing follows a similar approach: all the components communicate through a loosely coupled, event-driven system. It has declarative binding — declarative binding between event producers and event-consuming services. After this declarative binding, the services are bound, and events flow across them. I will skip this part. It also uses CloudEvents. CloudEvents is like a common protocol for events: through it, operations can be done across platforms and across services. We plan to adopt CloudEvents further in our platform as well.

Due to the time constraints, that concludes my presentation. Please feel free to raise your hand.

Q: Just now you mentioned Serving. It seems Serving is used mostly for inference — but what about training?

A: Serving is not applicable for training at the moment. In the past, we tried to use Serving for training, but we found that some models couldn't match our Paddle operator, because of the job fault tolerance, which couldn't be realized in Serving.

Q: And for the resource pool, is it local or remote?

A: It's remote.

Q: Okay, thank you. Knative has a good advantage: when I don't use it, I don't need to deploy the resources.
So I would assume that in your case, users need to define the resources themselves?

A: Let me rephrase your question first. You mean that when we use a resource, we need to define the resource. With Knative, we don't need to define the resources ourselves; but if you want resources deployed well, I think you still need to do it on your own — users have their own idea about the resources for a pod. Let me see if I got your question right: you mean we shouldn't need to set the memory request or limit, right? To my knowledge, this is an advantage Knative is heading toward; the community is working on it. In the future, vertical pod autoscaling will be established — I'm not sure if you have heard of it; it is about scaling a pod vertically. It's not delivered or realized yet, but it is being worked on. In that scenario, if you don't think the pod is big enough for you, it can be scaled up vertically, to 8GB, something like that.

Q: You mentioned the GPUs.

A: Let me go back to this slide. This one, right? Okay. In the past, you needed to bind a task to cards, and that produced the fragmented resources. So here is the definition: when a pod is generated, it is an 8-GPU pod — the virtual machine behind it has 8 cards. I mean, it is a big pool, and it has a lot of resources in it; in this scenario, it works like a pool. Then this part is the user cluster. Of course, we hope that all users will choose the resource pool over the user cluster. If the pool is used — and because it is isolated, in the data center you may deploy the resources in an isolated fashion — then you can apply for this 8-GPU instance even if, physically, let's say, we have 12 cards. That's the logic. With extended resources, it helps you to do this resource distribution better: you define quotas per namespace, and after users run them up, because there's a limit, they couldn't apply for more; and due to the security concerns, they couldn't use anyone else's, so there was no alternative for them.
But now, in this new scenario — which is not realized or launched yet — as it says here, all the resources in the pool are available in a safe manner. Then there is no namespace limitation, and in a way, the pool is infinite. So with this infinite pool, there's no conflict, right? Yes, that's our hope; that's why we are working on it. Okay, last question.