Deep Network - deep learning model analysis / network communication / camera 3A tuning

Kernel Porting/Linux

For Korea's fabless industry to grow like Nvidia, it needs to develop operation software libraries for the NPUs it designs, comparable to Nvidia's GPU software libraries.

파란새 2024. 3. 10. 15:20

I have been working in the IT field for 30 years. Lately there has been a buzz about Nvidia's market capitalization reaching roughly 2 trillion dollars. Nvidia has its GPUs, such as the V100, A100, and H100, manufactured at TSMC in Taiwan, and coverage of super-large language models is flooding the media in Korea and abroad. Global companies are desperate to secure the core technology behind super-large models like GPT-3.5.

In the case of GPT-3.5, the detailed model architecture is confidential, and as far as I know the architecture of GPT-3 has not been disclosed either. That is why EleutherAI trained and released GPT-J as an open-source counterpart to GPT-3, and quite a few teams seem to be preparing development based on it, because this model makes it possible to understand what structure GPT-3 has and what characteristics follow from that structure. GPT-J and GPT-NeoX are open-source GPT models built by EleutherAI, an open-source AI organization formed by volunteer researchers, engineers, and developers, known for releasing large language models (LLMs) as open source.

I also have a considerable understanding of the detailed architecture of GPT-3, which is needed to understand and distill the model structure of GPT-3.5 or GPT-3, and of how to apply and implement it with the TensorFlow API. After studying super-large models for about three years, however, I felt I also needed to understand how to operate a GPT-3 model on a cloud GPU server and how to run the TensorFlow development environment on A100 / H100 GPUs there. Implementing this requires Docker design technology: Nvidia GPUs can be operated through Docker, and operating an Nvidia GPU through Docker requires software that drives it, which is a function of Nvidia's libraries (a minimal sketch follows below).

These days there is a boom in Korea's fabless industry around building NPUs. For these companies to grow like Nvidia, they need to develop the equivalent of those Nvidia library functions - in their case, an operation library for the NPU (Neural Processing Unit) they have developed - so that their chips can be operated the way Nvidia's GPUs are. I have a plan for this, but since it has not yet been made known, no one has proposed jointly identifying the technical issues with me. I hope you will contact me after seeing this blog.
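The point about running TensorFlow on A100 / H100 GPUs through Docker can be made concrete with a small check. Below is a minimal sketch, assuming the host has the NVIDIA driver and the NVIDIA Container Toolkit installed; the NGC image tag and the script name check_gpu.py are illustrative choices, not something prescribed in this post.

```python
# Minimal sketch: verify that TensorFlow inside an NVIDIA GPU container can see
# the A100 / H100 device. Assumes the NVIDIA driver and the NVIDIA Container
# Toolkit are installed on the host; the image tag below is illustrative.
#
# Example launch (illustrative):
#   docker run --gpus all --rm -it nvcr.io/nvidia/tensorflow:24.01-tf2-py3 python check_gpu.py

import tensorflow as tf

# List the GPUs that the TensorFlow runtime can actually use inside the container.
gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {gpus}")

if gpus:
    # Run a tiny matrix multiply on the first GPU to confirm the device works end to end.
    with tf.device("/GPU:0"):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)
    print("Matmul executed on:", c.device)
else:
    print("No GPU visible - check the --gpus flag and the NVIDIA Container Toolkit setup.")
```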
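As for the NPU operation library that Korea's fabless companies would need, here is a purely hypothetical sketch of what a thin Python binding over such a vendor runtime could look like, by analogy with how Nvidia GPUs are driven through vendor libraries such as libcudart. The library name libnpu_runtime.so and the functions npuGetDeviceCount and npuRunGraph are invented for illustration only and do not correspond to any real vendor API.

```python
# Hypothetical sketch only: a thin binding over an NPU "operation library".
# Every name below (libnpu_runtime.so, npuGetDeviceCount, npuRunGraph) is an
# assumption made for illustration, not a real vendor API.

import ctypes


def load_npu_runtime(lib_name: str = "libnpu_runtime.so"):
    """Try to load the (hypothetical) vendor NPU runtime; return None if absent."""
    try:
        lib = ctypes.CDLL(lib_name)
    except OSError:
        return None
    # C signatures we assume such a vendor library would expose.
    lib.npuGetDeviceCount.restype = ctypes.c_int
    lib.npuRunGraph.argtypes = [ctypes.c_char_p]  # path to a compiled model graph
    lib.npuRunGraph.restype = ctypes.c_int        # 0 on success
    return lib


if __name__ == "__main__":
    runtime = load_npu_runtime()
    if runtime is None:
        print("NPU runtime not found - this is only an illustrative interface sketch.")
    else:
        print("NPU devices visible:", runtime.npuGetDeviceCount())
```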

 

Deep Network, a one-person startup specializing in consulting for super-large language models  

E-mail : sayhi7@daum.net    

Representative of a one-person startup /  SeokWeon Jang