I am 60 years old this year and have worked in the IT and telecommunications field for 30 years. About four years ago, I began to feel a real sense of crisis that AI would dominate everything. So, at my age, I started reading research papers, drawing on the work I had done in an AI lab during graduate school in the 1980s. Since my knowledge was based on AI techniques learned 40 years ago, I ended up analyzing how deep learning papers were implemented through the lens of speech and image recognition techniques from that era. After analyzing papers this way for some time, I realized there was indeed a significant gap between the direction of current deep learning papers and the methods I was familiar with. Even the head of AI at a major corporate lab told me that my level was not up to par because I analyzed AI papers based on outdated techniques. Yet until about five years ago, major corporate labs implemented AI in much the same way I did.

The reason I mistakenly believed my approach was correct is that when Google announced the Transformer model around 2017, I dismissed it as just one of many papers from American corporations. I did not realize that performance like ChatGPT's could be achieved with the Transformer architecture. Now that I have grasped the know-how of deep learning implementation, I want to share what I have come to understand clearly. American big tech companies can be ambitious about being number one in AI because achieving the performance of a massive model like an LLM requires an enormous investment in AI infrastructure. To aim for number one in the world with an LLM, the first requirement is securing the investment for that infrastructure. Training an LLM on massive amounts of data inevitably involves trial and error, and to shorten the development period that trial and error causes, it is natural to secure several times more AI infrastructure, namely cloud deep learning servers, than competitors have.

I, too, have analyzed hundreds of papers and examined deep learning source code to build up implementation know-how. However, because my experience is less extensive than that of researchers in major corporate AI labs, it takes me more time, which is why no special interest has been shown in me. Sam Altman, even though OpenAI is currently number one in AI, talks about receiving investment from Middle Eastern oil money to solidify that position for the future. Whether for Korean or American corporations, securing massive funds like Middle Eastern oil money is everything when it comes to being number one in AI.

Deep Network, a one-person startup specializing in consulting for super-large language models 
E-mail: sayhi7@daum.net
Representative of a one-person startup / SeokWeon Jang
