Cryptopolitan
2026-01-05 15:45:31

DeepSeek’s mHC debut meets skepticism ahead of peer validation

At a time of mounting concern over the growing cost of developing and maintaining AI and the limited supply of hardware, DeepSeek has presented a new approach to building and scaling artificial intelligence (AI). The China-based start-up believes it can create significantly better AI models without adding more chips and driving up power consumption. Although the proposed mHC concept has attracted significant attention from researchers, it is generally considered to be at an early stage, and further work will be needed to determine how well the approach scales to larger AI systems. A technical paper detailing the mHC concept was released last week, co-authored by Liang Wenfeng, DeepSeek's founder and CEO.

DeepSeek rethinks network design to scale AI

A central component of the work is a re-evaluation of how information is passed between the layers of a deep neural network. Each layer processes information and hands it on to the next, and adding shortcut connections that carry a layer's input directly forward produces what is known as a residual network (ResNet). Developed by Microsoft Research's Kaiming He and colleagues roughly ten years ago, ResNets became a foundation of many of today's most advanced AI systems.

DeepSeek's concept builds on Hyper-Connections, an idea ByteDance introduced in 2024. Hyper-Connections let information travel along multiple routes through a network rather than a single main path, which can speed up learning and make it richer (a simplified code sketch contrasting the two designs appears at the end of this article). They can, however, also cause training problems, with models becoming unstable or failing outright.

According to Song Linqi of City University of Hong Kong, DeepSeek's research is a progression of an existing idea, in keeping with how DeepSeek builds on other companies' work rather than inventing from the ground up. Song compared ResNet to a one-lane expressway and Hyper-Connections to a multi-lane one, but cautioned that more lanes without proper rules can mean more collisions.

Professor Guo Song of the Hong Kong University of Science and Technology believes the paper may signal a shift in how AI research is conducted: instead of continuing to make small modifications to existing model designs, researchers may move towards developing new models grounded in theoretical constructs.

Researchers test mHC but raise practical concerns

Despite the excitement around mHC as a milestone for deep learning, experts stress that the research is far from finished. DeepSeek's experiments used only four parallel paths and models with up to 27 billion parameters. "The experiments validated models up to 27 billion parameters, but how would it perform on today's frontier models that are an order of magnitude larger?" Professor Guo Song said. Today's leading AI models typically have hundreds of billions of parameters, compared with the roughly 30 billion that was standard only a few years ago. Guo added that no one can yet say whether mHC will hold up at the frontier of AI technology.
He also noted that the infrastructure mHC requires may be too demanding for smaller research institutions and difficult for companies to deploy on mobile devices. As Cryptopolitan has reported, DeepSeek rose to prominence with the release of its DeepSeek V3 large language model, followed only a couple of weeks later by its DeepSeek-R1 reasoning model. In benchmark tests, both models matched or exceeded competing systems despite being trained with only a fraction of the resources used for rival large language models.
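For readers who want a concrete picture of the two designs discussed above, the sketch below contrasts a standard ResNet-style residual block (one stream) with a simplified hyper-connection-style block that keeps several parallel streams and mixes them with learnable weights. This is an illustrative toy in PyTorch, not DeepSeek's mHC implementation or ByteDance's exact formulation; the class names, the four-stream default, and the read/write mixing scheme are assumptions made for the example.

```python
# Illustrative sketch only (not DeepSeek's mHC code): a single-stream residual
# block versus a multi-stream block with learnable mixing, as a rough analogue
# of the "one-lane vs multi-lane expressway" comparison in the article.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Standard residual connection: one stream, output = x + f(x)."""
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.f(x)  # the single "lane"


class HyperConnectionBlock(nn.Module):
    """Simplified multi-stream block: n parallel residual streams.

    The layer reads a weighted mix of the streams, applies its transform,
    and writes the result back into every stream with learnable weights.
    The stream count and mixing scheme here are assumptions for illustration.
    """
    def __init__(self, dim: int, n_streams: int = 4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        # learnable read/write weights over the n parallel "lanes"
        self.read = nn.Parameter(torch.full((n_streams,), 1.0 / n_streams))
        self.write = nn.Parameter(torch.ones(n_streams))

    def forward(self, streams: torch.Tensor) -> torch.Tensor:
        # streams: (n_streams, batch, dim)
        mixed = (self.read[:, None, None] * streams).sum(dim=0)   # combine lanes
        update = self.f(mixed)                                    # layer transform
        return streams + self.write[:, None, None] * update       # write to all lanes


# Usage sketch: four streams, echoing the four-path configuration the article mentions.
x = torch.randn(8, 256)                    # batch of 8 token embeddings
y = ResidualBlock(256)(x)                  # single-lane residual path
streams = x.unsqueeze(0).repeat(4, 1, 1)   # replicate the input into 4 lanes
out = HyperConnectionBlock(256, n_streams=4)(streams)
print(y.shape, out.shape)                  # torch.Size([8, 256]) torch.Size([4, 8, 256])
```

The extra streams correspond to the extra lanes in Song Linqi's analogy; the article's point is that such designs need additional rules to train stably, which is the problem mHC is reported to address.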
