LG AI Tops Korea’s Sovereign AI Project as Naver Cloud Fails to Advance, MSIT Confirms

On January 15, 2026, the Ministry of Science and ICT (MSIT) announced the results of the first-stage evaluation for its “Sovereign AI Foundation Model Project.”

 

In the briefing, Second Vice Minister Je-myung Ryu stated that LG AI Research secured the top overall ranking on the strength of its performance, while Naver Cloud failed to meet the project’s “sovereignty” requirements and therefore will not advance to the second stage. The ensuing Q&A dug into the specifics: the encoder-weight issue cited as the reason for Naver Cloud’s elimination, plans for an additional open call to fill the now-vacant slot, and the performance gap against global big-tech models. Based on this evaluation, the government plans to advance three teams (LG AI Research, Upstage, and SK Telecom) first and then quickly select one additional team, sustaining the competition it hopes will propel Korea into the world’s “AI top three.”

 

Below is the full Q&A transcript from MSIT’s briefing.

 

I’m curious about the selection criteria and review timing for recruiting one additional team. Will companies that were eliminated in the preliminary screening also have another chance?


“A situation arose that we did not anticipate, so we will go back to the project’s original design and complete the administrative procedures as quickly as possible. This second-stage participation opportunity will be open not only to the 10 consortia that did not make it through the first-stage evaluation, but also to any company capable of newly forming a consortium. We will ensure the notice is posted at the earliest possible time.”

 

Regarding Naver Cloud’s elimination: could you explain in concrete, technical terms which requirements fell short—such as the encoder weights?

 

“The core requirement of a sovereign AI model is that, after initializing the weights, the team conducts the training itself and forms and optimizes those weights through that process. In its technical report, Naver explicitly stated that it used the weights of an existing open model as-is for its video and audio encoders. This does not align with the project’s essential requirement, namely that the model be designed and trained by the team itself, and many expert evaluators also pointed to this as a technical limitation.”

 

What does the roadmap look like going forward—such as how the additional team will be selected and the schedule for the second evaluation?


“The existing three teams will begin Stage 2 immediately. Once the results are finalized after the 10-day objection period for eliminated participants, we will proceed with an open call right away. To use the leased GPU resources efficiently, we will start the three teams first, while designing the program so that the additionally selected company is also guaranteed the same support conditions—such as the overall project duration and the amount of GPU resources provided.”

 

Is it completely impossible to rely on an external model for the encoder? Also, what form will the ‘second-chance’ process take?


“Using external encoders is a common approach during development, but in this case the issue was that the weights were used as-is, in a frozen state, without ever being updated. On that basis, we determined it would be difficult to recognize the system as a sovereign foundation model. Rather than a ‘second-chance round,’ this is an additional open call intended to give all capable companies another opportunity to fill the vacant slot.”
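To make the frozen-weight distinction concrete, the following is a minimal PyTorch sketch; it is not MSIT’s evaluation code or any team’s actual architecture, and ToyEncoder, the checkpoint path, and the placeholder loss are invented for illustration. It contrasts reusing a pretrained encoder exactly as released (frozen) with initializing from those weights and letting one’s own training update them, the behavior the briefing describes as forming and optimizing the weights oneself.

```python
# Minimal sketch only: contrasts (A) a pretrained encoder reused as-is (frozen)
# with (B) one whose weights are updated during the team's own training.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for a pretrained open-source audio/video encoder (invented for illustration)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(80, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def set_trainable(module: nn.Module, trainable: bool) -> None:
    """Freeze or unfreeze every parameter of the module."""
    for p in module.parameters():
        p.requires_grad = trainable

encoder = ToyEncoder()
# encoder.load_state_dict(torch.load("open_encoder.pt"))  # hypothetical open-model checkpoint

FREEZE_ENCODER = True  # (A) True: released weights stay untouched; (B) False: weights are trained further
set_trainable(encoder, trainable=not FREEZE_ENCODER)

features = encoder(torch.randn(4, 80))   # dummy batch standing in for encoded audio frames
loss = features.pow(2).mean()            # placeholder objective, illustration only

if loss.requires_grad:                   # gradients exist only in case (B)
    optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-4)
    loss.backward()
    optimizer.step()                     # only here do the encoder weights actually change
```

With FREEZE_ENCODER set to True, no gradients ever reach the encoder and its weights never deviate from the released values; setting it to False lets the team’s own training loop reshape them.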

 

How will you make up for the timing gap between the newly selected team and the existing teams, and what benefits will be provided?


“We will provide the additional participating company with the same total project period and the same government-supported GPUs, data, and other resources. Taking the difference in start dates into account, we will manage things so there is no disadvantage—for example, by flexibly adjusting the Stage 2 completion point or the evaluation window by about one month. We also plan to shorten administrative procedures to minimize the start-date gap itself.”

 

Did a failing threshold (a cutoff for disqualification) exist from the start? And if an eliminated company tries again and joins later, will there be any penalty in the next round?


“This project is not about simple ranking; the goal is to elevate the competitiveness of Korea’s AI companies to a global level through a fiercely competitive environment. For companies that re-enter, the results of Round 1 will have no impact whatsoever on the next round—they will make a fresh start. The intent is to provide a springboard for renewed growth so that all companies can develop through healthy stimulus and competition.”

 

Will you provide clearer guidelines in future open calls or evaluations on how you judge sovereignty—for example, around using external encoders?

 

“Using open source is a global trend, and we actively encourage it. However, the project’s minimum requirement must be upheld: rather than free-riding on someone else’s training experience, teams should directly conduct the training themselves. Based on this evaluation experience, we will make the standards more specific—such as applying differentiated scoring depending on the degree of direct training—so there is no confusion in future development and evaluation processes.”

 

Initially, you mentioned that eliminated companies would receive support for specialized models; why are you now offering another chance to participate in the main program? Won’t there be controversy over favoritism or fairness?


“This is absolutely not about favoring any particular company. The purpose is to allow as many companies as possible to leverage GPUs, a limited national resource, to build technical capabilities. Our goal is to grow the entire AI ecosystem by having participants contribute their results back as open source. If no new participants come forward, we are also considering an alternative approach: concentrating those resources more heavily on the three existing companies.”

 

Were the same sovereignty standards applied to teams other than Naver Cloud? Was there any disagreement among experts?

 

“After verifying the technical reports of all teams other than Naver, we confirmed that they satisfied both the direct-design requirement and the weight-training requirement. In the cases of Upstage and SK Telecom, there were criticisms related to how references were cited, but experts broadly agreed this was a shortcoming from an ethics standpoint, not a technical defect that would determine whether they passed or failed.”

 

Did Naver ask questions in advance about the encoder issue? And after the controversy, what explanation did Naver provide?

 

“There were no advance inquiries from the company, and we assume it made its judgments based on the criteria in the notice and the briefing session. After the controversy, Naver explained that it possesses its own encoders and that the proportion of the encoder used was low; however, the evaluation process had already concluded at that point, so we did not reflect that explanation in the results.”

 

Will the Stage 2 evaluation criteria be the same as Stage 1, or will there be changes?


“The broad framework—benchmarking, expert evaluation, and user evaluation—will remain. However, we plan to specify elements such as differentiated scoring depending on the degree of direct training. We also intend to design the evaluation dynamically, flexibly reflecting new trends—such as Physical AI—in line with the pace of technological change.”

 

Why was the final selection of two teams moved up to the end of this year, and will you disclose the specific scores for companies other than LG?


“With stage-by-stage evaluations, the structure naturally leads to two remaining teams by the end of this year, and we plan to continue support through 2027. We have decided not to disclose scores or rankings in order to prevent direct or indirect harm to the companies. Please focus on the significance that all five teams achieved results that drew global attention in a short period of time.”

 

Compared to global big-tech models, how close are Korea’s models at this point?


“Compared to the target models each company set, some companies, including LG, reached 100% or more of their target performance. A gap remains when measured against top-tier frontier-class AI, but we are continually catching up to, and working to surpass, a moving target. We will spare no effort in providing support so these models can become a source of pride for the Republic of Korea.”

 

This article was translated from the original that appeared on INVEN.
