At the Game Developers Conference 2026 (GDC 2026), Tencent showcased its three core AI solutions to the world: ‘MagicDawn,’ ‘VISVISE,’ and ‘ACE.’
According to Tencent, the most decisive shift in its AI technology this year, compared to last year, is the transition from the use of standalone tools to the integration of AI across the production pipeline.

Beyond simply generating images or code, Tencent focused on delivering stable, high-quality outputs under complex game industrialization standards. Through on-site demonstrations, Tencent emphasized AI’s fundamental reshaping of game development productivity as its core innovation.
‘VISVISE’ automates the entire process from skeletal rigging to animation generation, freeing up creative time for artists. ‘MagicDawn’ overcomes mobile computing limitations to deliver high-performance global illumination across platforms, and ‘ACE’ leverages real-time decision-making capabilities in highly competitive environments to set new security standards.
This effectively conveyed that AI is no longer just a concept, but a technology deeply integrated into every stage of planning, development, and operations.
Tencent views AI as a paradigm shift that surpasses the impact of the mobile internet. The core philosophy is that AI strengthens the foundation of efficiency, upon which human creativity can be fully realized. By freeing developers from repetitive tasks, it enables them to focus on the final 10 percent of artistic refinement and creative breakthroughs.
To realize this vision, Tencent implemented a horizontal innovation mechanism in resource allocation, allowing any young talent to propose and validate AI exploration projects with access to GPU computing power and data. Additionally, the company strengthened organizational collaboration by tasking the Middle Platform, its central technology infrastructure, with building foundation models and managing a closed data loop (where data circulates internally without external leakage). This structure allows project teams to focus entirely on specific gameplay implementation, significantly accelerating the deployment of new projects.
For practical and legal risk management, Tencent strictly adheres to the principles of a closed data loop and human-machine collaboration. To ensure quality consistency, the company utilizes its own commercially validated, high-quality 3D assets for training rather than relying on public data. This approach ensures outputs meet industry standards in topology and texture precision. AI-generated outputs are treated as first drafts, with full editability for secondary human refinement and fine-tuning. To mitigate copyright and ethical risks at the source, Tencent complies with all relevant laws and regulations while operating advanced internal data refinement and auditing systems.
Tencent expects these deep-tech innovations to dramatically transform the end-user experience. ‘VISVISE’ increases content scale, allowing players to encounter NPCs with richer movements and experience more personalized interactions. ‘MagicDawn’ enables PC-level dynamic lighting and spatial audio on mobile devices, effectively breaking down visual barriers between devices. ‘ACE’ ensures a fair and clean gaming environment.
Tencent stated that as AI-native gameplay becomes fully realized, players will experience a truly open world that continuously evolves. Its long-term goal is to build a technology infrastructure that spans the entire game lifecycle. By opening up its AI capabilities as foundational infrastructure for the global gaming ecosystem, Tencent aims to lead the standardization of high-quality production tools and support the industry’s overall advancement.
The following is from a question-and-answer session with Tencent’s core development team regarding each solution.
MagicDawn: Rendering Innovation That Breaks Hardware Limits
How do technologies such as cloud rendering or AI-based neural-rendering dynamic GI overcome device-spec limitations to deliver high-quality visuals?
‘MagicDawn’ overcomes rendering limitations caused by differences in hardware specifications through distributed cloud baking and AI neural rendering technologies. By leveraging cloud-based distributed GPU clusters, it can achieve baking efficiency dozens of times higher than conventional methods. For instance, a baking task that previously took eight hours can now be shortened to just 12 minutes.
During actual gameplay, the user’s device only needs to load pre-baked lighting data, which eliminates the burden of complex real-time lighting calculations. Furthermore, MagicDawn applies proprietary AI compression technology to efficiently control package size while maintaining data precision, further reducing device load. Consequently, whether a user is on a high-end console, a low-end smartphone, or an ultra-light laptop, they can experience consistent cinematic-quality lighting effects without being restricted by local GPU performance. This innovation completely shatters the traditional limitation where device performance dictates the visual ceiling.
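The bake-once, load-cheaply idea can be sketched as follows. This is a toy illustration only: the function names, the single-bounce attenuation stand-in for path tracing, and the data layout are all invented for clarity, not MagicDawn’s actual pipeline or API.

```python
# Hypothetical sketch: expensive global illumination is computed once
# offline (shardable across a cloud GPU cluster), and the device only
# performs a cheap lookup at runtime.

def bake_lightmap(scene_patches, bounces=4):
    """Offline step: simulate indirect light per patch.
    In a cloud setup this loop is distributed across many GPUs."""
    lightmap = {}
    for patch_id, base_light in scene_patches.items():
        energy = base_light
        indirect = 0.0
        for _ in range(bounces):
            energy *= 0.5          # toy attenuation per bounce
            indirect += energy
        lightmap[patch_id] = base_light + indirect
    return lightmap

def shade_at_runtime(lightmap, patch_id):
    """Runtime step: the device just loads precomputed lighting data."""
    return lightmap[patch_id]

scene = {"floor": 1.0, "wall": 0.6}
baked = bake_lightmap(scene)          # done once, in the cloud
print(shade_at_runtime(baked, "floor"))

# The quoted speedup: an 8-hour bake cut to 12 minutes is
print(8 * 60 / 12, "x faster")        # 40.0 x
```

The 40x figure matches the article’s “dozens of times” claim for the eight-hour-to-twelve-minute example.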
What measurable improvements in visual quality and performance does MagicDawn deliver?
Based on the same Unity engine scene shown in the demonstration video, compared to traditional real-time lighting solutions, adopting MagicDawn’s lighting baking solution can boost the rendering frame rate by more than 1.5 times. It also maintains lower computational overhead and a more stable frame rate during map loading and area transitions.
Meanwhile, through proprietary visual enhancement tools for preventing light leaks and optimizing seams, MagicDawn resolves common visual artifacts found in conventional lighting workflows. This further optimizes lighting precision and overall texture quality, resulting in visuals that are both more stable and higher in quality.
Can MagicDawn be quickly deployed in live-service games without changing existing workflows?
MagicDawn has been successfully implemented in both upcoming and live-service titles such as Honor of Kings: World, Wuthering Waves, and Roco Kingdom: World. This demonstrates its full compatibility with existing workflows, as developer convenience was a core objective from the initial design phase.
Tencent has built a standardized cross-engine toolchain that integrates seamlessly with mainstream engines such as Unity and Unreal Engine, allowing for immediate application without restructuring existing art, design, or technical pipelines. For example, in Unity, high-quality lighting baking features can be activated instantly via a plugin. For live-service games, MagicDawn also supports low-intrusion upgrades, helping ensure business continuity.
Compared with the native lighting and audio features of major commercial engines, what is MagicDawn’s decisive differentiator?
MagicDawn’s key strength lies in its highly efficient, highly stable distributed cloud baking. By leveraging cloud computing, it boosts baking efficiency by dozens of times, avoids local baking crashes, and significantly accelerates project iteration. Another major advantage is its high-quality dynamic lighting. It utilizes unbiased path tracing and PRT dynamic GI to fully support dynamic scenes such as time-of-day transitions. Automated light-leak prevention further enhances visual quality.
AI-driven package optimization is another strength. Its proprietary AI lighting compression technology effectively controls package size while maintaining data precision, helping address limitations in native engine capabilities. Finally, its full-pipeline automated toolchain spanning both lighting and audio eliminates tedious manual operations, allowing teams to focus on creative work.
How much time does the automated spatial-audio solution, including automatic probe generation, save for creative teams?
MagicDawn’s spatial audio solution addresses one of the most persistent pain points in modern open-world audio production. For large open worlds spanning 10 km or more, audio designers traditionally had to spend weeks manually generating acoustic probes and configuring materials. The process is not only repetitive and tedious, but also highly prone to error.
MagicDawn automates this entire process, from probe generation to runtime adaptation, completely removing this inefficient stage. As a result, audio teams can conserve time and energy and focus instead on core creative work such as sound design and atmosphere building.
ACE (Anti-Cheat Expert): AI Security for a Flawless Game Ecosystem
Have there been meaningful numerical improvements in identifying gold-farming accounts and detecting illicit users since the adoption of AI?
The integration of AI has brought significant and measurable performance improvements. In terms of in-game cheat detection, AI models use deep learning to analyze players’ decision-making intent and determine the likelihood that a given behavior is normal or abnormal. This has increased cheat detection accuracy in genres such as MOBAs by approximately 80%. For gold-farming accounts, ACE’s economic security control solution can more precisely identify cross-regional illicit-account activity and abnormal trading networks.
In 2024 alone, the Tencent Security Team penalized 187.65 million gold-farming accounts, a 75.5% increase from the previous year. By leaving highly complex cheating behaviors with nowhere to hide while drastically reducing manual monitoring costs, ACE has established an automated response system that delivers both high coverage and high accuracy.
What is ACE’s technical moat in providing multidimensional protection compared with competing security products?
ACE is a comprehensive, multi-layered, and multidimensional security system. Its one-stop cheat testing platform and automated analysis system significantly improve cheat analysis and forensic efficiency. With industry-leading client hardening technology, it prevents reverse engineering of logic data, raising the barrier to cheat creation. ACE protects games from all angles through a multidimensional anti-cheat system covering samples, functions, behaviors, replays, and image/video analysis.
While many competing products rely primarily on simple client-side blocking or sample-signature matching of known malware patterns, ACE’s true technical moat lies in nearly 20 years of accumulated data, security confrontation technology, and operational experience. Its core competitiveness is a unique multidimensional protection system that combines client-server linkage with software-hardware synergy, including driver-level protection at the lowest hardware layer.
What machine-learning know-how allows ACE to distinguish mobile bot farming (macros) from legitimate players without false positives?
The key to reliably distinguishing human players from macro scripts lies in our strict four-stage machine-learning closed loop.
First, ACE acquires precise seed labels by collecting confirmed cheat samples as training data. Next, it extracts multidimensional features based on the fundamental differences in touch trajectories and behavior patterns between bot scripts and real players, clearly separating machine commands from human input.
It then cross-checks the account’s user behavior profile, including past gameplay habits, to further improve prediction accuracy. Finally, ACE applies a gray-box feedback mechanism. To ensure that legitimate players are never penalized, all data feedback is repeatedly validated, and formal sanctions are applied only after the possibility of false positives has been fully eliminated.
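The four stages described above can be sketched as a simple decision pipeline. Every threshold, feature name, and score here is invented for illustration; the production system uses trained deep models rather than hand-written rules.

```python
# Illustrative sketch of the four-stage closed loop: seed labels,
# multidimensional features, profile cross-check, gray-box validation.

def extract_features(session):
    """Stage 2: features that separate machine input from human touch."""
    return {
        # Macros replay near-identical touch paths; humans vary.
        "trajectory_variance": session["trajectory_variance"],
        # Humans show jitter in inter-tap timing; scripts are metronomic.
        "timing_jitter_ms": session["timing_jitter_ms"],
    }

def score_session(features, seed_model):
    """Stage 1+2: a model trained on confirmed cheat samples (seed labels)
    scores how bot-like the session looks (1.0 = certainly a bot)."""
    score = 0.0
    if features["trajectory_variance"] < seed_model["min_variance"]:
        score += 0.5
    if features["timing_jitter_ms"] < seed_model["min_jitter_ms"]:
        score += 0.5
    return score

def decide(session, profile, seed_model, validations_required=3):
    """Stages 3-4: cross-check the account profile, then require repeated
    gray-box validation before any formal sanction."""
    score = score_session(extract_features(session), seed_model)
    # Stage 3: a long, ordinary play history lowers suspicion.
    if profile["days_of_normal_play"] > 30:
        score -= 0.3
    # Stage 4: sanction only after repeated confirmations, never on one hit.
    if score >= 0.9 and session["confirmations"] >= validations_required:
        return "sanction"
    return "observe"   # err on the side of legitimate players

seed_model = {"min_variance": 0.05, "min_jitter_ms": 4.0}
bot = {"trajectory_variance": 0.01, "timing_jitter_ms": 1.0, "confirmations": 3}
human = {"trajectory_variance": 0.2, "timing_jitter_ms": 25.0, "confirmations": 0}
print(decide(bot, {"days_of_normal_play": 2}, seed_model))    # sanction
print(decide(human, {"days_of_normal_play": 90}, seed_model)) # observe
```

The key design point mirrored here is that the loop is asymmetric by construction: a single suspicious signal can only move an account to “observe,” never directly to a sanction.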
What are the most threatening new cheat methods today, and what are the preemptive countermeasures?
The most threatening method today is DMA (Direct Memory Access) hardware cheating, which stealthily accesses memory at the lowest hardware level. In response, ACE partnered with Microsoft in 2025 to launch a CPU virtualization-based countermeasure, using hardware security features at the operating-system level to block abnormal memory access at the source.
ACE also introduced an anti-cheat pre-boot mode that intercepts violating processes during the earliest loading stage. During live matches, it uses a replay behavior analysis solution that can immediately interrupt a match once cheating is confirmed, thereby protecting legitimate players. At the same time, Tencent is also pursuing legal enforcement against cheat-production groups. Last year alone, ACE helped police solve more than 40 cases and arrest more than 200 suspects, with the total value of those cases exceeding RMB 100 million.
Can you share a case where the introduction of the industry’s first iOS hardening solution led to the protection of actual revenue by blocking cracked versions?
While the direct revenue impact of security solutions is ultimately assessed by each publisher, there is strongly indicative defense data. For one well-known game publisher in China, more than 300 cracked versions of a single title were being detected each day. After the solution was applied, that number was cut in half immediately and fell to nearly zero within a week.
Another publisher reported that, after implementation, its existing cracked versions became completely unusable. Cheat sellers even complained publicly that the security had made cracking too difficult. In addition to external customers, the solution has been adopted across the majority of Tencent’s own games, and to date there has been virtually no feedback indicating that the protection has been successfully breached.
How is the massive database of 150,000 cheat samples applied in real-time to customized dynamic anti-cheat?
The vast sample data accumulated over 20 years is applied in real time across three layers. First is the foundational response layer, where feature data is used directly to train detection models and maintain a high baseline detection rate. Second is the advanced customized response layer, where high-risk feature data is used to quickly formulate game-specific defense strategies for precise intervention. Finally, at the threat-intelligence layer, ACE continuously monitors the global black market to anticipate how cheats are evolving and rapidly converts newly identified threats into deployable countermeasures.
What is the secret to linking client and server-side solutions across multiple environments without performance degradation?
The answer lies in our design philosophy: ACE was built from the outset with extreme performance efficiency in mind, rather than indiscriminate defense. While traditional security software often encrypts entire codebases and places a heavy load on the framework, ACE uses page-skipping encryption, selectively encrypting only core data to strike a balance between security and performance.
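The page-skipping idea can be sketched in a few lines. The XOR “cipher,” page size, and core-page table below are placeholders chosen for brevity, not ACE’s actual scheme; the point is only that decryption cost is paid solely on the pages flagged as core logic.

```python
# Toy sketch of selective ("page-skipping") encryption: only pages
# containing core logic are encrypted, so most pages load with zero
# decryption overhead.

PAGE_SIZE = 4096
KEY = 0x5A   # placeholder cipher: single-byte XOR, illustration only

def protect(image: bytes, core_pages: set) -> list:
    """Split a binary image into pages and encrypt only the core ones."""
    pages = [image[i:i + PAGE_SIZE] for i in range(0, len(image), PAGE_SIZE)]
    return [
        bytes(b ^ KEY for b in page) if idx in core_pages else page
        for idx, page in enumerate(pages)
    ]

def load_page(pages: list, idx: int, core_pages: set) -> bytes:
    """Runtime loader: pay decryption cost only on protected pages."""
    page = pages[idx]
    if idx in core_pages:
        return bytes(b ^ KEY for b in page)   # decrypt on demand
    return page                               # fast path: no crypto work

image = bytes(range(256)) * 64                # 16 KiB fake binary = 4 pages
core = {1}                                    # only page 1 holds core logic
protected = protect(image, core)
assert load_page(protected, 1, core) == image[PAGE_SIZE:2 * PAGE_SIZE]
assert protected[0] == image[:PAGE_SIZE]      # non-core pages untouched
```

The trade-off this illustrates is exactly the one the answer describes: whole-image encryption maximizes protection but taxes every page load, while selective encryption concentrates both the security and the cost on the data that actually matters.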
Additionally, it’s built on a zero-coupling architecture that is fully independent of the game engine, enabling it to support a wide range of commercial engines without issue. It also incorporates a strict deployment process through a dedicated compatibility lab, along with powerful built-in disaster recovery and fallback mechanisms, to ensure stable gameplay even under extreme network conditions.
VISVISE: A Creativity Amplifier for Game Developers
Yu Xiang Yang, VISVISE Product Manager, AI Engine Department, Tencent Games
How do you interpret players’ negative sentiment toward AI-generated content, and why do you believe VISVISE can help reduce it?
Players’ negative sentiment toward AI largely stems from the limitations of early AI content, which was often associated with low quality, cheapness, or a lack of copyright protection. If AI is used simply to mass-produce subpar results in the name of cost reduction, gamers will inevitably reject it. Tencent’s position is clear: B2B AI applications must be premised on improving the quality of the final output.
VISVISE is positioned not as an automated replacement for people, but as a professional productivity tool. By taking over repetitive tasks such as auto-skinning, 3D animation generation, and similar production work, it allows artists to focus on core creative tasks such as refining subtle facial expressions and designing distinctive motions. When players experience more vivid characters, richer detail, and higher-quality results, the quality premium created by the technology will naturally help shift perceptions.
What specific efficiency gains has VISVISE delivered, and how much has it shortened development timelines overall?
According to Tencent’s production data, VISVISE delivers order-of-magnitude efficiency gains across pipeline stages. With VISVISE Auto-Skinning, simple tasks are reduced from three days to one day, while complex tasks are shortened from four days to 2.5 days, with average completion times often within four hours. In addition, VISVISE’s MIB intelligent frame interpolation can generate 200 frames of animation in just four seconds, dramatically accelerating a process that would otherwise take days of manual work. AI is also improving efficiency in other production stages, including voice acting and lighting.
These gains make it possible to increase the number of content-iteration cycles and speed up overall development. At the same time, because game development is a complex systems-engineering process, the extent to which total project timelines can be reduced depends on how deeply AI tools are embedded in the production pipeline.
In which creative areas are developers reinvesting the resources saved from automating repetitive tasks, such as high-poly model conversion?
First is expanding the breadth and depth of creativity by designing more diverse characters and environments and experimenting more quickly with new styles. Second, developers can invest more in quality polishing and detail work, such as textures, lighting, and fine facial expressions, thereby raising the artistic quality of the final output.
Third, faster asset production enables gameplay innovation, allowing teams to test new mechanics and build richer level content. Finally, teams can reinvest resources into forward-looking R&D, including rendering technologies and deeper AI integration, to strengthen long-term competitiveness.
How does the intelligent frame interpolation technology that generates animation in just four seconds maintain high fidelity and natural motion, even in physically complex scenes?
The MIB model combines large-scale pre-trained motion representations with an autoregressive generation framework, enabling it to learn and understand the physical transition rules between movements. Even for complex actions such as combat or dancing, it can generate in-between frames that conform to human dynamics and natural shifts in center of gravity. This significantly reduces common issues such as foot sliding and shape distortion. For realistic locomotion, the resulting quality is already approaching that of optical motion capture.
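The idea of generating in-between frames one at a time, each conditioned on the frame before it, can be illustrated with a toy example on a single joint angle. This is a stand-in for the autoregressive concept only; MIB is a large learned motion model, not an interpolation formula.

```python
# Toy autoregressive in-betweening: each new frame is generated from the
# previous frame and pulled toward the target keyframe, so the motion
# eases in rather than jumping linearly between keyframes.

def inbetween(start, end, n_frames, smoothing=0.5):
    frames = [start]
    for i in range(1, n_frames + 1):
        prev = frames[-1]
        # Blend momentum (previous frame) with progress toward the target.
        target = start + (end - start) * i / n_frames
        frames.append(smoothing * prev + (1 - smoothing) * target)
    return frames

frames = inbetween(0.0, 90.0, 5)
print(frames)   # monotonically eases from 0 toward 90
```

The learned model does the analogous thing in a far richer space: instead of blending scalars, it samples each next pose from motion representations that encode human dynamics, which is what suppresses artifacts like foot sliding.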
What is the technical principle behind accurate 3D motion capture even when figures are overlapped or occluded in a video?
Occlusion introduces ambiguity, which is a persistent industry challenge. Rather than simply copying visible movement, VISVISE is built on a generative foundation model that deeply analyzes contextual information before and after the occluded segment. By evaluating the probability distribution of motion in the hidden portion, it predicts the most physically plausible continuation, allowing it to generate smooth and coherent motion even when people overlap or body parts are obscured.
How much has one-click AI skinning and intelligent rigging reduced the burden of post-processing for complex clothing?
For standard humanoids and clothing, we have achieved an automation rate of over 90% for both the main skeleton and physical bones. Artists no longer need to start from scratch; they can simply review the results and make minor refinements. Even for complex AAA-grade models, the rate exceeds 50%, and our dedicated AI module for multi-layered skirts has pushed automation up to 80%.
Traditionally, complex clothing skinning that once took 1.5 to 3.5 days is now compressed to 1.5 to 4 hours, while simple clothing takes under 1.5 hours. As a result, grueling manual labor that took days has transitioned into a few hours of review, reducing the workload by 80–90%.
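The quoted 80–90% reduction can be checked with back-of-the-envelope arithmetic. The article gives durations in days and hours without defining a “day,” so an 8-hour workday is assumed here.

```python
# Sanity check of the quoted workload reduction for complex clothing
# skinning (1.5-3.5 days before, 1.5-4 hours after), assuming 8-hour days.

def reduction(before_hours, after_hours):
    return 1 - after_hours / before_hours

print(reduction(1.5 * 8, 1.5))   # best case: 0.875, i.e. ~88% saved
print(reduction(3.5 * 8, 4.0))   # worst case, still above 80% saved
```

Both ends of the range land inside the article’s 80–90% band under that assumption.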
What is VISVISE’s mid- to long-term roadmap, including planned multimodal AI capabilities?
For us, technical roadmap planning always revolves around one core proposition: how to make AI-generated content truly part of the industrialized production pipeline. Tencent will continue to explore multimodal technologies that can convert diverse forms of input into production-ready assets, but development priorities will remain grounded in the real needs of artists working in production.
Rather than chasing vague technical buzzwords, Tencent says it prefers to focus on solving the genuine pain points of frontline art teams, following a “look three steps ahead, take one step at a time” approach. The team is already working closely with technical artists and project teams to identify bottlenecks in real production workflows and iteratively improve them. Going forward, VISVISE plans to introduce more production-oriented art optimization processes and AI agent capabilities to make workflows more intuitive, accessible, and efficient.
This article was translated from the original that appeared on INVEN.