In 1995, writing a 3D game meant negotiating with a dozen different graphics cards, each with its own API. DirectX ended that. It gave developers one standard interface and let hardware fragmentation become somebody else's problem. The result was better games and, at the same time, an explosion of developers who could suddenly build things that had previously been possible only for teams with armies of low-level engineers. 


On-device AI in gaming is at exactly that inflection point today. And Tryll Engine is building the DirectX for it. 


The Problem Is Real and Getting Worse 

Game developers want AI. Players want AI — smarter NPCs, context-aware assistants, dynamic worlds that react to what they actually do. The gap between what players expect and what studios ship is widening fast. 


Cloud LLMs are not the answer. Per-request billing at scale makes the feature economically irrational. Data egress creates compliance exposure that legal teams veto. And dependency on an external API puts your game's quality at the mercy of a third party: if they deprecate a model, your shipped game is effectively broken until you push an update. 


Self-hosting is not the answer either: running local models well demands AI engineering depth that most studios don't have. Great games still require bespoke prompt engineering, but Tryll shifts the burden from deep AI engineering to creative prompting — a skill many game designers already possess or can easily acquire. 


Studios have been stuck between two bad options. Tryll Engine is the third option. 



What Tryll Actually Does 

Tryll handles the heavy lifting at the infrastructure level: it selects and runs the optimal quantized open-source model (1B–7B), manages Speech-to-Text and Text-to-Speech, and handles complex tool calling and knowledge search—all while ensuring a controlled load on the player's hardware. It’s not just about running a model; it’s about managing the entire local AI lifecycle without killing the framerate. 
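To make "a controlled load on the player's hardware" concrete, here is a minimal sketch of VRAM-aware model selection — the kind of decision a local AI runtime has to make before loading anything. The tier names, thresholds, and memory rule of thumb are illustrative assumptions, not Tryll's actual API or numbers:

```python
# Hypothetical sketch: pick the largest quantized model tier that fits in
# spare VRAM while leaving the game's rendering budget untouched.
# Tier names and thresholds are invented for illustration.

def pick_model_tier(free_vram_gb: float) -> str:
    """Map free VRAM to a quantized model tier, reserving headroom.

    Rough rule of thumb: a 4-bit quantized model needs on the order of
    0.6 GB per billion parameters plus KV-cache; we reserve ~2 GB for
    the game itself before sizing the model.
    """
    budget = free_vram_gb - 2.0  # keep ~2 GB for rendering
    if budget >= 4.5:
        return "7B-q4"
    if budget >= 2.0:
        return "3B-q4"
    if budget >= 0.8:
        return "1B-q4"
    return "cpu-fallback"

print(pick_model_tier(8.0))  # a mid-range 8 GB card comfortably fits 7B-q4
```

The interesting part is not the thresholds but the contract: the game asks for "the best model that fits," and the runtime owns the trade-off between model quality and framerate.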


Above this core, Tryll provides a flexible SDK that lets developers build custom AI pipelines tailored to their game's specific needs. Recognizing that most studios aren't AI labs, Tryll bridges the gap with extensive documentation, a library of examples, and a community-driven approach to prompt engineering and model fine-tuning. Whether it’s a lore-heavy NPC or a complex game-state assistant, Tryll provides the stack to build it without needing a PhD in machine learning. 


Why the Timing Is Right 

Consumer GPUs crossed a threshold. An 8GB VRAM card — now mid-range — can run a quantized 7B model at acceptable quality. Two years ago this was a research curiosity. Today it's the configuration sitting in hundreds of millions of gaming PCs. The hardware democratization has happened; the software layer to exploit it hasn't caught up. 


Simultaneously, open-source model quality has improved dramatically. Llama, Mistral, and their successors have closed the gap with proprietary models for narrow, well-prompted tasks. An NPC that knows your game's lore doesn't need GPT-5 — it needs a good 3B model with excellent RAG. Tryll's architecture is built precisely around this insight. 
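To make the "good 3B model with excellent RAG" idea concrete, here is a toy sketch of lore-grounded retrieval: the relevant snippets are fetched first, then prepended to the prompt so even a small model answers from the game's canon. Naive word overlap stands in for real embeddings here, and the lore lines and function names are invented for illustration:

```python
# Toy RAG sketch (not Tryll's implementation): retrieve the most relevant
# lore snippets by word overlap, then build a prompt grounded in them.

LORE = [
    "The Ember Gate was sealed after the Mage Wars to contain the flame wyrms.",
    "Captain Serra commands the harbor watch and distrusts outsiders.",
    "Moonpetal herbs grow only in the caves beneath the old mill.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by shared words with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved lore so a small local model answers from canon."""
    context = "\n".join(retrieve(question, LORE))
    return f"Use only this lore:\n{context}\n\nPlayer asks: {question}\nNPC:"

print(build_prompt("Where do moonpetal herbs grow?"))
```

Swap the word-overlap scorer for a small embedding model and this is the whole trick: the model doesn't need to know the lore, it only needs to read it.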


The market numbers confirm the direction: AI in gaming is projected to grow from $3.3B in 2024 to over $51B by 2033. The interesting question isn't whether AI will be in games — it's which infrastructure layer wins. We think it's on-device, and we think Tryll is building that layer. 


The Competitive Moat 

Inworld AI ($125M raised) built for a cloud-hybrid model, which is fine until studios look at the bill. Convai and Artificial.Agency are solving conversational AI, not infrastructure. None of them are focused on the stack that makes on-device deployment turnkey. 


Tryll's moat deepens with every studio that ingests game content, approves prompts, and embeds the SDK. Switching costs compound: it's not just replacing a library, it's re-doing the knowledge pipeline, re-validating outputs, re-integrating the runtime. That's a meaningful lock-in that grows with usage. 



What We Expect to Be True 

We're backing a thesis, not just a product. The thesis: on-device AI in games will follow the same path as 3D graphics acceleration. It starts fragmented and expensive, then a standardization layer emerges, and suddenly a feature that required a specialist team is accessible to a solo developer with a weekend. 


Tryll Engine is that standardization layer. The developer who today spends three months integrating a cloud LLM into their Unity game will, in two years, drop in the Tryll plugin and have something better running in a week. 



The most important infrastructure bets don't look like infrastructure. They look like developer tools. DirectX looked like a developer tool. Tryll Engine looks like a developer tool. That's exactly right.