By Yipeng Ouyang · May 9, 2026
The project had gone by many names: Skill Compiler, Nexa Skill Compiler (NSC), Agent Skill Compiler (ASC). Each captured part of the vision, but none felt definitive. In May 2026, we settled on SkCC: short, memorable, and directly encoding the project's essence as a Skill Compiler for Cross-framework agents.
The naming wasn't just cosmetic. It reflected a maturation of the project's identity. We weren't just building a tool for the Nexa ecosystem — we were building a standard for the entire agent skill ecosystem. The name needed to stand on its own.
SkCC = Skill Compiler for Cross-framework LLM Agents. Write once, run anywhere.
With the compiler stable and the experiments complete, we turned to writing. The paper, "SkCC: Portable and Secure Skill Compilation for Cross-Framework LLM Agents," was structured around the "From Engineering Implementation to Scientific Argument" methodology.
The numbers told a compelling story: SkCC-compiled skills consistently outperformed format-agnostic baselines.
The ablation study was particularly revealing. The same Kimi-compiled format produced markedly different effects across models: positive on Kimi (d=+0.33), roughly neutral on GLM-5 (d=−0.03), and slightly negative on DeepSeek (d=−0.14). This showed that compilation gains are model-specific; there is no one-size-fits-all optimal format. That finding is the empirical foundation for SkCC's multi-backend architecture.
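For readers unfamiliar with the effect-size metric behind those numbers, Cohen's d is the difference of group means divided by the pooled standard deviation. A minimal sketch, with hypothetical per-task scores (the data and names below are illustrative, not from the paper):

```rust
// Cohen's d effect size, as used in the ablation numbers above.
// Data and variable names are hypothetical, for illustration only.

fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

// Sample variance (Bessel-corrected, dividing by n - 1).
fn variance(xs: &[f64]) -> f64 {
    let m = mean(xs);
    xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() as f64 - 1.0)
}

/// Cohen's d: difference of means over the pooled standard deviation.
fn cohens_d(treatment: &[f64], control: &[f64]) -> f64 {
    let (n1, n2) = (treatment.len() as f64, control.len() as f64);
    let pooled = (((n1 - 1.0) * variance(treatment) + (n2 - 1.0) * variance(control))
        / (n1 + n2 - 2.0))
        .sqrt();
    (mean(treatment) - mean(control)) / pooled
}

fn main() {
    // Hypothetical per-task success rates: compiled skill vs. raw baseline.
    let compiled = [0.82, 0.78, 0.90, 0.85, 0.80];
    let baseline = [0.70, 0.72, 0.75, 0.68, 0.74];
    println!("d = {:.2}", cohens_d(&compiled, &baseline));
}
```

A positive d means the compiled variant scored higher; by common convention, |d| around 0.2 is a small effect and 0.5 a medium one, which is why a per-model d of +0.33 versus −0.14 is a meaningful spread.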
On May 9, we pushed the final version to GitHub.
The paper was submitted to the AgentSkills'26 Workshop at ACM CAIS 2026 and uploaded to arXiv (2605.03353).
SkCC is just the beginning. The compiler architecture is designed for extensibility: new framework emitters can be added by implementing a single trait, and we're still exploring where to take it next.
The vision remains the same as that March whiteboard sketch: write once, run anywhere — but now it's not just a sketch. It's a working compiler, a submitted paper, and an open-source project ready for the community.