May 2026

SkCC: The Final Form
Paper Submission & Open Source

By Yipeng Ouyang · May 9, 2026

The Name

The project had gone by many names: Skill Compiler, Nexa Skill Compiler (NSC), Agent Skill Compiler (ASC). Each captured part of the vision but none felt definitive. In May 2026, we settled on SkCC — short, memorable, and directly encoding the project's essence: a Skill Compiler for Cross-framework LLM agents.

The naming wasn't just cosmetic. It reflected a maturation of the project's identity. We weren't just building a tool for the Nexa ecosystem — we were building a standard for the entire agent skill ecosystem. The name needed to stand on its own.

SkCC = Skill Compiler for Cross-framework LLM Agents. Write once, run anywhere.

The Paper

With the compiler stable and experiments complete, we turned to writing. The paper — "SkCC: Portable and Secure Skill Compilation for Cross-Framework LLM Agents" — was structured around the "From Engineering Implementation to Scientific Argument" methodology:

  • Motivation: Format sensitivity is a first-class concern. Maintaining m skills across n frameworks imposes an unsustainable O(m×n) adaptation burden. Security vulnerabilities are widespread and unaddressed.
  • Insight: A unified IR can decouple skill semantics from framework-specific formatting, mirroring classical compiler architecture.
  • Design: Four-phase pipeline — Syntax Parser → IR Builder → Security Optimizer → Target Emitter.
  • Evidence: SkillsBench experiments across 4 frameworks, ablation studies proving model-specificity, security and efficiency metrics.
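The four-phase design above can be sketched as a minimal Rust pipeline. Everything here is an illustrative assumption rather than the real crate's API: the type and function names are invented, and each phase is reduced to a toy transformation.

```rust
// Illustrative sketch of the four-phase pipeline; names and types are
// hypothetical, not the actual SkCC crate API.

/// Phase 1 output: a parsed skill definition.
#[derive(Debug)]
struct SkillAst {
    name: String,
    body: Vec<String>,
}

/// Phase 2 output: framework-neutral intermediate representation.
#[derive(Debug, Clone)]
struct SkillIr {
    name: String,
    steps: Vec<String>,
}

/// Phase 1: Syntax Parser — split raw source into a name and step lines.
fn parse(source: &str) -> SkillAst {
    let mut lines = source.lines().map(str::trim).filter(|l| !l.is_empty());
    SkillAst {
        name: lines.next().unwrap_or("unnamed").to_string(),
        body: lines.map(String::from).collect(),
    }
}

/// Phase 2: IR Builder — lower the AST into a neutral step list.
fn build_ir(ast: SkillAst) -> SkillIr {
    SkillIr { name: ast.name, steps: ast.body }
}

/// Phase 3: Security Optimizer — drop steps matching a known anti-pattern
/// (a single substring check stands in for the real analysis).
fn optimize(mut ir: SkillIr) -> SkillIr {
    ir.steps.retain(|s| !s.contains("rm -rf"));
    ir
}

/// Phase 4: Target Emitter — render the IR for a specific framework.
fn emit(ir: &SkillIr, target: &str) -> String {
    match target {
        "claude-code" => format!("# {}\n{}", ir.name, ir.steps.join("\n")),
        _ => format!("{}: {}", ir.name, ir.steps.join("; ")),
    }
}

fn main() {
    let source = "lint-fixer\nRun the linter\nrm -rf /tmp/cache\nApply fixes";
    let ir = optimize(build_ir(parse(source)));
    println!("{}", emit(&ir, "claude-code"));
}
```

In the real compiler the Security Optimizer would run a full anti-pattern pass rather than a single substring check, but the shape is the same: each phase consumes the previous phase's output, and only the final phase knows anything about the target framework.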

Experimental Results

The numbers told a compelling story. SkCC-compiled skills consistently outperformed format-agnostic baselines:

  • Kimi CLI: 35.1% → 48.7% pass rate (+13.6pp, p=0.0063)
  • Claude Code: 21.1% → 33.3% pass rate (+12.2pp, p=0.0103)
  • Codex CLI: 38.5% → 42.3% pass rate (+3.8pp)
  • Gemini CLI: 22.2% → 22.2% pass rate (unchanged; +0.019 reward gain)

The ablation study was particularly revealing. The same Kimi-compiled format produced dramatically different effects across models: strongly positive on Kimi (d=+0.33), neutral on GLM-5 (d=−0.03), and slightly negative on DeepSeek (d=−0.14). This showed that compilation gains are model-specific — there is no one-size-fits-all optimal format. This finding is the empirical foundation for SkCC's multi-backend architecture.

Open Source Release

On May 9, we pushed the final version to GitHub. The repository included:

  • The full Rust compiler (3 crates, 207 tests)
  • npm wrapper package for Node.js users
  • VS Code extension with syntax highlighting and real-time validation
  • Comprehensive documentation (15+ docs files)
  • This website with benchmark leaderboard and development blog

The paper was submitted to the AgentSkills'26 Workshop at ACM CAIS 2026 and uploaded to arXiv (2605.03353).

What's Next

SkCC is just the beginning. The compiler architecture is designed for extensibility — new framework emitters can be added by implementing a single trait. We're exploring:

  • Automated anti-pattern discovery from vulnerability corpora
  • Semantic-level adaptation (instruction simplification, procedure decomposition)
  • Runtime feedback integration for iterative optimization
  • WASM bindings for browser-based skill validation
  • Integration with emerging frameworks (OpenHands, Copilot, LangChain)
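As a sketch of that single-trait extension point, here is what an emitter interface could look like in Rust. The trait name, its methods, and the OpenHands backend below are all hypothetical, chosen for illustration rather than taken from the actual SkCC crates.

```rust
// Hypothetical emitter trait; the real SkCC trait and IR types may differ.

/// Framework-neutral skill IR (simplified for illustration).
struct SkillIr {
    name: String,
    steps: Vec<String>,
}

/// Adding a new framework backend means implementing this one trait.
trait Emitter {
    /// Target identifier, e.g. "openhands".
    fn target(&self) -> &'static str;
    /// Render the IR into the framework's native skill format.
    fn emit(&self, ir: &SkillIr) -> String;
}

/// Example backend for a new framework, written from scratch.
struct OpenHandsEmitter;

impl Emitter for OpenHandsEmitter {
    fn target(&self) -> &'static str {
        "openhands"
    }

    fn emit(&self, ir: &SkillIr) -> String {
        // Render steps as a numbered list under a markdown heading.
        let steps: Vec<String> = ir
            .steps
            .iter()
            .enumerate()
            .map(|(i, s)| format!("{}. {}", i + 1, s))
            .collect();
        format!("## Skill: {}\n{}", ir.name, steps.join("\n"))
    }
}

fn main() {
    let ir = SkillIr {
        name: "lint-fixer".into(),
        steps: vec!["Run the linter".into(), "Apply fixes".into()],
    };
    let backend = OpenHandsEmitter;
    println!("[{}]\n{}", backend.target(), backend.emit(&ir));
}
```

Because the compiler would only ever talk to backends through the trait, a new framework target needs no changes to the parser, IR, or optimizer — exactly the decoupling the classical compiler analogy promises.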

The vision remains the same as that March whiteboard sketch: write once, run anywhere — but now it's not just a sketch. It's a working compiler, a submitted paper, and an open-source project ready for the community.