superbpe
Train and use SuperBPE tokenizers for 20-33% token reduction across any project. Covers training, optimization, validation, and integration with any LLM framework. Use when you need efficient tokenization.
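The "20-33% token reduction" figure compares token counts for the same text under a baseline tokenizer and a SuperBPE tokenizer. A minimal sketch of how such a reduction percentage is computed (the function name and example counts are illustrative, not part of this skill's API):

```python
def token_reduction_pct(baseline_tokens: int, superbpe_tokens: int) -> float:
    """Percent fewer tokens produced by SuperBPE relative to the baseline.

    baseline_tokens: token count from the original tokenizer.
    superbpe_tokens: token count from the SuperBPE tokenizer on the same text.
    """
    if baseline_tokens <= 0:
        raise ValueError("baseline token count must be positive")
    return 100.0 * (baseline_tokens - superbpe_tokens) / baseline_tokens


# Example: 1000 baseline tokens vs 750 SuperBPE tokens -> 25.0% reduction,
# within the 20-33% range the skill description cites.
print(token_reduction_pct(1000, 750))
```

A drop in this metric translates directly into cheaper inference and longer effective context, since model cost scales with token count.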
Also installable via the skills CLI:
npx skills add ScientiaCapital/unsloth-mcp-server/testing/superbpe