training-optimization
Advanced techniques for optimizing LLM fine-tuning. Covers learning rates, LoRA configuration, batch sizes, gradient strategies, hyperparameter tuning, and monitoring. Use when fine-tuning models.
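As a small illustration of the batch-size and gradient-strategy topics above, the effective batch size seen by the optimizer is the product of the per-device batch size, the gradient accumulation steps, and the device count (the variable names here are illustrative, not part of this skill):

```python
# Effective batch size = per-device batch * accumulation steps * devices.
# Gradient accumulation lets a memory-limited GPU emulate a larger batch
# by summing gradients over several forward/backward passes before stepping.
per_device_batch_size = 4
gradient_accumulation_steps = 8
num_devices = 1

effective_batch_size = (
    per_device_batch_size * gradient_accumulation_steps * num_devices
)
print(effective_batch_size)  # → 32
```

When tuning, it is common to hold the effective batch size fixed while trading per-device batch size against accumulation steps to fit GPU memory.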
Also installable via the skills CLI:
npx skills add ScientiaCapital/unsloth-mcp-server/.claude/skills/training-optimization