training-optimization

Advanced techniques for optimizing LLM fine-tuning. Covers learning rates, LoRA configuration, batch sizes, gradient strategies, hyperparameter tuning, and monitoring. Use when fine-tuning models.
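The coverage list above mentions LoRA configuration. As a minimal illustration (not taken from the skill itself), the sketch below shows how the LoRA rank `r` sets the trainable-parameter budget for a single weight matrix, assuming the standard low-rank factorization LoRA uses; the function names are hypothetical:

```python
# Hypothetical sketch: how LoRA rank `r` controls trainable parameters
# for one d x k weight matrix. LoRA swaps the full-rank update (d * k
# params) for two low-rank factors A (r x k) and B (d x r), giving
# r * (d + k) trainable params instead.

def lora_trainable_params(d: int, k: int, r: int) -> int:
    """Trainable parameters LoRA adds for one d x k weight at rank r."""
    return r * (d + k)

def full_finetune_params(d: int, k: int) -> int:
    """Parameters updated by fully fine-tuning the same weight."""
    return d * k

if __name__ == "__main__":
    d = k = 4096  # e.g. an attention projection in a 7B-class model
    full = full_finetune_params(d, k)
    for r in (8, 16, 64):
        lora = lora_trainable_params(d, k, r)
        print(f"r={r:>3}: {lora:>9,} params ({lora / full:.2%} of full fine-tuning)")
```

Doubling the rank doubles the trainable-parameter count, which is why rank is usually the first LoRA knob to tune against memory and quality.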

by ScientiaCapital
Also installable via the skills CLI:
npx skills add ScientiaCapital/unsloth-mcp-server/.claude/skills/training-optimization

Source

Path: .claude/skills/training-optimization (main)
