
Published January 8, 2026 | Version V1.0
Preprint Open

GEKO: Gradient-Efficient Knowledge Optimization, A Plug and Play Training Framework for Intelligent Sample Selection

  • Independent Researcher

Description

GEKO (Gradient-Efficient Knowledge Optimization) is a plug-and-play training framework that achieves 30-50% compute savings through intelligent sample selection. The framework introduces three core innovations:

1. Four-Bucket Partitioning: Classifies samples into FREEZE, LIGHT, FOCUS, and HARD buckets based on model confidence and correctness (a sketch of this assignment follows the list)
2. Mountain Curriculum: A non-monotonic Easy→Hard→Easy training progression that prevents catastrophic forgetting
3. Per-Sample Q-Value Learning: Tracks individual sample learnability over time, enabling dynamic bucket transitions
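
As a rough illustration of the bucket assignment, the sketch below classifies samples by softmax confidence and prediction correctness. The 0.8 threshold, the function name, and the exact semantics of the LIGHT and FOCUS buckets are assumptions made for illustration; they are not taken from the gekolib API or the paper.

```python
from enum import Enum

import torch
import torch.nn.functional as F


class Bucket(Enum):
    FREEZE = "freeze"  # confident and correct: can be skipped
    LIGHT = "light"    # correct but not confident: light updates (assumed)
    FOCUS = "focus"    # wrong and not confident: standard updates (assumed)
    HARD = "hard"      # confident but wrong: maximum learning signal


def partition_batch(logits: torch.Tensor,
                    labels: torch.Tensor,
                    conf_threshold: float = 0.8) -> list[Bucket]:
    """Assign each sample in a batch to one of the four buckets using
    softmax confidence and whether the prediction matches the label."""
    probs = F.softmax(logits, dim=-1)
    confidence, predictions = probs.max(dim=-1)
    correct = predictions.eq(labels)

    buckets = []
    for conf, ok in zip(confidence.tolist(), correct.tolist()):
        if ok and conf >= conf_threshold:
            buckets.append(Bucket.FREEZE)
        elif ok:
            buckets.append(Bucket.LIGHT)
        elif conf >= conf_threshold:
            buckets.append(Bucket.HARD)
        else:
            buckets.append(Bucket.FOCUS)
    return buckets
```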

The key insight is that samples where the model is confident but wrong (HARD bucket) provide the maximum learning signal, while confident and correct samples (FREEZE bucket) can be safely skipped. Just as LoRA revolutionized fine-tuning through parameter efficiency, GEKO revolutionizes training through sample efficiency.
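
The following minimal sketch shows how bucket-weighted losses and a coarse Easy→Hard→Easy schedule could be wired into a training loop. The per-bucket weights, the 25%/75% phase boundaries, and the function names are illustrative assumptions, not the gekolib API or values from the paper.

```python
import torch

# Hypothetical per-bucket loss weights; FREEZE gets weight 0, so
# confident-and-correct samples contribute no gradient.
BUCKET_WEIGHTS = {"FREEZE": 0.0, "LIGHT": 0.5, "FOCUS": 1.0, "HARD": 2.0}


def mountain_phase(step: int, total_steps: int) -> str:
    """Coarse Easy -> Hard -> Easy schedule; the phase boundaries here
    are illustrative, not taken from the paper."""
    frac = step / max(total_steps, 1)
    if frac < 0.25 or frac >= 0.75:
        return "easy"   # warm-up and cool-down favor easier samples
    return "hard"       # middle of training emphasizes FOCUS/HARD samples


def weighted_loss(per_sample_loss: torch.Tensor,
                  buckets: list[str]) -> torch.Tensor:
    """Scale each sample's loss by its bucket weight and average over the
    non-skipped samples."""
    weights = per_sample_loss.new_tensor([BUCKET_WEIGHTS[b] for b in buckets])
    return (weights * per_sample_loss).sum() / weights.sum().clamp(min=1.0)
```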

Implementation available at: https://github.com/ra2157218-boop/GEKO
PyPI: pip install gekolib

Files

geko_paper.pdf (643.2 kB)
md5:75e305947a45315db996d927f7113909

Additional details

Software

Repository URL: https://github.com/ra2157218-boop/GEKO
Programming language: Python
Development Status: Active