  1. GitHub - huggingface/peft: PEFT: State-of-the-art Parameter-Efficient ...

    Visit the PEFT organization to read about the PEFT methods implemented in the library and to see notebooks demonstrating how to apply these methods to a variety of downstream tasks.
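    The snippet points at the PEFT library's API without showing it. As a minimal sketch of the core entry point, wrapping a Hugging Face model with a LoRA adapter via LoraConfig and get_peft_model (the model name and target_modules below are illustrative placeholders, not taken from this result):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Any Hugging Face causal LM works; "gpt2" is just a small stand-in here.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's attention projection; varies by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```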

  2. Releases · huggingface/peft - GitHub

MiSS: @JL-er added a new PEFT method, MiSS (Matrix Shard Sharing), in #2604. This method is an evolution of Bone, which, according to our PEFT method comparison benchmark, gives excellent …

  3. peft/README.md at main · huggingface/peft · GitHub

    Visit the PEFT organization to read about the PEFT methods implemented in the library and to see notebooks demonstrating how to apply these methods to a variety of downstream tasks.

  4. GitHub - modelscope/ms-swift: Use PEFT or Full-parameter to …

    Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3, Qwen3-MoE, DeepSeek-R1, GLM4.5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, …

  5. GitHub - Vaibhavs10/fast-whisper-finetuning

    Cue: PEFT! With PEFT you can tackle this bottleneck head-on. PEFT methods such as Low Rank Adaptation (LoRA) fine-tune only a small number of (extra) model parameters while freezing most parameters of …
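    To make the freezing-plus-extra-parameters idea concrete, here is a minimal sketch of LoRA in plain PyTorch (an illustration of the concept, not the PEFT library's implementation): the pretrained weight stays frozen and only a low-rank update B·A is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of LoRA: freeze the base linear layer, train a low-rank delta."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weights
        # Trainable low-rank factors; B starts at zero so the wrapped layer
        # initially reproduces the pretrained behavior exactly.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update x @ A^T @ B^T
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```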

  6. GitHub - ashishpatel26/LLM-Finetuning: LLM Finetuning with peft

    Welcome to the PEFT (Parameter-Efficient Fine-Tuning) project repository! This project focuses on efficiently fine-tuning large language models using LoRA and Hugging Face's transformers library.

  7. Cannot import name 'EncoderDecoderCache' from 'transformers'

    Dec 21, 2024 · @Huang-jia-xuan did this solve your issue? I also ran into the same issue today while using PEFT. This was my fix:
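    The snippet truncates before the poster's actual fix, which stays unknown here. A common cause of this error (an assumption, not necessarily what the poster did) is an installed transformers that predates EncoderDecoderCache, so upgrading transformers resolves the import; a defensive guard might look like:

```python
try:
    from transformers import EncoderDecoderCache  # present only in newer transformers releases
except ImportError as err:
    raise ImportError(
        "This peft version needs a newer transformers that provides "
        "EncoderDecoderCache; try: pip install -U transformers"
    ) from err
```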

  8. Optimum-Benchmark 🏋️ - GitHub

    🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations & quantization schemes.

  9. GitHub - Joluck/RWKV-PEFT

    RWKV-PEFT is the official implementation for efficient parameter fine-tuning of RWKV models, supporting various advanced fine-tuning methods across multiple hardware platforms.

  10. PEFT - GitHub

    Visit the PEFT organization to read about the PEFT methods implemented in the library and to see notebooks demonstrating how to apply these methods to a variety of downstream tasks.