Course Outline

Introduction to Parameter-Efficient Fine-Tuning (PEFT)

  • Exploring the motivation for PEFT and the limitations of full fine-tuning.
  • Overview of PEFT: key objectives and benefits.
  • Real-world applications and industry use cases.

LoRA (Low-Rank Adaptation)

  • Conceptual framework and intuition behind LoRA.
  • Implementing LoRA with Hugging Face and PyTorch (see the sketch after this list).
  • Practical exercise: Fine-tuning a model using LoRA.
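
To make the implementation bullet concrete, the following is a minimal sketch using the Hugging Face peft library. The base model (facebook/opt-125m), rank, scaling, and target module names are illustrative assumptions for demonstration, not settings prescribed by the course.

```python
# A minimal LoRA sketch with Hugging Face peft; the model name and
# hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # assumed base model

# LoRA freezes the pretrained weight W and learns a low-rank update B @ A,
# so the effective weight becomes W + (lora_alpha / r) * B @ A.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the update matrices
    lora_alpha=16,                         # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in OPT
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA matrices require grad
```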

Adapter Tuning

  • Understanding how adapter modules function (a minimal module sketch follows this list).
  • Integration with transformer-based architectures.
  • Practical exercise: Applying Adapter Tuning to a transformer model.
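
Below is a minimal sketch of a bottleneck adapter module in plain PyTorch; the hidden size and bottleneck width are illustrative assumptions. In adapter tuning, small modules like this are inserted after the attention and feed-forward sublayers while the transformer's own weights stay frozen.

```python
# A minimal bottleneck adapter sketch; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Down-project, apply a nonlinearity, up-project, add a residual."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection lets the adapter start close to identity,
        # so the frozen transformer's behavior is preserved early in training.
        return x + self.up(self.act(self.down(x)))

# Only adapter parameters train; the surrounding transformer stays frozen.
adapter = Adapter()
hidden_states = torch.randn(2, 16, 768)  # (batch, seq_len, hidden)
out = adapter(hidden_states)
```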

Prefix Tuning

  • Utilizing trainable soft prompts for fine-tuning while the base model's weights stay frozen (see the sketch after this list).
  • Evaluating strengths and limitations relative to LoRA and adapters.
  • Practical exercise: Executing Prefix Tuning on an LLM task.
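
A minimal prefix-tuning sketch using the Hugging Face peft library follows; the base model and prefix length are illustrative assumptions.

```python
# A minimal Prefix Tuning sketch with Hugging Face peft; model name and
# prefix length are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # assumed base model

# Prefix tuning learns continuous "virtual token" vectors that are prepended
# to the keys and values at every attention layer; the model itself is frozen.
config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # length of the trainable prefix
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the prefix parameters train
```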

Evaluating and Comparing PEFT Methods

  • Key metrics for assessing performance and efficiency (a parameter-count helper is sketched after this list).
  • Trade-offs involving training speed, memory consumption, and accuracy.
  • Conducting benchmarking experiments and interpreting results.
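
One of the efficiency metrics above, the fraction of parameters that actually train under a given PEFT method, can be computed with a small helper like the sketch below; the function name is ours, not a library API.

```python
# A minimal helper for one efficiency metric: the share of parameters
# that require gradients under a given PEFT method.
import torch.nn as nn

def trainable_fraction(model: nn.Module) -> float:
    """Return trainable parameters as a fraction of all parameters."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total

# Usage: call trainable_fraction on each fine-tuned model and pair the result
# with wall-clock training time and peak GPU memory
# (e.g. torch.cuda.max_memory_allocated()) for a fuller comparison.
```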

Deploying Fine-Tuned Models

  • Procedures for saving and loading fine-tuned models (see the sketch after this list).
  • Considerations for deploying PEFT-based models.
  • Seamless integration into applications and data pipelines.
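
The save/load procedure can be sketched with the Hugging Face peft library as below; the adapter path and base model name are illustrative assumptions.

```python
# A minimal save/load sketch for a PEFT adapter; paths and model name
# are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, TaskType, get_peft_model

# Saving: save_pretrained on a PEFT model writes only the small adapter
# weights, not the full base model.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # assumed base model
peft_model = get_peft_model(base, LoraConfig(task_type=TaskType.CAUSAL_LM, r=8))
peft_model.save_pretrained("my-lora-adapter")  # assumed output path

# Loading for deployment: reload the frozen base, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "my-lora-adapter")

# Optionally fold the adapter into the base weights so inference needs no
# extra modules (supported for LoRA adapters).
model = model.merge_and_unload()
```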

Best Practices and Extensions

  • Combining PEFT with quantization and distillation techniques (a quantized-loading sketch follows this list).
  • Application in low-resource and multilingual contexts.
  • Emerging trends and active research areas.
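
As one example of combining PEFT with quantization, the QLoRA-style sketch below loads the frozen base model in 4-bit precision and trains only LoRA matrices on top. The model name and settings are illustrative assumptions, and running it requires a CUDA GPU with the bitsandbytes package installed.

```python
# A minimal QLoRA-style sketch: 4-bit base weights plus LoRA adapters.
# Model name and settings are illustrative assumptions; requires a CUDA GPU
# and the bitsandbytes package.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for computation
)

# The frozen base weights are stored in 4-bit; only the LoRA matrices train
# in higher precision, cutting memory substantially versus 16-bit tuning.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m", quantization_config=bnb_config  # assumed base model
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(task_type=TaskType.CAUSAL_LM, r=8))
model.print_trainable_parameters()
```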

Summary and Next Steps

Requirements

  • A foundational understanding of machine learning principles
  • Practical experience working with large language models (LLMs)
  • Proficiency in Python and PyTorch

Target Audience

  • Data scientists
  • AI engineers

Duration

  • 14 Hours
