LLM Fine-tuning

What is fine-tuning

Why LLMs need fine-tuning

In-context learning methods

  • zero-shot: creating an initial prompt that states the task to be completed but includes no examples
  • one/few-shot: creating an initial prompt that states the task to be completed and includes one or a few example questions with answers (see the prompt sketch below)
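
A minimal sketch contrasting the two prompt styles, using plain Python strings; the review text and labels are made up for illustration, and either prompt would be sent to whatever LLM completion endpoint is in use:

```python
# Zero-shot: the prompt states the task but gives no examples.
zero_shot_prompt = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# One/few-shot: the same task statement, preceded by worked examples.
few_shot_prompt = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: I love how light this laptop is.\n"
    "Sentiment: positive\n"
    "Review: The screen cracked within a week.\n"
    "Sentiment: negative\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Either prompt is sent to the model as-is; no weights are updated,
# which is what distinguishes in-context learning from fine-tuning.
print(zero_shot_prompt)
print(few_shot_prompt)
```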

Types of LLM fine-tuning

Full fine-tuning

LLM instruction fine-tuning

Parameter efficient fine-tuning (PEFT)

Catastrophic forgetting

  • = a machine learning model forgets previously learned information as it learns new information
  • it happens because fine-tuning can significantly improve a model's performance on a specific task, but at the cost of reduced ability on other tasks
  • how to avoid it
    • fine-tune on multiple tasks
    • consider parameter efficient fine-tuning (PEFT); a minimal LoRA sketch follows this list
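
A minimal sketch of the PEFT route, assuming the Hugging Face transformers and peft libraries; only the added LoRA adapter weights are trained while the base model stays frozen, which limits how far fine-tuning can drift from the original behaviour:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model; "gpt2" is only an illustrative choice of checkpoint.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: add small low-rank adapter matrices to the attention projections
# and train only those, leaving the original weights frozen.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% trainable
```

Because the base weights are untouched, the adapter can be dropped or swapped per task, which is one reason PEFT is commonly suggested as a way to limit catastrophic forgetting.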

PEFT techniques

LLM evaluation

Metrics

Evaluation benchmarks