Paper Presentation

One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation

- By Fabian Paischer, PhD student, Johannes Kepler University

Read the Paper

This session explores a recent advance in fine-tuning methods introduced by Fabian Paischer and his team from Johannes Kepler University in their paper "One Initialization to Rule Them All: Fine-tuning via Explained Variance Adaptation" (EVA). With the rapid evolution of foundation models (FMs), the need for efficient, scalable fine-tuning methods has grown. EVA answers this need by extending LoRA with a data-driven initialization: it computes a singular value decomposition of minibatch activations and distributes ranks across weight matrices so as to maximize the explained variance, improving adaptation to downstream tasks across language processing, vision, and reinforcement learning. A minimal sketch of this idea follows the session outline below.

  • Introduction to EVA's Advantages: Learn how EVA’s approach improves upon traditional fine-tuning methods by maximizing efficiency and performance across diverse AI applications.
  • Real-World Applications and Use Cases: Explore the applicability of EVA in various domains, showcasing how it can be deployed to accelerate development cycles in AI projects.
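For attendees who want a concrete picture before the session, here is a minimal, hedged sketch of explained-variance-based initialization: take the activations entering a frozen layer, compute their SVD, and use the top right-singular vectors (the directions explaining the most variance) to initialize the LoRA down-projection. The layer sizes, rank, random stand-in data, and variable names below are illustrative assumptions, not the authors' exact implementation.

```python
import torch

torch.manual_seed(0)

d_in, d_out, rank = 768, 768, 8          # illustrative sizes, not from the paper
W = torch.randn(d_out, d_in)             # stand-in for a frozen pretrained weight
X = torch.randn(256, d_in)               # stand-in minibatch of activations entering the layer

# SVD of the activation matrix: rows of Vh are right-singular vectors,
# ordered by singular value, i.e. by how much variance they explain.
U, S, Vh = torch.linalg.svd(X, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()    # fraction of variance per component

A = Vh[:rank].clone()                    # LoRA "A" initialized from the top components (rank x d_in)
B = torch.zeros(d_out, rank)             # LoRA "B" starts at zero, as in standard LoRA

def lora_forward(x):
    # Frozen weight plus low-rank update; only A and B would be trained.
    return x @ W.T + (x @ A.T) @ B.T

print(f"variance explained by the rank-{rank} init: {explained[:rank].sum().item():.3f}")
```

The intuition is that starting the low-rank update in the directions that capture the most variance of the layer's inputs, rather than in random directions, gives fine-tuning a better-conditioned starting point; the paper additionally reallocates ranks across layers based on these explained-variance scores.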

Meet our Speaker:

Fabian Paischer

Fabian Paischer is a fourth-year PhD student at Johannes Kepler University in Linz, Austria, supervised by Sepp Hochreiter, the inventor of the LSTM. He is also an ELLIS PhD student co-supervised by Marc Deisenroth at University College London (UCL). During his PhD, Fabian completed a six-month research stay at UCL and a four-month internship at Meta, where he focused on generative modeling and in-context learning for sequential recommendation. His research lies at the intersection of deep reinforcement learning and natural language processing, with a recent emphasis on parameter-efficient fine-tuning techniques. Fabian is close to completing his PhD and will subsequently begin a postdoc position in the same lab.