Joshua P. Gardner
I am currently pursuing a PhD in computer science at the University of Washington's Paul G. Allen School of Computer Science & Engineering, where I am fortunate to be advised by Ludwig Schmidt and Zoran Popović. I hold an M.S. in Applied Statistics and an M.S. in Information Science from the University of Michigan, as well as a B.A. with Highest Honors in Philosophy, also from the University of Michigan.
My research focuses on empirical machine learning: characterizing the conditions under which modern machine learning models succeed and fail, and using that understanding to develop improved methods. My current research centers on training and fine-tuning large "foundation"-type models and empirically assessing their prediction, robustness, and generalization capabilities, with the aim of using these findings to select and design new methods that address the models' limitations. I have studied a diverse set of domains and applications under this general theme, including tabular and structured data; multimodal learning; music and audio; and federated and collaborative learning.
Previously, I was fortunate to spend Summer 2023 as a Research Scientist Intern at Spotify Research, building LLark. Before that, I spent just shy of two years as a Research Intern and Student Researcher on the Magenta team at Google DeepMind (formerly Google Brain), working on core machine learning problems in the music and audio domain, including MT3 (see additional publications here).
Awards and honors for my past work include a Best Paper Award at the International Conference on Learning Analytics and Knowledge (LAK), the Margaret Mann Award, the UMSI Professional Practice Fellowship, and the William K. Frankena Prize.
I am on the job market! I am seeking industry research roles that will allow me to contribute to high-impact ML/AI research. Please contact me using the info above if you think there may be a fit.
Selected Publications
For a full list of publications see my research page or Google Scholar profile.
-
LLark: A Multimodal Instruction-Following Language Model for Music
Josh Gardner, Simon Durand, Daniel Stoller, Rachel Bittner.
[arxiv] [code] [web] [blog]
-
Benchmarking Distribution Shift in Tabular Data with TableShift
Josh Gardner, Zoran Popović, Ludwig Schmidt.
Neural Information Processing Systems (NeurIPS) 2023 (Datasets & Benchmarks Track).
[arxiv] [code] [web]
-
Cross-Institutional Transfer Learning for Educational Models: Implications for Model Performance, Fairness, and Equity
Josh Gardner, Renzhe Yu, Quan Nguyen, Christopher Brooks, Rene Kizilcec.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) 2023.
[pdf] [arxiv] [code]
-
Subgroup Robustness Grows on Trees: An Empirical Baseline Study
Josh Gardner, Zoran Popović, Ludwig Schmidt.
Neural Information Processing Systems (NeurIPS) 2022.
[arxiv] [code]
-
OpenFlamingo: An Open-Source Framework for Training Vision-Language Models with In-Context Learning
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt.
[arxiv] [blog] [code]
-
MT3: Multi-Task Multitrack Music Transcription
Josh Gardner, Ian Simon, Ethan Manilow, Curtis Hawthorne, Jesse Engel.
International Conference on Learning Representations (ICLR) 2022.
Spotlight Presentation (top 6.7% of submissions)
[arxiv] [web] [blog] [code]
-
Evaluating the Fairness of Predictive Student Models Through Slicing Analysis
Josh Gardner, Christopher Brooks, Ryan Baker.
International Conference on Learning Analytics and Knowledge (LAK) 2019.
Best Paper Award
[pdf]