About Me
Hi, I’m Sid, a first-year PhD student in Computer Science at UCLA, advised by Professor Baharan Mirzasoleiman. I’m currently interested in understanding and exploiting data in multimodal learning and self-supervised learning (data pruning, dataset distillation, data poisoning, etc.).
Feel free to hit me up to talk about any of these topics!
In my free time, I like to write (https://medium.com/@sjoshi804), cook, play chess and run.
News
- May 2023: “Data Efficient Contrastive Learning: Easy Examples Contribute the Most” and “Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression” accepted to ICML 2023!
- February 2023: Preprint of “Data Efficient Contrastive Learning: Easy Examples Contribute the Most” posted on arXiv
- July 2022: “Low Rank Pruning via Output Perturbation” accepted at the Sparsity in Neural Networks Workshop
Publications
[1] Siddharth Joshi and Baharan Mirzasoleiman, Data Efficient Contrastive Learning: Easy Examples Contribute the Most, ICML 2023.
[2] Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen and Baharan Mirzasoleiman, Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression, ICML 2023.
[3] Siddharth Joshi*, Yuhan Liu* and Baharan Mirzasoleiman, Low Rank Pruning via Output Perturbation, Sparsity in Neural Networks Workshop, 2022.
* = equal contribution