Efficient Deep Learning Reading Group (Zhihu link)

Are you interested in efficient deep learning but find it hard to keep up with the latest research? Join our Efficient Deep Learning Reading Group! We read and discuss influential papers in deep learning, with an emphasis on efficiency and practical applications. By joining our group, you'll have the opportunity to:

  • Stay up-to-date with the latest research in efficient deep learning, without having to spend countless hours sifting through papers on your own.

  • Engage in thoughtful and productive discussions with other deep learning enthusiasts, sharing your insights and learning from others.

  • Develop a deeper understanding of the key concepts and techniques in deep learning, and how they can be applied in real-world scenarios.

  • Connect with like-minded people and build lasting relationships with others interested in related fields.

Our group has met weekly via Zoom since Feb 26, 2023; sessions run for about 60 minutes every Sunday at 9 PM (EST). We welcome participants of all backgrounds and experience levels, as long as you have a basic understanding of deep learning fundamentals. To ensure a high-quality experience for all members, we ask that you commit to attending regularly and participating actively in discussions. If you're interested in joining, please fill out the application form (Zhihu link). We look forward to hearing from you!

Members:
Wenjin Zhang
Qizhen Ding
Jiechao Gao
Ye Tao
Lujun Li
Yang Zheng
Xiaotian Dou
Jianwei Li
Yu Wu
Xin Huang
Chengyuan Deng
Weizhao Jin
Xudong Wang
Zexing Xu
Yuhang Yao
Rulin Shao
Ruichao Li
Nianyi Wang
Xiang Pan
Pengcheng Wang
Kai Zhang
Bowen Lei
Dan Liu
Yanfeng Qu
Chen Liu
Bin Hu
Huan Wang
Shan Xue
Qi Sun
Junfeng Guo
Yingcong Li
Chengyu Dong
Rui Wang

Organizer:
Yang Sui

02/26/2023

SPDY: Accurate Pruning with Speedup Guarantees, ICML'22.
Elias Frantar, Dan Alistarh

Presenter: Yang Sui
Slides

Sparse Double Descent: Where Network Pruning Aggravates Overfitting, ICML'22.
Zheng He, Zeke Xie, Quanzhi Zhu, Zengchang Qin

Presenter: Yang Sui
Slides

CHEX: CHannel EXploration for CNN Model Compression, CVPR'22.
Zejiang Hou, Minghai Qin, Fei Sun, Xiaolong Ma, Kun Yuan, Yi Xu, Yen-Kuang Chen, Rong Jin, Yuan Xie, Sun-Yuan Kung

Presenter: Yang Sui
Slides

03/05/2023

Consistent Estimators for Learning to Defer to an Expert, ICML'20.
Hussein Mozannar, David Sontag

Presenter: Yu Wu

EasyQuant: Post-training Quantization via Scale Optimization, arXiv'20.
Di Wu, Qi Tang, Yongle Zhao, Ming Zhang, Ying Fu, Debing Zhang

Presenter: Ruichao Li

Hermes: An Efficient Federated Learning Framework for Heterogeneous Mobile Clients, MobiCom'21.
Ang Li, Jingwei Sun, Pengcheng Li, Yu Pu, Hai Li, Yiran Chen

Presenter: Bin Hu

One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers, NeurIPS'19.
Ari S. Morcos, Haonan Yu, Michela Paganini, Yuandong Tian

Presenter: Qizhen Ding

DepGraph: Towards Any Structural Pruning, CVPR'23.
Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang

Presenter: Chengyuan Deng

03/12/2023

Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning, NeurIPS'22.
Elias Frantar, Sidak Pal Singh, Dan Alistarh

Presenter: Jianwei Li
Slides

Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS'22.
Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari S. Morcos

Presenter: Wenjin Zhang
Slides

HYDRA: Pruning Adversarially Robust Neural Networks, NeurIPS'20.
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana

Presenter: Junfeng Guo

03/19/2023

Stitchable Neural Networks, CVPR'23.
Zizheng Pan, Jianfei Cai, Bohan Zhuang

Presenter: Lujun Li
Slides

Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning, arXiv'23.
Huan Wang, Can Qin, Yue Bai, Yun Fu

Presenter: Huan Wang
Slides

AdderNet: Do We Really Need Multiplications in Deep Learning?, CVPR'20.
Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

Presenter: Qi Sun
Slides

04/02/2023

FedSEA: A Semi-Asynchronous Federated Learning Framework for Extremely Heterogeneous Devices, SenSys'22.
Jingwei Sun, Ang Li, Lin Duan, Samiul Alam, Xuliang Deng, Xin Guo, Haiming Wang, Maria Gorlatova, Mi Zhang, Hai Li, Yiran Chen

Presenter: Jiechao Gao

Reversible Vision Transformers, CVPR'22.
Karttikeya Mangalam, Haoqi Fan, Yanghao Li, Chao-Yuan Wu, Bo Xiong, Christoph Feichtenhofer, Jitendra Malik

Presenter: Kai Zhang
Slides

Towards Efficient 3D Object Detection with Knowledge Distillation, NeurIPS'22.
Jihan Yang, Shaoshuai Shi, Runyu Ding, Zhe Wang, Xiaojuan Qi

Presenter: Pengcheng Wang
Slides

04/09/2023

Accelerating Dataset Distillation via Model Augmentation, CVPR'23.
Lei Zhang, Jie Zhang, Bowen Lei, Subhabrata Mukherjee, Xiang Pan, Bo Zhao, Caiwen Ding, Yao Li, Dongkuan Xu

Presenter: Bowen Lei
Slides

Scaling Distributed Machine Learning with In-Network Aggregation, NSDI'21.
Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan Ports, Peter Richtarik

Presenter: Yanfeng Qu
Slides

04/16/2023

Does Knowledge Distillation Really Work?, NeurIPS'21.
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson

Presenter: Chengyu Dong

FedGCN: Convergence and Communication Tradeoffs in Federated Training of Graph Convolutional Networks, arXiv'22.
Yuhang Yao, Weizhao Jin, Srivatsan Ravi, Carlee Joe-Wong

Presenter: Yuhang Yao
Slides

04/30/2023

Towards a Unified View of Parameter-Efficient Transfer Learning, ICLR'22.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig

Presenter: Xiang Pan

Random Feature Attention, ICLR'21.
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong

Presenter: Yingcong Li

BatchCrypt: Efficient Homomorphic Encryption for Cross-Silo Federated Learning, ATC'20.
Chengliang Zhang, Suyi Li, Junzhe Xia, Wei Wang

Presenter: Weizhao Jin

05/14/2023

SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models, arXiv'22.
Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han

Presenter: Yang Sui

RPTQ: Reorder-based Post-training Quantization for Large Language Models, arXiv'23.
Zhihang Yuan, Lin Niu, Jiawei Liu, Wenyu Liu, Xinggang Wang, Yuzhang Shang, Guangyu Sun, Qiang Wu, Jiaxiang Wu, Bingzhe Wu

Presenter: Yang Sui

Black-Box Tuning for Language-Model-as-a-Service, ICML'22.
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu

Presenter: Rui Wang



*Last updated in 2023*
Inspired by Jon Barron