I am a Ph.D. student in Electrical and Computer Engineering at UC Santa Barbara, where I am fortunate to be advised by Prof. Christos Thrampoulidis. My research interests are broadly in statistical learning theory, optimization and high-dimensional statistics.
Email: myfirstname at ucsb.edu
(Mar 24) Our work on the dynamics of multi-head attention has been accepted at TMLR. Congratulations to my co-authors!
(Feb 24) I received the dissertation fellowship from the ECE department at UCSB.
(Jan 24) "Generalization and Stability of Neural Networks with Minimal Width" is accepted (pending minor revision) in JMLR!
(Nov 23) Happy to be named a Top Reviewer for NeurIPS 2023.
(Oct 23) New preprint on the attention mechanism.
(Jun 23) Our paper on the asymptotic behavior of adversarial training has been accepted at IEEE Transactions on Neural Networks and Learning Systems.
(Jun 23) Two papers have been accepted at the ICML Workshop on High-dimensional Learning Dynamics.
(Mar 23) Excited about our recent paper achieving minimum width for optimal generalization in interpolating neural networks.
(Jan 23) Our paper "On Generalization of Decentralized Learning with Separable Data" has been accepted at AISTATS 2023.
(Nov 22) Our paper "Fast Convergence in Learning Two-layer Neural Networks with Separable Data" has been accepted at AAAI 2023.
(Nov 22) I will present my paper on decentralized learning with separable data at the NeurIPS OPT22 Workshop on Optimization for Machine Learning.
On the Optimization and Generalization of Multi-head Attention [arXiv]
P. Deora*, R. Ghaderi*, H. Taheri*, C. Thrampoulidis
Preprint
Generalization and Stability of Interpolating Neural Networks with Minimal Width [arXiv]
H. Taheri, C. Thrampoulidis
Accepted (pending minor revision) at the Journal of Machine Learning Research (JMLR).
Asymptotic Behavior of Adversarial Training in Binary Linear Classification [arXiv]
H. Taheri, R. Pedarsani, C. Thrampoulidis
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS).
A short version appeared at ISIT 2022.
On Generalization of Decentralized Learning with Separable Data [arXiv]
H. Taheri, C. Thrampoulidis
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
Fast Convergence in Learning Two-layer Neural Networks with Separable Data [arXiv]
H. Taheri, C. Thrampoulidis
AAAI Conference on Artificial Intelligence (AAAI), 2023.
Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in High Dimensions [arXiv]
H. Taheri, R. Pedarsani, C. Thrampoulidis
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
Quantized Decentralized Stochastic Learning over Directed Graphs [arXiv]
H. Taheri, A. Mokhtari, H. Hassani, R. Pedarsani
International Conference on Machine Learning (ICML), 2020.
Sharp Asymptotics and Optimal Performance for Inference in Binary Models [arXiv]
H. Taheri, R. Pedarsani, C. Thrampoulidis
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
A longer version of this paper, merged with an ISIT paper, appeared in the journal Entropy.
Sharp Guarantees for Solving Random Equations with One-Bit Information [pdf]
H. Taheri, R. Pedarsani, C. Thrampoulidis
Allerton Conference on Communication, Control, and Computing, 2019.
Robust and Communication-Efficient Collaborative Learning [arXiv]
A. Reisizadeh, H. Taheri, A. Mokhtari, H. Hassani, R. Pedarsani
Neural Information Processing Systems (NeurIPS), 2019.
Electrical and Computer Engineering, Ph.D. (2018-present)
UC Santa Barbara, CA, USA
Electrical Engineering and Mathematics, B.Sc. (2013-2018)
Sharif University of Technology, Tehran, Iran
Machine Learning, Sharif University of Technology, EE Department
Abstract Algebra, Sharif University of Technology, Math Department
Engineering Mathematics, Sharif University of Technology, EE Department
Principles of Electrical Engineering, Sharif University of Technology, EE Department
Signal Analysis, UC Santa Barbara, ECE Department