2024


Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem (with Fivos Kalogiannis and Jingming Yan).
NeurIPS 2024 [Arxiv]

Polynomial Convergence of Bandit No-Regret Dynamics in Congestion Games (with Leello Dadi, Stratis Skoulakis, Luca Viano and Volkan Cevher).
WINE 2024 [Arxiv]

Time-Efficient Algorithms for Nash-Bargaining-Based Matching Market Models (with Thorben Tröbst and Vijay Vazirani).
WINE 2024 [Arxiv]

The Computational Complexity of Finding Second-Order Stationary Points
(with Andreas Kontogiannis, Vasilis Pollatos, Sotiris Kanellopoulos, Panayotis Mertikopoulos and Aris Pagourtzis).
ICML 2024 [OpenReview]

Last-iterate Convergence Separation between Extra-gradient and Optimism in Constrained Periodic Games
(with Yi Feng, Ping Li and Xiao Wang).
UAI 2024 [Arxiv]

Learning Nash equilibria in Rank-1 games
(with Nikolas Patris).
ICLR 2024 [OpenReview]

Beating Price of Anarchy and Gradient Descent without Regret in Potential Games
(with Iosif Sakos, Stefanos Leonardos, Stelios Stavroulakis, Will Overman and Georgios Piliouras).
ICLR 2024 [OpenReview]

Optimistic Policy Gradient in Multi-Player Markov Games with a Single Controller: Convergence Beyond the Minty Property (with Ioannis Anagnostides, Gabriele Farina and Tuomas Sandholm).
AAAI 2024 [Arxiv]

Computing Nash Equilibria in Potential Games with Private Uncoupled Constraints
(with Nikolas Patris, Stelios Stavroulakis, Fivos Kalogiannis and Rose Zhang).
AAAI 2024 [Arxiv]

2023


Exponential Lower Bounds for Fictitious Play in Potential Games
(with Nikolas Patris, Stratis Skoulakis and Volkan Cevher).
NeurIPS 2023 [Arxiv]

Zero-sum Polymatrix Markov Games: Equilibrium Collapse and Efficient Computation of Nash Equilibria (with Fivos Kalogiannis).
NeurIPS 2023 [Arxiv]

On the Last-iterate Convergence in Time-varying Zero-sum Games: Extra Gradient Succeeds where Optimism Fails (with Yi Feng, Hu Fu, Qun Hu, Ping Li, Bo Peng and Xiao Wang).
NeurIPS 2023 [Arxiv]

On the Convergence of No-Regret Learning Dynamics in Time-Varying Games
(with Ioannis Anagnostides, Gabriele Farina and Tuomas Sandholm).
NeurIPS 2023 [Arxiv]

Algorithms and Complexity for Computing Nash Equilibria in Adversarial Team Games
(with Ioannis Anagnostides, Fivos Kalogiannis, Manolis Vlatakis and Stephen McAleer).
EC 2023 [Arxiv]

Semi Bandit Dynamics in Congestion Games: Convergence to Nash Equilibrium and No-Regret Guarantees (with Stratis Skoulakis, Luca Viano, Xiao Wang and Volkan Cevher).
ICML 2023 (oral) [Arxiv]

Efficiently Computing Nash Equilibria in Adversarial Team Markov Games (with Fivos Kalogiannis, Ioannis Anagnostides, Manolis Vlatakis, Vaggos Chatziafratis and Stelios Stavroulakis).
ICLR 2023 (oral) [Arxiv]

Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games (with Fivos Kalogiannis and Manolis Vlatakis).
ICLR 2023 [Arxiv]

Mean estimation of truncated mixtures of two Gaussians: A gradient based approach
(with Sai Ganesh Nagarajan, Gerasimos Palaiopanos, Tushar Vaidya and Samson Yu).
AAAI 2023 [Link]

2022


On Scrambling Phenomena for Randomly Initialized Recurrent Networks
(with Vaggos Chatziafratis, Clayton Sanford and Stelios Stavroulakis).
NeurIPS 2022 [Arxiv]

Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games (with Ioannis Anagnostides, Gabriele Farina and Tuomas Sandholm).
NeurIPS 2022 [Arxiv]

On Last-Iterate Convergence Beyond Zero-Sum Games
(with Ioannis Anagnostides, Gabriele Farina and Tuomas Sandholm).
ICML 2022 [Arxiv]

Accelerated Multiplicative Weights Update Avoids Saddle Points almost always
(with Yi Feng and Xiao Wang).
IJCAI 2022 [Arxiv]

Independent Natural Policy Gradient Always Converges in Markov Potential Games
(with Roy Fox, Stephen McAleer and Will Overman).
AISTATS 2022 [Arxiv]

Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games
(with Stefanos Leonardos, Will Overman and Georgios Piliouras).
ICLR 2022 [Arxiv]

Frequency-Domain Representation of First-Order Methods: A Simple and Robust Framework of Analysis (with Ioannis Anagnostides).
SOSA 2022 [Arxiv]

2021


Convergence to Second-Order Stationarity for Non-negative Matrix Factorization: Provably and Concurrently (with Stratis Skoulakis, Antonios Varvitsiotis and Xiao Wang).
[Arxiv]

Last iterate convergence in no-regret learning: constrained min-max optimization for convex-concave landscapes (with Qi Lei, Sai Ganesh Nagarajan and Xiao Wang).
AISTATS 2021 [Arxiv]. This work was part of Qi Lei’s award-winning thesis.

Efficient Statistics for Sparse Graphical Models from Truncated Samples
(with Arnab Bhattacharya, Rathin Desai and Sai Ganesh Nagarajan).
AISTATS 2021 [Arxiv]

2020


Fast convergence of Langevin dynamics on manifold: Geodesics meet log-Sobolev
(with Xiao Wang and Qi Lei).
NeurIPS 2020 [Arxiv]

Better Depth-Width Trade-offs for Neural Networks through the lens of Dynamical Systems
(with Vaggos Chatziafratis and Sai Ganesh Nagarajan).
ICML 2020 [Arxiv]

Logistic regression with peer-group effects via inference in higher-order Ising models
(with Costis Daskalakis and Nishanth Dikkala).
AISTATS 2020 [Arxiv]

Depth-Width Trade-offs for ReLU Networks via Sharkovsky’s Theorem
(with Vaggos Chatziafratis, Sai Ganesh Nagarajan and Xiao Wang).
ICLR 2020 (spotlight) [Arxiv], [MIFODS Talk]

On the Analysis of EM for truncated mixtures of two Gaussians
(with Sai Ganesh Nagarajan).
ALT 2020 [Arxiv]

2019


First-order methods Almost Always Avoid Saddle Points: The case of Vanishing Step-sizes
(with Xiao Wang and Georgios Piliouras).
NeurIPS 2019 [Arxiv]

Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always
(with Georgios Piliouras and Xiao Wang).
ICML 2019 [Arxiv]

Regression from Dependent Observations
(with Costis Daskalakis and Nishanth Dikkala).
STOC 2019 [Arxiv]

First-order Methods Almost Always Avoid Saddle Points
(with Jason D. Lee, Georgios Piliouras, Max Simchowitz, Michael I. Jordan and Benjamin Recht).
Math. Programming 2019, issue on non-convex optimization for statistical learning. [Arxiv]
Have a look at this nice exposition of our work!

Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization
(with Costis Daskalakis).
ITCS 2019 [Arxiv], [Slides]

2018


The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization
(with Costis Daskalakis).
NeurIPS 2018 [Arxiv], [Poster]

Cycles in Zero Sum Differential Games and Biological Diversity
(with Tung Mai, Milena Mihail, Will Ratcliff, Vijay V. Vazirani and Peter Yunker).
EC 2018 [Arxiv], [Slides]

2017


Multiplicative Weights Update with Constant step-size in Congestion Games: Convergence, Limit Cycles and Chaos (with Gerasimos Palaiopanos and Georgios Piliouras).
NeurIPS 2017 (spotlight) [Arxiv], [Poster], [Video]

Opinion Dynamics in Networks: Convergence, Stability and Lack of Explosion
(with Tung Mai and Vijay V. Vazirani).
ICALP 2017 [Arxiv], [Slides]

Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions (with Georgios Piliouras).
ITCS 2017 [Arxiv], [Slides], [Video]

Mutation, Sexual Reproduction and Survival in Dynamic Environments
(with Ruta Mehta, Georgios Piliouras, Prasad Tetali and Vijay V. Vazirani).
ITCS 2017 [Arxiv]

Before 2017


The Computational Complexity of Genetic Diversity
(with Ruta Mehta, Georgios Piliouras and Sadra Yazdanbod).
ESA 2016 [Arxiv], [Slides]

Average Case Performance of Replicator Dynamics in Potential Games via Computing Regions of Attraction (with Georgios Piliouras).
EC 2016 [Arxiv]

Mixing Time of Markov Chains, Dynamical Systems and Evolution
(with Nisheeth K. Vishnoi).
ICALP 2016 [PDF]

Evolutionary Dynamics in finite populations mix rapidly
(with Piyush Srivastava and Nisheeth K. Vishnoi).
SODA 2016 [PDF], [Slides]

Natural Selection as an Inhibitor of Genetic Diversity: Multiplicative Weights Updates Algorithm and a Conjecture of Haploid Genetics (with Ruta Mehta and Georgios Piliouras).
ITCS 2015 [Arxiv], [Slides]

Support-theoretic subgraph preconditioners for large-scale SLAM
(with Yong-Dian Jian, Doru Balcan, Prasad Tetali and Frank Dellaert).
IROS 2013 [PDF]

Thesis

PhD: Evolutionary Markov Chains, potential games and optimization under the lens of dynamical systems. [PDF]