Sidford received his PhD from the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where he was advised by Professor Jonathan Kelner.
ReSQueing Parallel and Private Stochastic Convex Optimization.
We will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.
With Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, and David P. Woodruff.
2022 - current: Assistant Professor, Georgia Institute of Technology (Georgia Tech). 2022: Visiting researcher, Max Planck Institute for Informatics.
Navajo Math Circles Instructor.
I am a senior researcher in the Algorithms group at Microsoft Research Redmond.
I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.
(ACM Doctoral Dissertation Award, Honorable Mention.)
"Team-convex-optimization for solving discounted and average-reward MDPs!"
Yin Tat Lee and Aaron Sidford. Conference on Learning Theory (COLT), 2022, RECAPP: Crafting a More Efficient Catalyst for Convex Optimization.
Publications and Preprints.
... dynamic algorithms (i.e., data structures) that maintain properties of dynamically changing graphs and matrices -- such as distances in a graph, or the solution of a linear system. [pdf] [talk] [poster]
Before attending Stanford, I graduated from MIT in May 2018.
"Sample complexity for average-reward MDPs?"
The Journal of Physical Chemistry, 2015. pdf, Annie Marsden.
with Yair Carmon, Kevin Tian and Aaron Sidford.
We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such problems. We also provide two ...
Neural Information Processing Systems (NeurIPS, Spotlight), 2019, Variance Reduction for Matrix Games.
The Complexity of Infinite-Horizon General-Sum Stochastic Games, With Yujia Jin, Vidya Muthukumar, Aaron Sidford, To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv)
Optimal and Adaptive Monteiro-Svaiter Acceleration, With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv)
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood, With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur
Improved Lower Bounds for Submodular Function Minimization, With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang, In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv)
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization, With Yair Carmon, Arun Jambulapati, and Yujia Jin, International Conference on Machine Learning (ICML 2022) (arXiv)
Efficient Convex Optimization Requires Superlinear Memory, With Annie Marsden, Vatsal Sharan, and Gregory Valiant, Conference on Learning Theory (COLT 2022)
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method, Conference on Learning Theory (COLT 2022) (arXiv)
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales, With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching, With Arun Jambulapati, Yujia Jin, and Kevin Tian, International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv)
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary, With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers, With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng, In Symposium on Theory of Computing (STOC 2022) (arXiv)
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space, With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian, In Symposium on Discrete Algorithms (SODA 2022) (arXiv)
Algorithmic trade-offs for girth approximation in undirected graphs, With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick, In Symposium on Discrete Algorithms (SODA 2022)
Computing Lewis Weights to High Precision, With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan
With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin, In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv)
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss, In Conference on Learning Theory (COLT 2021) (arXiv)
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood, With Nima Anari, Moses Charikar, and Kirankumar Shiragur
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs, In International Conference on Machine Learning (ICML 2021) (arXiv)
Minimum cost flows, MDPs, and ℓ1-regression in nearly linear time for dense instances, With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Theory of Computing (STOC 2021) (arXiv)
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers, In Symposium on Discrete Algorithms (SODA 2021) (arXiv)
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration, In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv)
Acceleration with a Ball Optimization Oracle, With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian, In Conference on Neural Information Processing Systems (NeurIPS 2020)
Instance Based Approximations to Profile Maximum Likelihood, In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv)
Large-Scale Methods for Distributionally Robust Optimization, With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution)
High-precision Estimation of Random Walks in Small Space, With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan, In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv)
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs, With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Foundations of Computer Science (FOCS 2020)
With Yair Carmon, Yujia Jin, and Kevin Tian, Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time, Invited to the special issue (arXiv before merge)
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity, In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv)
Efficiently Solving MDPs with Stochastic Mirror Descent, In International Conference on Machine Learning (ICML 2020) (arXiv)
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond, With Oliver Hinder and Nimit Sharad Sohoni, In Conference on Learning Theory (COLT 2020) (arXiv)
Solving Tall Dense Linear Programs in Nearly Linear Time, With Jan van den Brand, Yin Tat Lee, and Zhao Song, In Symposium on Theory of Computing (STOC 2020)
Many of my results use fast matrix multiplication.
I was fortunate to work with Prof. Zhongzhi Zhang.
Here are some lecture notes that I have written over the years.
This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications (a generic sketch of such a gradient-only accelerated step appears below).
[pdf] [poster] With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
2019 (and hopefully 2022 onwards, Covid permitting). For more information please watch this and please consider donating here!
resume/cv; publications.
In September 2018, I started a PhD at Stanford University in mathematics, and am advised by Aaron Sidford.
SODA 2023: 4667-4767.
Email: sidford@stanford.edu.
I am a fifth-and-final-year PhD student in the Department of Management Science and Engineering at Stanford in the Operations Research group.
Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian.
In Sidford's dissertation, Iterative Methods, Combinatorial ...
with Vidya Muthukumar and Aaron Sidford.
Contact: dwoodruf (at) cs (dot) cmu (dot) edu or dpwoodru (at) gmail (dot) com. CV (updated July, 2021).
Yu Gao, Yang P. Liu, Richard Peng, Faster Divergence Maximization for Faster Maximum Flow, FOCS 2020.
Our method improves upon the convergence rate of previous state-of-the-art linear programming algorithms.
Aaron Sidford is part of Stanford Profiles, the official site for faculty, postdoc, student, and staff information (Expertise, Bio, Research, Publications, and more).
"Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\)."
Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford; 18(223):1-42, 2018.
I am generally interested in algorithms and learning theory, particularly developing algorithms for machine learning with provable guarantees.
Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness.
Many of these algorithms are iterative and solve a sequence of smaller subproblems, whose solution can be maintained via the aforementioned dynamic algorithms.
2013.
with Yair Carmon, Arun Jambulapati, Qijia Jiang, Yin Tat Lee, Aaron Sidford and Kevin Tian. In submission.
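The "accelerated gradient method ... that is Hessian free" mentioned above refers to a scheme that uses only gradient evaluations. Purely as a point of reference — this is the classical Nesterov accelerated gradient iteration, not the nonconvex method from that work — here is a minimal sketch; the gradient oracle `grad_f`, smoothness constant `L`, and the quadratic test problem are illustrative placeholders:

```python
import numpy as np

def accelerated_gradient(grad_f, x0, L, num_iters=1000):
    """Classical Nesterov accelerated gradient descent (gradient-only).

    grad_f: callable returning the gradient of the objective at a point.
    x0: starting point (numpy array).
    L: smoothness constant; 1/L is used as the step size.
    Textbook background only; not the accelerated nonconvex method
    referenced in the surrounding text.
    """
    x = x0.copy()
    y = x0.copy()          # extrapolated ("momentum") point
    t = 1.0                # momentum parameter sequence
    for _ in range(num_iters):
        x_next = y - grad_f(y) / L                          # gradient step at y
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)    # extrapolation step
        x, t = x_next, t_next
    return x

# Example: minimize the smooth quadratic f(x) = ||Ax - b||^2 / 2, with L = ||A^T A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)
x_min = accelerated_gradient(grad, np.zeros(2), L=np.linalg.norm(A.T @ A, 2))
```

The appeal of such gradient-only updates, as the sentence above emphasizes, is that no Hessian is ever formed or factored, which keeps the per-iteration cost suitable for large-scale problems.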
Symposium on Foundations of Computer Science (FOCS), 2020, Efficiently Solving MDPs with Stochastic Mirror Descent (a background sketch of the discounted-MDP setting appears below).
"A general continuous optimization framework for better dynamic (decremental) matching algorithms."
I received my PhD from the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where I was advised by Professor Jonathan Kelner.
Overview: This class will introduce the theoretical foundations of discrete mathematics and algorithms.
Conference on Learning Theory (COLT), 2021, Towards Tight Bounds on the Sample Complexity of Average-reward MDPs.
I am broadly interested in mathematics and theoretical computer science.
In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on.
With Cameron Musco and Christopher Musco.
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022, Stochastic Bias-Reduced Gradient Methods.
Aleksander Mądry; Generalized preconditioning and network flow problems. My CV.
I hope you enjoy the content as much as I enjoyed teaching the class, and if you have questions or feedback on the notes, feel free to email me.
International Conference on Machine Learning (ICML), 2020, Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
David P. Woodruff. CoRR abs/2101.05719 (2021).
This is the academic homepage of Yang Liu (I publish under Yang P. Liu).
[pdf] [slides] [name] = yangpliu
Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence
Maximum Flow and Minimum-Cost Flow in Almost Linear Time
Online Edge Coloring via Tree Recurrences and Correlation Decay
Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao
Discrepancy Minimization via a Self-Balancing Walk
Faster Divergence Maximization for Faster Maximum Flow
The design of algorithms is traditionally a discrete endeavor.
Annie Marsden, Vatsal Sharan, Aaron Sidford, and Gregory Valiant, Efficient Convex Optimization Requires Superlinear Memory.
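Several of the MDP items above (Efficiently Solving MDPs with Stochastic Mirror Descent; Towards Tight Bounds on the Sample Complexity of Average-reward MDPs) concern solving Markov decision processes. Purely as background for the discounted setting — this is textbook value iteration, not the mirror-descent or sample-complexity methods from those papers — here is a minimal sketch; the transition tensor `P`, reward matrix `R`, and discount `gamma` are made-up placeholders:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Classical value iteration for a discounted MDP.

    P: transition probabilities, shape (S, A, S); P[s, a, s'] = Pr(s' | s, a).
    R: rewards, shape (S, A).
    Returns the (approximately) optimal values and a greedy policy.
    """
    v = np.zeros(P.shape[0])
    while True:
        q = R + gamma * (P @ v)            # Bellman backup: Q[s, a]
        v_next = q.max(axis=1)
        if np.max(np.abs(v_next - v)) < tol:
            return v_next, q.argmax(axis=1)
        v = v_next

# Tiny 2-state, 2-action example with made-up numbers.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
values, policy = value_iteration(P, R)
```

Value iteration contracts at rate gamma per sweep; the papers above study faster and more sample-efficient alternatives, which this sketch does not attempt.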
Selected articles (Google Scholar):
Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow
Accelerated methods for nonconvex optimization
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
A faster cutting plane method and its implications for combinatorial and convex optimization
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems
A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
Uniform sampling for matrix approximation
Near-optimal time and sample complexities for solving Markov decision processes with a generative model
Single pass spectral sparsification in dynamic streams
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
Accelerating stochastic gradient descent for least squares regression
Efficient inverse maintenance and faster algorithms for linear programming
Lower bounds for finding stationary points I
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
Competing with the empirical risk minimizer in a single pass
Variance reduced value iteration and faster algorithms for solving Markov decision processes
Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation
Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Aaron Sidford, and Adrian Vladu. Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. DOI: 10.1109/FOCS.2016.69, Corpus ID: 3311.
"General variance reduction framework for solving saddle-point problems & Improved runtimes for matrix games."
Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva, Online Edge Coloring via Tree Recurrences and Correlation Decay, STOC 2022.
Research Interests: My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms. [pdf] [talk]
I am currently a third-year graduate student in EECS at MIT working under the wonderful supervision of Ankur Moitra.
Publications by categories in reversed chronological order.
... to be advised by Prof. Dongdong Ge.
Earlier publication venues include ALT 2020, SODA 2020, NeurIPS 2019, FOCS 2019, COLT 2019, STOC 2019, SODA 2019, NeurIPS 2018, FOCS 2018, COLT 2018, SODA 2018, and ITCS 2018.
Research interests: data streams, machine learning, numerical linear algebra, sketching, and sparse recovery.
About Me.
Email: [name]@stanford.edu
The paper, Efficient Convex Optimization Requires Superlinear Memory, was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan.
To appear in Innovations in Theoretical Computer Science (ITCS), 2022, Optimal and Adaptive Monteiro-Svaiter Acceleration. [pdf]
Allen Liu.
"Streaming matching (and optimal transport) in \(\tilde{O}(1/\epsilon)\) passes and \(O(n)\) space."
Annie Marsden.
Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)!
Multicalibrated Partitions for Importance Weights, Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder, ALT, 2022, arXiv.
From 2016 to 2018, I also worked in ...
Roy Frostig, Sida Wang, Percy Liang, Chris Manning.
Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient).
Here is a slightly more formal third-person biography, and here is a recent-ish CV.
ICML, 2016.
Aaron Sidford is an assistant professor in the Department of Management Science and Engineering and the Department of Computer Science at Stanford University.
My research focuses on the design of efficient algorithms based on graph theory, convex optimization, and high dimensional geometry (CV).
In Symposium on Discrete Algorithms (SODA 2018) (arXiv), Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes, Efficient Õ(n/ε) Spectral Sketches for the Laplacian and its Pseudoinverse, Stability of the Lanczos Method for Matrix Function Approximation.
"Improves stochastic convex optimization in the parallel and DP (differentially private) settings."
Done under the mentorship of M. Malliaris.
A small tool to obtain upper bounds of such algebraic algorithms.
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
2021 - 2022: Postdoc, Simons Institute & UC ...
If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together.
I am fortunate to be advised by Aaron Sidford.
"Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains" (see the background sketch below).
Aaron Sidford.
Neural Information Processing Systems (NeurIPS, Oral), 2019, A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions.
My research was supported by the National Defense Science and Engineering Graduate (NDSEG) Fellowship from 2018-2021, and by a Google PhD Fellowship from 2022-2023.
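The blurb above about "variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains" refers to computing approximate saddle points of min_x max_y x^T A y with x and y ranging over probability simplices. As background only — this is plain simultaneous multiplicative weights (entropic mirror descent), not the variance-reduced or coordinate methods in question — here is a minimal sketch; the payoff matrix `A`, step size `eta`, and iteration count are illustrative placeholders:

```python
import numpy as np

def matrix_game_mwu(A, iters=2000, eta=0.05):
    """Approximate saddle point of min_x max_y x^T A y over probability simplices.

    Simultaneous multiplicative-weights (exponentiated-gradient) updates;
    the averaged iterates form an approximate equilibrium. Background
    illustration only.
    """
    m, n = A.shape
    x = np.full(m, 1.0 / m)            # min player's mixed strategy
    y = np.full(n, 1.0 / n)            # max player's mixed strategy
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for _ in range(iters):
        x_sum += x
        y_sum += y
        gx = A @ y                     # gradient of x^T A y in x
        gy = A.T @ x                   # gradient of x^T A y in y
        x = x * np.exp(-eta * gx); x /= x.sum()   # min player descends
        y = y * np.exp(+eta * gy); y /= y.sum()   # max player ascends
    return x_sum / iters, y_sum / iters

# Matching pennies: the unique equilibrium plays each action with probability 1/2.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = matrix_game_mwu(A)
```

Variance-reduced methods of the kind referenced above typically replace these exact matrix-vector products with cheaper stochastic estimates; nothing of that sort is attempted here.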
International Colloquium on Automata, Languages, and Programming (ICALP), 2022, Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods, with Yair Carmon, Aaron Sidford and Kevin Tian.
They will share a $10,000 prize, with financial sponsorship provided by Google Inc.
Yair Carmon.
[pdf] with Yang P. Liu and Aaron Sidford.
Summer 2022: I am currently a research scientist intern at DeepMind in London.
We establish lower bounds on the complexity of finding $\epsilon$-stationary points of smooth, non-convex high-dimensional functions using first-order methods.
We organize regular talks and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email).
I enjoy understanding the theoretical ground of many algorithms that are ...
Simple MAP inference via low-rank relaxations.
Journal of Machine Learning Research, 2017 (arXiv). [pdf]
Prof. Sidford's paper was chosen from more than 150 accepted papers at the conference.
I am a fourth year PhD student at Stanford co-advised by Moses Charikar and Aaron Sidford.
In International Conference on Machine Learning (ICML 2016).
We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM).
2013. pdf, Fourier Transformation at a Representation, Annie Marsden.
with Yair Carmon, Arun Jambulapati and Aaron Sidford.
I regularly advise Stanford students from a variety of departments.
COLT, 2022. arXiv | code | conference pdf (alphabetical authorship), Annie Marsden, John Duchi and Gregory Valiant, Misspecification in Prediction Problems and Robustness via Improper Learning.
"Collection of new upper and lower sample complexity bounds for solving average-reward MDPs."
Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization.
In this talk, I will present a new algorithm for solving linear programs (a toy illustration of the kind of problem being solved appears below).
"A special case where variance reduction can be used for nonconvex optimization (monotone operators)."
My long term goal is to bring robots into human-centered domains such as homes and hospitals.
International Conference on Machine Learning (ICML), 2021, Acceleration with a Ball Optimization Oracle.
[last name]@stanford.edu where [last name]=sidford.
Emphasis will be on providing mathematical tools for combinatorial optimization, i.e. ...
Oral Presentation for Misspecification in Prediction Problems and Robustness via Improper Learning.
Applied Math at Fudan.
with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford.
He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner.
Neural Information Processing Systems (NeurIPS), 2014.
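The talk mentioned above presents a new algorithm for solving linear programs (see also Solving Tall Dense Linear Programs in Nearly Linear Time in the publication list). Purely to fix what kind of problem is being solved — this calls SciPy's off-the-shelf solver and is unrelated to the new algorithm — here is a toy LP in inequality form; all numbers are made up:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c^T x subject to A_ub @ x <= b_ub and x >= 0.
c = np.array([-1.0, -2.0])          # maximize x1 + 2*x2 by minimizing its negation
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)],
              method="highs")        # HiGHS backend, the default in recent SciPy
print(res.x, res.fun)                # optimal point and optimal objective value
```

Off-the-shelf solvers answer such small instances easily; the research question in the work above is how fast linear programs can be solved as a function of the dimensions of the constraint matrix.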
Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, Aaron Sidford. [pdf]
Neural Information Processing Systems (NeurIPS), 2021, Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss.
In Symposium on Foundations of Computer Science (FOCS 2017) (arXiv)
"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions, With Yair Carmon, John C. Duchi, and Oliver Hinder, In International Conference on Machine Learning (ICML 2017) (arXiv)
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu, In Symposium on Theory of Computing (STOC 2017)
Subquadratic Submodular Function Minimization, With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong, In Symposium on Theory of Computing (STOC 2017) (arXiv)
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu, In Symposium on Foundations of Computer Science (FOCS 2016) (arXiv)
With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki, In Symposium on Theory of Computing (STOC 2016) (arXiv)
With Alina Ene, Gary L. Miller, and Jakub Pachocki
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm, With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli, In Conference on Learning Theory (COLT 2016) (arXiv)
Principal Component Projection Without Principal Component Analysis, With Roy Frostig, Cameron Musco, and Christopher Musco, In International Conference on Machine Learning (ICML 2016) (arXiv)
Faster Eigenvector Computation via Shift-and-Invert Preconditioning, With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis.
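The last entries above concern eigenvector computation, in particular Faster Eigenvector Computation via Shift-and-Invert Preconditioning. As background for the shift-and-invert idea only — this is the classical inverse-iteration scheme with an exact linear solve, not that paper's fast, preconditioned method — here is a minimal sketch; the matrix `A` and shift `sigma` are toy placeholders:

```python
import numpy as np

def shift_invert_iteration(A, sigma, iters=50, seed=0):
    """Classical shift-and-invert (inverse) power iteration.

    Power iteration applied to (sigma * I - A)^{-1}: the eigenvalue of A
    closest to sigma becomes the dominant eigenvalue of the shifted inverse,
    so the iterates converge to the corresponding eigenvector. Each step
    solves a dense linear system exactly; doing such solves fast is the
    point of the paper referenced above and is not attempted here.
    """
    n = A.shape[0]
    M = sigma * np.eye(n) - A
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.linalg.solve(M, x)      # apply the shifted inverse
        x /= np.linalg.norm(x)
    return x @ A @ x, x                # Rayleigh-quotient estimate and eigenvector

# Symmetric toy matrix; a shift above the spectrum targets the top eigenpair.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
top_eigenvalue, top_eigenvector = shift_invert_iteration(A, sigma=5.0)
```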