Thu, December 10, 2020


5:00pm: Anirbit Mukherjee
Description:
Title: Mathematics of deep learning
Speaker: Anirbit Mukherjee
Time: 5:00 pm
Joining link: https://meet.google.com/kyz-exxu-hvw

Abstract: One of the paramount mathematical mysteries of our times is to explain the phenomenon of deep learning. Neural nets can be made to paint in imitation of classical art styles, or to play chess better than any machine or human ever has, and they seem to be the closest we have ever come to achieving "artificial intelligence". But trying to reason about these successes quickly lands us in a plethora of extremely challenging mathematical questions, typically about discrete stochastic processes. Some of these questions remain unsolved for even the smallest neural nets! In this talk we will give a brief introduction to neural nets and describe two of the most recent themes of our work in this direction.

Firstly, we will explain how, under certain structural and mild distributional conditions, iterative algorithms like "Neuro-Tron", which do not use a gradient oracle, can often be proven to train nets with the time/sample complexity expected of gradient-based methods, but in regimes where the usual algorithms like (S)GD remain unproven. Our theorems include the particularly challenging regime of non-realizable data. Next, we will briefly look at our first-of-its-kind results on sufficient conditions for fast convergence of standard deep-learning algorithms like RMSProp, which use the history of gradients to decide the next step.

In the second half of the talk, we will focus on the recent rise of PAC-Bayesian techniques for explaining the low risk of certain over-parameterized nets on standardized tests. We will present our recent results in this domain, which empirically supersede some of the existing theoretical benchmarks in this field; we achieve this via new proofs about the key property of noise resilience of nets.

This is joint work with Amitabh Basu (JHU), Ramchandran Muthukumar (JHU), Jiayao Zhang (UPenn), Dan Roy (UToronto, Vector Institute), Pushpendre Rastogi (JHU -> Amazon), Soham De (DeepMind, Google), Enayat Ullah (JHU), Jun Yang (UToronto, Vector Institute) and Anup Rao (Adobe).

About the speaker: Anirbit Mukherjee finished his Ph.D. in applied mathematics at Johns Hopkins University, advised by Prof. Amitabh Basu. He is now a postdoc at Wharton (UPenn) with Prof. Weijie Su. He specializes in deep-learning theory.
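
For context, here is a minimal sketch of the standard RMSProp update that the abstract refers to. The function name and the hyperparameter defaults below are illustrative choices, not the settings analyzed in the talk:

    import numpy as np

    def rmsprop_step(w, grad, v, lr=1e-3, beta=0.9, eps=1e-8):
        # v keeps an exponentially decaying average of squared gradients;
        # this is the "history of gradients" used to decide the next step.
        v = beta * v + (1.0 - beta) * grad ** 2
        # Each coordinate's step is scaled by its own gradient history.
        w = w - lr * grad / (np.sqrt(v) + eps)
        return w, v

Starting from v = np.zeros_like(w), one would call rmsprop_step once per minibatch gradient.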

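The PAC-Bayesian results mentioned above build on bounds of the following standard (McAllester/Maurer-style) form; the exact bound proved in the talk may differ. With probability at least 1 - \delta over an i.i.d. sample of size n, simultaneously for every "posterior" Q over network weights and any fixed data-independent "prior" P,

    L(Q) \le \widehat{L}(Q) + \sqrt{ \frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n} },

where \widehat{L}(Q) and L(Q) are the empirical and population risks averaged over weights drawn from Q.
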
7:00pm: Raman Sanyal (Goethe-Universität Frankfurt)
Description:
Title: Standard monomials, matroids, and lattice paths
Speaker: Raman Sanyal, Goethe-Universität Frankfurt
Time: 7pm IST (gate opens at 6:45pm IST)
Google Meet link: meet.google.com/cbv-twjd-vno
Phone: (US) +1 401-764-4238, PIN: 316 692 499#

Abstract: Every finite collection of points is the set of solutions to some system of polynomial equations. This is a (computationally) reasonable representation, in particular when writing down the defining equations is easier than listing the actual points. Motivated by Gröbner basis theory for finite point configurations, I will discuss standard complexes of 0/1-point configurations. For a matroid basis configuration, the corresponding standard complex is a subcomplex of the independence complex, which is invariant under matroid duality. For the lexicographic term order, the standard complexes satisfy a deletion-contraction-type recurrence. For lattice path matroids these complexes can be described explicitly in terms of lattice path combinatorics. The talk is based on joint work with Alexander Engström and Christian Stump.
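
As a toy illustration of standard monomials for a 0/1-point configuration (the points, generators, and term order below are chosen for illustration, not taken from the talk): for the three points (0,0), (1,0), (0,1) in {0,1}^2, the vanishing ideal is generated by x^2 - x, y^2 - y, and xy. The monomials not divisible by any leading term, the "standard monomials", are 1, x, y: exactly one per point. A quick check with sympy:

    from sympy import symbols, groebner

    x, y = symbols('x y')
    # Vanishing ideal of the 0/1 points (0,0), (1,0), (0,1):
    F = [x**2 - x, y**2 - y, x*y]
    # In lex order these generators already form a Groebner basis.
    G = groebner(F, x, y, order='lex')
    print(G)
    # Leading terms are x**2, x*y, y**2, so the standard monomials
    # are 1, x, y -- one for each point of the configuration.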