Fri, November 9, 2018
11:30am  Dale Cutkosky, University of Missouri, Columbia, MO

Date: Friday, 9 November 2018
Venue: Room 215
Time: 11:30 am - 1:00 pm
Speaker: Dale Cutkosky, University of Missouri, Columbia, MO
Title: Multiplicities and volumes - II
Abstract: We show how multiplicities of (not necessarily Noetherian) filtrations on a Noetherian ring can be computed from volumes of appropriate Newton-Okounkov bodies. We discuss applications and examples.
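[Background, not part of the announcement: the classical notion that the talk generalizes can be sketched as follows; the formula below is the standard Hilbert-Samuel multiplicity, stated here only for orientation.]

For an $\mathfrak{m}$-primary ideal $I$ in a Noetherian local ring $(R,\mathfrak{m})$ of dimension $d$, the multiplicity is the limit
$$
e(I) \;=\; \lim_{n\to\infty} \frac{d!\,\ell_R\!\left(R/I^{\,n}\right)}{n^{d}},
$$
where $\ell_R$ denotes length. For a filtration $I_\bullet=\{I_n\}$ of $\mathfrak{m}$-primary ideals one studies the analogous limit $\lim_{n\to\infty} d!\,\ell_R(R/I_n)/n^{d}$; results of the kind described in the talk express such limits as normalized volumes of associated Newton-Okounkov bodies.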

4:00pm  Gugan Thoppe, Duke University, Durham, USA

Date and time: 9 November 2018, 4:00 - 5:00 pm
Venue: Ramanujan Hall
Speaker: Gugan Thoppe
Affiliation: Duke University, Durham, USA
Title: Concentration Bounds for Stochastic Approximation with Applications to Reinforcement Learning
Abstract: Stochastic approximation (SA) refers to iterative algorithms that can be used to find optimal points or zeros of a function given only noisy estimates of it. In this talk, I will review our recent advances in techniques for analysing SA methods. The talk has four major parts. In the first part, we will see a motivating application of SA to network tomography and, alongside, discuss the convergence of a novel stochastic Kaczmarz method. In the second part, we shall see a novel analysis approach for non-linear SA methods in the neighbourhood of an isolated solution. The main tools here include the Alekseev formula, which exactly compares the solution of a non-linear ODE to that of its perturbation, and a novel concentration inequality for a sum of martingale differences. In the third part, we will extend the previous tool to the two-timescale but linear SA setting. Here, I will also present our ongoing work on obtaining tight convergence rates in this setup. In parallel, we will see how these results apply to gradient temporal difference (TD) methods such as GTD(0), GTD2, and TDC, which are used in reinforcement learning. For the analyses in the second and third parts to hold, the initial step size must be chosen sufficiently small, depending on unknown problem-dependent parameters, or, alternatively, one must use projections. In the fourth part, we shall discuss a trick to obviate this in the context of the one-timescale, linear TD(0) method. We strongly believe that this trick is generalizable. We also provide a novel expectation bound. We shall end with some future directions.
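[Illustration, not part of the announcement: the opening definition of SA can be made concrete with the classical Robbins-Monro iteration. This is a minimal sketch under my own choices of target function, noise model, and step sizes, not the speaker's method.]

```python
import random

def robbins_monro(noisy_f, x0, n_iters=20000, seed=0):
    """Basic stochastic approximation (Robbins-Monro) iteration:
    x_{k+1} = x_k - a_k * F(x_k), where F(x) is a noisy, unbiased
    estimate of f(x). The diminishing step sizes a_k = 1/(k+1)
    satisfy the classical conditions sum a_k = inf, sum a_k^2 < inf."""
    random.seed(seed)
    x = x0
    for k in range(n_iters):
        a_k = 1.0 / (k + 1)
        x -= a_k * noisy_f(x)
    return x

# Hypothetical example: find the zero of f(x) = x - 2 when every
# evaluation is corrupted by zero-mean Gaussian noise.
noisy_f = lambda x: (x - 2.0) + random.gauss(0.0, 1.0)
root = robbins_monro(noisy_f, x0=0.0)
```

Despite the noise having the same magnitude as the signal near the root, the averaging effect of the diminishing step sizes drives the iterate close to the true zero at x = 2.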

4:00pm  R.V. Gurjar

Geometry and Topology Seminar
Date: 9 November 2018, 4:00 pm
Venue: Room 215
Speaker: R.V. Gurjar
Title: Shafarevich's question on the universal covering of a smooth projective variety, and its applications
Abstract: I. Shafarevich raised the following very general question: "Is the universal covering space of every smooth connected projective variety holomorphically convex?" This is a generalization of the famous Uniformization Theorem for Riemann surfaces. We will discuss some applications of a positive solution of the Shafarevich question: a conjecture of Madhav Nori is true, and the second homotopy group of a connected smooth projective surface is a free abelian group. We will also mention positive solutions of the Shafarevich question in several interesting cases.