
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

This article continues my previous blog post and discusses a section of the work by Jeff Z. HaoChen and Suvrit Sra (2018), in which the authors derive a non-asymptotic convergence rate for the Random Shuffling stochastic gradient algorithm that is strictly better than that of SGD.
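
The contrast between the two sampling schemes can be sketched numerically. The toy experiment below is my own construction (not the paper's analysis): it runs with-replacement SGD and random-reshuffling SGD on a noiseless least-squares problem; reshuffling visits every example exactly once per epoch.

```python
import numpy as np

# Toy comparison (illustrative only): with-replacement SGD vs. random
# reshuffling on f(w) = (1/2n) * sum_i (x_i^T w - y_i)^2, noiseless data.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true                              # consistent linear system

def run(method, epochs=50, lr=0.01):
    w = np.zeros(d)
    for _ in range(epochs):
        if method == "reshuffle":
            order = rng.permutation(n)      # each example once per epoch
        else:
            order = rng.integers(0, n, size=n)  # i.i.d. with replacement
        for i in order:
            grad = (X[i] @ w - y[i]) * X[i]     # single-sample gradient
            w -= lr * grad
    return np.mean((X @ w - y) ** 2)        # final mean squared error

print(run("sgd"), run("reshuffle"))
```

On this interpolating problem both variants drive the error to essentially zero; the theoretical distinction HaoChen and Sra study is the *rate* at which reshuffling does so.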

This article is on the work by Défossez and Bach (2014), in which the authors develop an operator viewpoint for analyzing averaged SGD updates, exposing the bias-variance trade-off and providing tight convergence rates for the least-mean-squares problem.
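
A minimal sketch of the averaging idea (a toy setup of mine, not the paper's operator analysis): run constant-step LMS on noisy linear observations and keep a Polyak-Ruppert running average of the iterates. For least squares, the average settles near the optimum even though the last iterate keeps fluctuating.

```python
import numpy as np

# Averaged SGD on least mean squares: illustrative sketch only.
rng = np.random.default_rng(1)
d = 4
w_star = rng.normal(size=d)

def sample():
    x = rng.normal(size=d)
    y = x @ w_star + 0.1 * rng.normal()   # noisy linear observation
    return x, y

w = np.zeros(d)                            # SGD iterate
wbar = np.zeros(d)                         # Polyak-Ruppert average
lr = 0.05                                  # constant step size
T = 20000
for t in range(1, T + 1):
    x, y = sample()
    w -= lr * (x @ w - y) * x              # single-sample LMS update
    wbar += (w - wbar) / t                 # incremental running average

print(np.linalg.norm(wbar - w_star))
```

The constant step size keeps `w` bouncing in a noise ball around `w_star`; the average `wbar` is what enjoys the tight rates the paper analyzes.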

This article is on the recent work by Ying et al. (2018), in which the authors show that SGD with Random Reshuffling outperforms independent sampling with replacement.

This post contains a summary and survey of Nesterov's accelerated gradient descent method and some insightful implications that can be derived from it. We analyze the simple convex quadratic case and take a close look at the dynamics of the error vectors.
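
The convex quadratic case can be sketched directly. The code below uses the standard textbook form of Nesterov's method with the (t-1)/(t+2) momentum schedule; the particular ill-conditioned matrix is my own choice, not taken from the post.

```python
import numpy as np

# Convex quadratic f(x) = 0.5 * x^T A x with an ill-conditioned diagonal A.
A = np.diag([1.0, 10.0, 100.0])
L = 100.0                                  # smoothness = largest eigenvalue

def f(x):
    return 0.5 * x @ A @ x

def gd(x0, steps):
    # Plain gradient descent with step size 1/L.
    x = x0.copy()
    for _ in range(steps):
        x -= (A @ x) / L
    return x

def nag(x0, steps):
    # Nesterov's accelerated method: gradient step at a look-ahead point.
    x, x_prev = x0.copy(), x0.copy()
    for t in range(1, steps + 1):
        y = x + (t - 1) / (t + 2) * (x - x_prev)   # momentum extrapolation
        x_prev = x
        x = y - (A @ y) / L
    return x

x0 = np.ones(3)
print(f(gd(x0, 100)), f(nag(x0, 100)))     # NAG reaches a much lower value
```

The slow eigendirection (eigenvalue 1 against smoothness 100) is exactly where acceleration pays off: gradient descent contracts it by only (1 - 1/100) per step, while the momentum term lets the error oscillate and decay polynomially faster.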

With the number of courses, books, and reading materials out there, here is a list of resources that I personally find useful for building a fundamental understanding of Machine Learning.

This post contains a summary and survey of theoretical understandings of Large Scale Optimization, drawing on talks, papers, and lectures that I have come across recently.

We used data structures such as Hash Tables and Balanced Trees to design a text search engine that reports the frequency of a searched word across a given folder of files.
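
The core idea can be sketched in a few lines. This is my own simplification, not the project's code: a hash table (Python dict plus `Counter`) maps each word to its per-file frequency; a balanced-tree index could replace the dict where ordered traversal of the vocabulary is needed. The `*.txt` file extension is an assumption.

```python
import re
from collections import Counter
from pathlib import Path

def build_index(folder):
    # Hash-table index: filename -> Counter of word frequencies.
    index = {}
    for path in Path(folder).glob("*.txt"):   # assumed plain-text files
        words = re.findall(r"[a-z']+", path.read_text().lower())
        index[path.name] = Counter(words)
    return index

def search(index, word):
    # Frequency of `word` in each indexed file (0 if absent).
    return {name: counts[word] for name, counts in index.items()}
```

Lookup is O(1) expected per file thanks to the hash table; the index is built once and then answers arbitrary word queries.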

Using a Soft Margin Kernel Support Vector Machine to classify newspaper articles and model an Economic Policy Uncertainty Index for India.
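
For a flavor of the soft-margin objective, here is an illustrative sketch only: a *linear* soft-margin SVM trained by subgradient descent on the hinge loss. The project used a kernelized solver; this toy linear version just shows how the margin penalty `C` trades off slack against margin width.

```python
import numpy as np

def train_svm(X, y, C=1.0, lr=0.01, epochs=200):
    # Subgradient descent on 0.5*||w||^2 + C * sum_i max(0, 1 - y_i(w.x_i + b)).
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                      # points inside the margin
        gw = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        gb = -C * y[viol].sum()
        w -= lr * gw
        b -= lr * gb
    return w, b
```

A quick usage example: on linearly separable points, the learned hyperplane classifies all training labels correctly.

```python
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = train_svm(X, y)
```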

The project applies different recommendation methods, based on Deep Learning and Optimization, to different kinds of real-world data such as rating matrices, images, and text.

We provide a formulation of empirical Bayes, described by Atchadé (2011), to tune the hyperparameters of the priors used in a Bayesian setup of collaborative filtering.

We propose Clustered Monotone Transforms for Rating Factorization (CMTRF), a novel approach to perform regression up to unknown monotonic transforms over unknown population segments. For recommendation systems, the technique searches for monotonic transformations of the rating scales resulting in a better fit. This is combined with an underlying matrix factorization regression model that couples the user-wise ratings to exploit shared low-dimensional structure. The rating scale transformations can be generated for each user (N-CMTRF), for a cluster of users (CMTRF), or for all the users at once (1-CMTRF), forming the basis of three simple and efficient algorithms proposed, all of which alternate between transformation of the rating scales and matrix factorization regression. Despite the non-convexity, CMTRF is theoretically shown to recover a unique solution under mild conditions.

We study the problem of sparse regression, where the goal is to learn a sparse vector that best optimizes a given objective function. Under the assumption that the objective function satisfies restricted strong convexity (RSC), we analyze Orthogonal Matching Pursuit (OMP) and obtain a support recovery result as well as a tight generalization error bound for OMP. Furthermore, we obtain lower bounds for OMP, showing that both our results on support recovery and generalization error are tight up to logarithmic factors. To the best of our knowledge, these support recovery and generalization bounds are the first such matching upper and lower bounds (up to logarithmic factors) for any sparse regression algorithm under the RSC assumption.
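
For context, here is the textbook OMP procedure the paper analyzes, as a hedged sketch (not tied to the paper's RSC analysis): greedily select the column most correlated with the current residual, then refit by least squares on the selected support.

```python
import numpy as np

def omp(X, y, k):
    # Orthogonal Matching Pursuit: recover a k-sparse vector from y ~ X w.
    n, d = X.shape
    support, residual = [], y.copy()
    for _ in range(k):
        scores = np.abs(X.T @ residual)       # correlation with residual
        if support:
            scores[support] = -np.inf         # never reselect a column
        support.append(int(np.argmax(scores)))
        # Orthogonal step: least-squares refit on the current support.
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    w = np.zeros(d)
    w[support] = coef
    return w, sorted(support)
```

On a well-conditioned random design with noiseless measurements, this recovers both the support and the coefficients exactly; the paper characterizes when such recovery is guaranteed under RSC.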

We provide a formulation of empirical Bayes, described by Atchadé, to tune the hyperparameters of priors used in a Bayesian set-up of collaborative filtering.

The paper has been accepted for an oral presentation: 84/511 submissions ≈ *16% Acceptance Rate*.

Link: [__arXiv__]

The paper has been accepted for **Spotlight** presentation (168/4856 submissions ≈ 3.5% Acceptance Rate).

Link: [__paper__]