Posts by Tags

Large Scale Optimization

A note on Conformal Symplectic and Relativistic Optimization

Posted on:

I made this note on a spotlight paper at NeurIPS 2020 while reading the literature on the principled connections between continuous and discrete optimization. The motivation is to understand and create accelerated discrete large-scale optimization algorithms from first principles by considering the geometry of phase spaces and numerical integration, specifically symplectic integration. Recent works have shed considerable light on this connection, which is what drew my attention to the topic. Read more

Analysis of Newton’s Method

Posted on:

In optimization, Newton’s method is used to find the roots of the derivative of a twice-differentiable function, given oracle access to its gradient and Hessian. By using super-linear memory in the dimension of the ambient space, Newton’s method can take advantage of second-order curvature and optimize the objective function at a quadratic convergence rate. Here I consider the case when the objective function is smooth and strongly convex. Read more
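As a quick reminder of the scheme discussed in the post, here is the standard Newton update for minimizing a twice-differentiable $f$ (a textbook form, not taken from the post itself):

$$x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k),$$

and for smooth, strongly convex $f$ the iterates converge quadratically once $x_k$ is close enough to the minimizer $x^*$, in the sense that $\|x_{k+1} - x^*\| \le C\,\|x_k - x^*\|^2$ for some constant $C$.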

Nesterov’s Acceleration

Posted on:

This post contains an error-vector analysis of Nesterov’s accelerated gradient descent method and some insightful implications that can be derived from it. Read more
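For context, a common way to write Nesterov’s accelerated gradient descent for an $L$-smooth convex objective $f$ is (a standard textbook parametrization; the post may use a different one):

$$y_k = x_k + \frac{k-1}{k+2}\,(x_k - x_{k-1}), \qquad x_{k+1} = y_k - \frac{1}{L}\,\nabla f(y_k),$$

which achieves the accelerated $O(1/k^2)$ convergence rate in function value.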

A survey on Large Scale Optimization

Posted on:

This post contains a summary and survey of the theoretical understanding of Large Scale Optimization, drawing on talks, papers, and lectures that I have come across recently. Read more

Machine Learning

Nesterov’s Acceleration

Posted on:

This post contains an error-vector analysis of Nesterov’s accelerated gradient descent method and some insightful implications that can be derived from it. Read more

A survey on Large Scale Optimization

Posted on:

This post contains a summary and survey of the theoretical understanding of Large Scale Optimization, drawing on talks, papers, and lectures that I have come across recently. Read more

Mathematics

Note on the Kadison-Singer Problem and its Solution

Posted on:

The Kadison-Singer problem arose from work on quantum mechanics done by Paul Dirac in the 1930s. The problem is equivalent to fundamental problems in areas like Operator theory, Hilbert and Banach space theory, Frame theory, Harmonic Analysis, Discrepancy theory, Graph theory, Signal Processing, and theoretical Computer Science. It was a long-standing problem that defied the efforts of many mathematicians until it was solved by Adam Wade Marcus, Daniel Alan Spielman, and Nikhil Srivastava in 2013. Read more

A note on Conformal Symplectic and Relativistic Optimization

Posted on:

I made this note on a spotlight paper at NeurIPS 2020 while reading the literature on the principled connections between continuous and discrete optimization. The motivation is to understand and create accelerated discrete large-scale optimization algorithms from first principles by considering the geometry of phase spaces and numerical integration, specifically symplectic integration. Recent works have shed considerable light on this connection, which is what drew my attention to the topic. Read more

Geometry of Relativistic Spacetime Physics

Posted on:

This article introduces and describes the mathematical structures and frameworks needed to understand the modern fundamental theory of Relativistic Spacetime Physics. The self-referential and self-contained nature of Mathematics provides enough power to prescribe a rigorous language for formulating the building blocks of Einstein’s standard General Theory of Relativity, such as Spacetime, Matter, and Gravity, along with their behaviors and interactions. In these notes, we introduce and develop these abstract components, starting with the arena of smooth manifolds and then adding the necessary and sufficient differential geometric structures needed to build up to the General Theory of Relativity. Read more

Dual spaces and the Fenchel conjugate

Posted on:

Dual spaces lie at the core of linear algebra and allow us to formally reason about the concept of duality in mathematics. Duality shows up naturally and elegantly in measure theory, functional analysis, and mathematical optimization. In this post, I try to learn and explore the nature of duality via dual spaces and their interpretation in linear algebra, all of which is motivated by the so-called convex conjugate, or Fenchel conjugate, in mathematical optimization. Read more
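For reference, the Fenchel (convex) conjugate that motivates the post is standardly defined, for $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, by

$$f^*(y) = \sup_{x \in \mathbb{R}^n} \big( \langle y, x \rangle - f(x) \big),$$

which is always convex in $y$; the Fenchel–Young inequality $f(x) + f^*(y) \ge \langle x, y \rangle$ follows directly from this definition.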

Probability Theory

Note on the Kadison-Singer Problem and its Solution

Posted on:

The Kadison-Singer problem arose from work on quantum mechanics done by Paul Dirac in the 1930s. The problem is equivalent to fundamental problems in areas like Operator theory, Hilbert and Banach space theory, Frame theory, Harmonic Analysis, Discrepancy theory, Graph theory, Signal Processing, and theoretical Computer Science. It was a long-standing problem that defied the efforts of many mathematicians until it was solved by Adam Wade Marcus, Daniel Alan Spielman, and Nikhil Srivastava in 2013. Read more

A survey on Strongly Rayleigh measures and their mixing time analysis

Posted on:

Strongly Rayleigh measures are natural generalizations of measures that satisfy the notion of negative dependence. The class of Strongly Rayleigh measures provides the most useful characterization of negative dependence by grounding it in the theory of multivariate stable polynomials. This post attempts to shed some light on the origin of Strongly Rayleigh measures and Determinantal Point Processes, and highlights the fast mixing time analysis of the natural MCMC chain on the support of a Strongly Rayleigh measure, as shown by Anari, Gharan, and Rezaei (2016). Read more
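A standard way to state the central definition (not quoted from the post): a probability measure $\mu$ on subsets of $[n]$ is strongly Rayleigh if its generating polynomial

$$g_\mu(z_1,\dots,z_n) = \sum_{S \subseteq [n]} \mu(S) \prod_{i \in S} z_i$$

is real stable, i.e., it has no root $z \in \mathbb{C}^n$ with $\mathrm{Im}(z_i) > 0$ for every $i$.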

Deriving the Fokker-Planck equation

Posted on:

In the theory of dynamical systems, the Fokker-Planck equation is used to describe the time evolution of a probability density function. It is a partial differential equation that describes how the density of a stochastic process changes over time under the influence of a potential field. Some common applications are in the study of Brownian motion, the Ornstein–Uhlenbeck process, and statistical physics. My motivation for understanding the derivation is to study Lévy flight processes, which have caught my recent attention. Read more
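For concreteness, the one-dimensional Fokker-Planck equation for the density $p(x,t)$ of a diffusion with drift $\mu(x,t)$ and diffusion coefficient $D(x,t)$ reads (a standard statement, not quoted from the post):

$$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\big[\mu(x,t)\,p(x,t)\big] + \frac{\partial^2}{\partial x^2}\big[D(x,t)\,p(x,t)\big].$$

For standard Brownian motion ($\mu = 0$, $D = \tfrac{1}{2}$) this reduces to the heat equation $\partial_t p = \tfrac{1}{2}\,\partial_{xx} p$.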

Theoretical Physics

A note on Conformal Symplectic and Relativistic Optimization

Posted on:

I made this note on a spotlight paper at NeurIPS 2020 while reading the literature on the principled connections between continuous and discrete optimization. The motivation is to understand and create accelerated discrete large-scale optimization algorithms from first principles by considering the geometry of phase spaces and numerical integration, specifically symplectic integration. Recent works have shed considerable light on this connection, which is what drew my attention to the topic. Read more

Geometry of Relativistic Spacetime Physics

Posted on:

This article introduces and describes the mathematical structures and frameworks needed to understand the modern fundamental theory of Relativistic Spacetime Physics. The self-referential and self-contained nature of Mathematics provides enough power to prescribe a rigorous language for formulating the building blocks of Einstein’s standard General Theory of Relativity, such as Spacetime, Matter, and Gravity, along with their behaviors and interactions. In these notes, we introduce and develop these abstract components, starting with the arena of smooth manifolds and then adding the necessary and sufficient differential geometric structures needed to build up to the General Theory of Relativity. Read more