Dual spaces and the Fenchel conjugate

Dual spaces lie at the core of linear algebra and allow us to formally reason about the concept of duality in mathematics. Duality shows up naturally and elegantly in measure theory, functional analysis, and mathematical optimization. In this post, I explore the nature of duality via dual spaces and their interpretation in linear algebra, motivated by the so-called convex conjugate, or Fenchel conjugate, from mathematical optimization.

For the sake of abstractness and general applicability, we work over a general field $F$, which can for example be $\mathbb{R}$, $\mathbb{C}$, etc. Our space of vectors is the vector space $V$, which can for example be a subspace of $F^d$ for some $d \in \mathbb{N}$.

Dual spaces

Dual space - The dual space $V^*$ of the vector space $V$ is the set of all linear transformations $f$ from $V$ to $F$, denoted $V^* = \mathcal{L}(V, F)$.

Linear functionals - An element $f: V \to F$ of $V^*$ is called a linear functional; it takes in a vector $x \in V$ and outputs an element of $F$.

Note that if $f, g \in V^*$, then $\alpha f + \beta g$ is also in $V^*$ for $\alpha, \beta \in F$, defined by $(\alpha f + \beta g)(x) = \alpha f(x) + \beta g(x)$ for all $x \in V$. Let us look at a few examples:

  1. If $V = P_n$ is the space of all univariate polynomials over the field $\mathbb{R}$ of degree at most $n \in \mathbb{N}$, then $f$ defined by $f(p) = p(1) + 2p(0)$ is a linear functional contained in $V^* = \mathcal{L}(P_n, \mathbb{R})$.
  2. If $V = \mathbb{R}^{m \times n}$ is the space of all real matrices of size $m \times n$, then $f$ defined by $f(M) = \operatorname{Tr}[M]$ (the sum of the diagonal entries of $M$) is a linear functional contained in $V^* = \mathcal{L}(\mathbb{R}^{m \times n}, \mathbb{R})$.
  3. If $V = C([-\pi, \pi])$ is the space of all real-valued continuous functions on $[-\pi, \pi]$, then $f$ defined by $f(g) = \int_{-\pi}^{\pi} g(x) \cos(2x)\, dx$ is a linear functional contained in $V^* = \mathcal{L}(C([-\pi, \pi]), \mathbb{R})$. This functional essentially outputs (a multiple of) the second Fourier cosine coefficient of its input function.
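As a quick sanity check, the trace example above can be verified numerically. The sketch below (using NumPy; the matrix sizes and coefficients are illustrative choices) confirms that $\operatorname{Tr}$ respects linear combinations:

```python
import numpy as np

# Numerical check of example 2: the trace f(M) = Tr[M] is a linear functional.
rng = np.random.default_rng(0)
M, N = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
alpha, beta = 2.0, -1.5

lhs = np.trace(alpha * M + beta * N)            # f(alpha*M + beta*N)
rhs = alpha * np.trace(M) + beta * np.trace(N)  # alpha*f(M) + beta*f(N)
assert np.isclose(lhs, rhs)
```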

Dual basis

Let $V$ be a finite-dimensional vector space, and let $\beta = \{v_i\}_{i=1}^n$ be a basis of $V$. Any $x \in V$ decomposes as $x = \sum_{i=1}^n a_i v_i$, so by linearity $f(x) = \sum_{i=1}^n a_i f(v_i)$. Therefore, to determine the action of any linear functional $f \in V^*$, it is enough to know its action on the basis vectors $\{v_i\}_{i=1}^n$, i.e., it is enough to know the values $\{f(v_i) \in F\}_{i=1}^n$.

Dual basis - If $V$ is finite dimensional and $\beta = \{v_i\}_{i=1}^n$ is a basis of $V$, then $\beta^* = \{f_i\}_{i=1}^n$, the set of functionals defined by $f_i(v_j) = \delta_{ij}$ for all $i, j \in [n]$, is the corresponding basis of $V^*$, also known as the dual basis of $\beta$.

To verify that $\beta^*$ is indeed a basis of $V^*$, we just need to check the following:

  1. $\beta^*$ is a linearly independent set of linear functionals, i.e., if $\sum_{i=1}^n a_i f_i = f_0$ for some set $\{a_j \in F\}_{j=1}^n$, where $f_0$ is the zero functional, then $a_j = 0$ for all $j \in [n]$. To verify this, we evaluate the condition at the basis elements $v_j \in \beta$ of $V$, which gives $\sum_{i=1}^n a_i f_i(v_j) = a_j = 0$ for all $j \in [n]$, establishing the linear independence of $\beta^*$.
  2. $\beta^*$ spans $V^*$, i.e., for any $f \in V^*$ there exists a set of elements $\{a_j \in F\}_{j=1}^n$ such that $f(v) = \sum_{i=1}^n a_i f_i(v)$ for all $v \in V$. To find the coefficients, we evaluate $f$ on the basis vectors $v_j \in \beta$ for $j \in [n]$, which gives $f(v_j) = \sum_{i=1}^n a_i f_i(v_j) = \sum_{i=1}^n a_i \delta_{ij} = a_j$ for all $j \in [n]$. Therefore, if $f$ is in the span of $\beta^*$, the coefficients must satisfy $a_j = f(v_j)$ for all $j \in [n]$. Now, if $g = \sum_{i=1}^n f(v_i) f_i$, then to show that $g = f$ it is enough to show that they agree on the basis vectors $\{v_j\}$ of $V$, that is $f(v_j) = g(v_j)$ for all $j$. This holds by the construction of $\{f_i\}_{i=1}^n$, and therefore $\beta^*$ spans $V^*$.

This establishes that $\beta^* = \{f_i\}_{i=1}^n$ is indeed a basis of $V^*$, and any $f \in V^*$ can be decomposed in this basis as

$$f = \sum_{i=1}^n f(v_i)\, f_i$$

where $\beta = \{v_i\}$ is the basis of $V$. The existence of the dual basis $\beta^*$ and its relation to $\beta$ also shows that $V$ and $V^*$ are isomorphic.
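Concretely, in $V = \mathbb{R}^n$ a linear functional can be represented as a row vector. A small sketch (the basis matrix $B$ below is an arbitrary illustrative choice): if $\beta$ is given by the columns of an invertible matrix $B$, then the dual basis functionals $f_i$ are the rows of $B^{-1}$, since $(B^{-1}B)_{ij} = \delta_{ij}$, and the decomposition $f = \sum_i f(v_i) f_i$ can be checked directly:

```python
import numpy as np

# Basis beta of R^3 given by the columns of an (invertible) matrix B.
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
dual = np.linalg.inv(B)        # row i of `dual` represents the functional f_i

# Dual basis property: f_i(v_j) = delta_{ij}
assert np.allclose(dual @ B, np.eye(3))

# Any functional f (a row vector) decomposes as f = sum_i f(v_i) f_i
f = np.array([2., -1., 3.])
coeffs = f @ B                 # coeffs[i] = f(v_i)
assert np.allclose(coeffs @ dual, f)
```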

Transpose

The existence of the dual basis $\beta^*$ lets us define the concept of the transpose in linear algebra, which is used extensively in applications.

Transpose - Let $T: V \to W$ be a linear transformation from the vector space $V$ to the vector space $W$. Then the transpose $T^*: W^* \to V^*$ is the linear transformation from $W^*$ to $V^*$ that takes in a linear functional $g \in W^*$ and outputs another linear functional $T^*(g) \in V^*$ defined by $T^*(g)(x) = g(T(x))$ for all $x \in V$.

$$x \xmapsto{\;T\;} T(x) \xmapsto{\;g\;} g(T(x)), \qquad T^*(g) = g \circ T : V \to F$$

Theorem - Let $T: V \to W$ be a linear transformation from vector space $V$ to vector space $W$. Let $\beta = \{v_i\}_{i=1}^n$ and $\beta^* = \{f_i\}_{i=1}^n$ be bases of $V$ and $V^*$ respectively, and $\gamma = \{w_j\}_{j=1}^m$ and $\gamma^* = \{g_j\}_{j=1}^m$ be bases of $W$ and $W^*$ respectively. Let $A = [T]_\beta^\gamma$ be the matrix which transforms a vector in $V$ to a vector in $W$. Then $[T^*]_{\gamma^*}^{\beta^*} = A^\top$.

The matrix $[T^*]_{\gamma^*}^{\beta^*}$ takes in an element of $W^*$ represented in its basis $\gamma^*$, and outputs an element of $V^*$ represented in its basis $\beta^*$. Therefore, it is sufficient to represent every element of $\{T^*(g_j)\}_{j=1}^m$ in the basis $\beta^*$ of $V^*$. Since any $f \in V^*$ can be expressed as a linear combination of the basis $\beta^*$, as we proved earlier, we have

$$T^*(g_j) = \sum_{i=1}^n T^*(g_j)(v_i)\, f_i = \sum_{i=1}^n g_j(T(v_i))\, f_i$$

Therefore, the $(i,j)$-th element of the matrix $[T^*]_{\gamma^*}^{\beta^*}$ is $g_j(T(v_i))$. Now, the columns of the matrix $[T]_\beta^\gamma$ are the representations of the basis vectors $\beta$ of $V$ in the basis vectors $\gamma$ of $W$. Therefore we have

$$T(v_i) = \sum_{k=1}^m A_{ki}\, w_k \implies g_j(T(v_i)) = g_j\!\left(\sum_{k=1}^m A_{ki}\, w_k\right) = \sum_{k=1}^m A_{ki}\, g_j(w_k) = A_{ji},$$

where the second equality uses that $g_j$ is a linear functional, and the last uses $g_j(w_k) = \delta_{jk}$.

This finally gives us

$$T^*(g_j) = \sum_{i=1}^n A_{ji}\, f_i$$

and shows that the $(i,j)$-th element of the matrix $[T^*]_{\gamma^*}^{\beta^*}$ is nothing but the $(j,i)$-th element of $A$, which implies that $[T^*]_{\gamma^*}^{\beta^*} = A^\top$.
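With the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$, the theorem reduces to the familiar matrix transpose. A minimal sketch (the matrix $A$ and the functional $g$ are arbitrary illustrative choices): a functional $g \in W^*$ is a row vector, the composition $g \circ T$ has row vector $gA$, and writing dual-basis coordinates as columns, the matrix of $T^*$ acts as $g \mapsto A^\top g$:

```python
import numpy as np

# T: R^3 -> R^2 with matrix A in the standard bases.
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
g = np.array([7., -2.])       # coordinates of g in the dual basis gamma*

via_composition = g @ A       # row vector of the functional g . T
via_transpose = A.T @ g       # [T*] applied to g, with [T*] = A.T
assert np.allclose(via_composition, via_transpose)
```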

Double Dual

If $V$ is a vector space, then its dual space $V^*$ is also a vector space. We can again define the set of all linear transformations from $V^*$ to $F$ as its dual space $V^{**} = \mathcal{L}(V^*, F)$, which makes $V^{**}$ the double dual space of $V$. Unlike the isomorphism between $V$ and $V^*$, where we needed to choose a basis, there is an elegant basis-free way to go from $V$ to $V^{**}$ via an injective map $\Psi$. If $x \in V$, then $\hat{x} = \Psi(x) \in V^{**}$ acts on a linear functional $f \in V^*$ by evaluating $f$ at $x$, returning an element of the field $F$, i.e.,

$$f \xmapsto{\;\hat{x}\;} f(x), \qquad \hat{x}: V^* \to F$$

Theorem - If $V$ is a vector space of finite dimension, then $\Psi: V \to V^{**}$ is an isomorphism. First, we show that $\Psi$ is linear. If $x, y \in V$, $f \in V^*$, and $c \in F$, then

$$\Psi(x + cy)(f) = f(x + cy) = f(x) + c\, f(y) = \big(\Psi(x) + c\, \Psi(y)\big)(f)$$

Next, we show that $\Psi$ is one-to-one, i.e., that if $\hat{x} = \Psi(x)$ is the zero functional, then $x = 0$. Let $\beta = \{v_i\}_{i=1}^n$ be a basis of $V$, so that $x = \sum_{i=1}^n a_i v_i$ for some $\{a_i \in F\}$. Then, if $\Psi(x) = \hat{x} = 0$,

$$\Psi(x)(f) = \Psi\!\left(\sum_{i=1}^n a_i v_i\right)(f) = \sum_{i=1}^n a_i\, \Psi(v_i)(f) = \sum_{i=1}^n a_i\, f(v_i) = 0 \quad \forall f \in V^*$$

Taking $f$ over the dual basis $\{f_i\}_{i=1}^n$, we get $a_i = 0$ for all $i \in [n]$, which implies that $x = \sum_{i=1}^n a_i v_i = 0$. Since $\dim(V) = \dim(V^{**})$, the injective map $\Psi$ is also onto, and therefore an isomorphism. The isomorphism $\Psi$ between $V$ and $V^{**}$ allows us to map each element of $V$ to a unique element of $V^{**}$.
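The evaluation map $\Psi$ translates almost literally into code: $\Psi(x)$ is a function that eats a functional and returns its value at $x$. A small sketch on $V = \mathbb{R}^2$, with functionals represented as Python callables (the particular $x$ and $f$ are illustrative choices):

```python
import numpy as np

def Psi(x):
    # Psi(x) = x_hat in V**: takes a functional f and returns f(x)
    return lambda f: f(x)

x = np.array([1., 2.])
f = lambda v: 3 * v[0] - v[1]   # a linear functional on R^2
assert Psi(x)(f) == f(x)        # evaluation map: x_hat(f) = f(x)
```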

The infinite sequence of dual spaces

We can continue taking duals of the parent space, and there will always exist an isomorphism between alternate dual spaces. Formally, let us denote $V^{(i+1)} = (V^{(i)})^*$ for all $i \in \mathbb{N}$, where $V^{(0)} = V$ is a finite-dimensional vector space. Let $\Psi_i$ be the isomorphism between $V^{(i-1)}$ and $V^{(i+1)}$. Then, in general, define $\hat{x}^{(i+1)} = \Psi_i(\hat{x}^{(i-1)})$ for $\hat{x}^{(i-1)} \in V^{(i-1)}$ when $i$ is odd, and $\hat{f}^{(i+1)} = \Psi_i(\hat{f}^{(i-1)})$ for $\hat{f}^{(i-1)} \in V^{(i-1)}$ when $i$ is an even natural number, with $\hat{x}^{(0)} = x \in V$ and $\hat{f}^{(1)} = f \in V^*$. Then by the property of these isomorphisms we have

$$\hat{x}^{(i+1)}(\hat{f}^{(i)}) = \Psi_i(\hat{x}^{(i-1)})(\hat{f}^{(i)}) = \hat{f}^{(i)}(\hat{x}^{(i-1)}) = \cdots = \hat{x}^{(2)}(\hat{f}^{(1)}) = \hat{x}^{(2)}(f)$$

when i is odd, and

$$\hat{f}^{(i+1)}(\hat{x}^{(i)}) = \Psi_i(\hat{f}^{(i-1)})(\hat{x}^{(i)}) = \hat{x}^{(i)}(\hat{f}^{(i-1)}) = \cdots = \hat{f}^{(1)}(\hat{x}^{(0)}) = f(x)$$

when $i$ is an even natural number. Note that $\hat{x}^{(2)}(f)$ and $f(x)$ are the same quantity in $F$; the difference lies only in the order of application of these linear functionals, which makes them dual representations of each other.

Application to Fenchel conjugate

With this formalization, one can reason elegantly about a mathematical object from optimization: the Fenchel conjugate. Let us first define the Fenchel conjugate of a function.

Fenchel conjugate - Let $V^* = \mathcal{L}(V, \mathbb{R})$ be the dual space of $V$ (the supremum below requires an ordered field, so we take $F = \mathbb{R}$), and let $\langle \cdot, \cdot \rangle : V^* \times V \to \mathbb{R}$ denote the dual pairing between the two vector spaces. For a function $f: V \to \mathbb{R} \cup \{+\infty\}$, its convex conjugate $f^*: V^* \to \mathbb{R} \cup \{+\infty\}$ is defined as

$$f^*(g) = \sup_{x \in V}\, \{ \langle g, x \rangle - f(x) \} \quad \forall g \in V^*$$

For a differentiable function $f: V \to \mathbb{R}$, Taylor expansion shows that the derivative $\nabla f(x)$ is a linear functional for each $x \in V$, and therefore lies in the dual space $V^*$. If $f^*: V^* \to \mathbb{R}$ is the convex conjugate of $f$, then likewise $\nabla f^*(g)$ is an element of $V^{**}$ for each $g \in V^*$. Since there exists a natural isomorphism $\Psi: V \to V^{**}$, the evaluation map, this in particular gives us (for convex differentiable $f$) $\Psi(x) = \hat{x} = \nabla f^*(\nabla f(x)) \in V^{**}$.
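As a concrete sanity check, take $V = \mathbb{R}$, so that $V^* \cong \mathbb{R}$ and the dual pairing is ordinary multiplication. For $f(x) = x^2/2$ the conjugate is $f^*(g) = g^2/2$, since the supremum of $gx - x^2/2$ is attained at $x = g$. The sketch below approximates the supremum on a finite grid (the grid bounds and test points are illustrative assumptions):

```python
import numpy as np

xs = np.linspace(-10, 10, 200001)   # grid standing in for V = R

def conjugate(f, g):
    # f*(g) = sup_x { g*x - f(x) }, approximated over the grid
    return np.max(g * xs - f(xs))

f = lambda x: 0.5 * x**2
for g in [-2.0, 0.0, 1.5]:
    # conjugate of x^2/2 is g^2/2
    assert abs(conjugate(f, g) - 0.5 * g**2) < 1e-6
```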
