
Convex Optimization - Programming Problem



There are four types of convex programming problems −

Step 1 − $\min f(x)$, where $x \in S$ and $S$ is a non-empty convex set in $\mathbb{R}^n$ and $f(x)$ is a convex function.

Step 2 − $\min f(x), x \in \mathbb{R}^n$ subject to

$g_i(x) \leq 0, 1 \leq i \leq m_1$ and $g_i(x)$ is a convex function.

$g_i(x) \geq 0, m_1+1 \leq i \leq m_2$ and $g_i(x)$ is a concave function.

$g_i(x) = 0, m_2+1 \leq i \leq m$ and $g_i(x)$ is a linear function.

where $f(x)$ is a convex function.

Step 3 − $\max f(x)$, where $x \in S$ and $S$ is a non-empty convex set in $\mathbb{R}^n$ and $f(x)$ is a concave function.

Step 4 − $\max f(x), x \in \mathbb{R}^n$ subject to

$g_i(x) \leq 0, 1 \leq i \leq m_1$ and $g_i(x)$ is a convex function.

$g_i(x) \geq 0, m_1+1 \leq i \leq m_2$ and $g_i(x)$ is a concave function.

$g_i(x) = 0, m_2+1 \leq i \leq m$ and $g_i(x)$ is a linear function.

where $f(x)$ is a concave function.
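As a minimal numeric sketch of the Step 1 form (the objective, set and step size below are illustrative choices, not from the text), the convex function $f(x)=(x-3)^2$ can be minimized over the convex set $S=[0,1]$ by projected gradient descent:

```python
# Minimal sketch: minimize the convex f(x) = (x - 3)^2 over the convex
# set S = [0, 1] by projected gradient descent (Step 1 form above).
# All names and the step size here are illustrative assumptions.

def project(x, lo=0.0, hi=1.0):
    """Project a point onto the convex set S = [lo, hi]."""
    return max(lo, min(hi, x))

def projected_gradient_descent(x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        grad = 2.0 * (x - 3.0)        # f'(x) for f(x) = (x - 3)^2
        x = project(x - step * grad)  # gradient step, then project onto S
    return x

x_star = projected_gradient_descent(0.5)
# The unconstrained minimizer x = 3 lies outside S, so the constrained
# minimizer is the boundary point x = 1.
```

Because $f$ is convex and $S$ is convex, any point this scheme settles at is a global minimizer over $S$; here it reaches the boundary point $x=1$.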

Cone of feasible direction

Let $S$ be a non-empty set in $\mathbb{R}^n$ and let $\hat{x} \in Closure(S)$. Then the cone of feasible directions of $S$ at $\hat{x}$, denoted by $D$, is defined as $D=\{d:d \neq 0, \hat{x}+\lambda d \in S, \lambda \in (0, \delta), \delta > 0\}$

Each non-zero vector $d \in D$ is called a feasible direction.

For a given function $f:\mathbb{R}^n \rightarrow \mathbb{R}$, the cone of improving directions at $\hat{x}$ is denoted by $F$ and is given by

$F=\{d:f(\hat{x}+\lambda d) < f(\hat{x}), \forall \lambda \in (0,\delta), \delta > 0\}$

Each direction $d \in F$ is called an improving direction or descent direction of $f$ at $\hat{x}$.
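The two cones above can be probed numerically. In the sketch below (the set $S$, point $\hat{x}$ and function $f$ are assumed examples, not from the text), $S$ is the non-negative quadrant, $\hat{x}=(0,0)$, and $f(x)=x_1+x_2$; membership in $D$ and $F$ is checked by sampling small $\lambda \in (0,\delta)$:

```python
# Numeric check of feasible and improving directions (assumed example):
# S = {x in R^2 : x1 >= 0, x2 >= 0}, x_hat = (0, 0), f(x) = x1 + x2.

def in_S(x):
    return x[0] >= 0 and x[1] >= 0

def is_feasible_direction(x_hat, d, delta=1e-3, samples=10):
    """Check x_hat + lam*d in S for several lam in (0, delta]."""
    return all(
        in_S([x_hat[i] + lam * d[i] for i in range(2)])
        for lam in [delta * k / samples for k in range(1, samples + 1)]
    )

def is_improving_direction(f, x_hat, d, delta=1e-3, samples=10):
    """Check f(x_hat + lam*d) < f(x_hat) for several lam in (0, delta]."""
    fx = f(x_hat)
    return all(
        f([x_hat[i] + lam * d[i] for i in range(2)]) < fx
        for lam in [delta * k / samples for k in range(1, samples + 1)]
    )

f = lambda x: x[0] + x[1]
x_hat = [0.0, 0.0]
```

For example, $d=(1,1)$ is feasible but not improving, while $d=(-1,-1)$ is improving but not feasible — consistent with $\hat{x}=(0,0)$ being the minimizer of $f$ over $S$.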

Theorem

Necessary Condition

Consider the problem $\min f(x)$ such that $x \in S$, where $S$ is a non-empty set in $\mathbb{R}^n$. Suppose $f$ is differentiable at a point $\hat{x} \in S$. If $\hat{x}$ is a local optimal solution, then $F_0 \cap D=\phi$, where $F_0=\{d:\nabla f(\hat{x})^T d < 0\}$ and $D$ is the cone of feasible directions.

Sufficient Condition

If $F_0 \cap D=\phi$, $f$ is a pseudoconvex function at $\hat{x}$, and there exists a neighbourhood of $\hat{x}$, $N_\varepsilon(\hat{x}), \varepsilon > 0$, such that $d=x-\hat{x} \in D$ for any $x \in S \cap N_\varepsilon(\hat{x})$, then $\hat{x}$ is a local optimal solution.

Proof

Necessary Condition

Let $F_0 \cap D \neq \phi$, i.e., there exists $d \in F_0 \cap D$ such that $d \in F_0$ and $d \in D$.

Since $d \in D$, there exists $\delta_1 > 0$ such that $\hat{x}+\lambda d \in S, \lambda \in (0, \delta_1)$.

Since $d \in F_0$, therefore $\nabla f(\hat{x})^T d < 0$

Thus, there exists $\delta_2 > 0$ such that $f(\hat{x}+\lambda d) < f(\hat{x}), \forall \lambda \in (0, \delta_2)$

Let $\delta=\min\{\delta_1, \delta_2\}$

Then $\hat{x}+\lambda d \in S, f(\hat{x}+\lambda d) < f(\hat{x}), \forall \lambda \in (0, \delta)$

But $\hat{x}$ is a local optimal solution.

Hence we have a contradiction.

Thus $F_0 \cap D=\phi$

Sufficient Condition

Let $F_0 \cap D=\phi$ and let $f$ be a pseudoconvex function.

Let there exist a neighbourhood of $\hat{x}$, $N_\varepsilon(\hat{x})$, such that $d=x-\hat{x} \in D$ for any $x \in S \cap N_\varepsilon(\hat{x})$

Suppose $\hat{x}$ is not a local optimal solution.

Thus, there exists $\bar{x} \in S \cap N_\varepsilon(\hat{x})$ such that $f(\bar{x}) < f(\hat{x})$

By the assumption on $S \cap N_\varepsilon(\hat{x})$, $d=(\bar{x}-\hat{x}) \in D$

By pseudoconvexity of $f$,

$f(\hat{x}) > f(\bar{x}) \Rightarrow \nabla f(\hat{x})^T(\bar{x}-\hat{x}) < 0$

$\Rightarrow \nabla f(\hat{x})^T d < 0$

$\Rightarrow d \in F_0$

Hence $F_0 \cap D \neq \phi$,

which is a contradiction.

Hence, $\hat{x}$ is a local optimal solution.
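The necessary condition above can be illustrated on a one-dimensional instance (the function, set and candidate directions below are my assumed example): for $\min f(x)=(x-3)^2$ over $S=[0,1]$, the local optimum is $\hat{x}=1$, where $F_0=\{d:f'(\hat{x})d<0\}=\{d>0\}$ while $D=\{d<0\}$, so their intersection is empty:

```python
# Illustration of the necessary condition (assumed 1-D example):
# min f(x) = (x - 3)^2 over S = [0, 1]; the local optimum is x_hat = 1.

def grad_f(x):
    return 2.0 * (x - 3.0)   # f'(x)

def in_F0(x_hat, d):
    """d in F0 iff f'(x_hat) * d < 0."""
    return grad_f(x_hat) * d < 0

def in_D(x_hat, d, delta=1e-3):
    """d is feasible if x_hat + lam*d stays in [0, 1] for small lam > 0."""
    if d == 0:
        return False
    lam = delta / 2
    return 0.0 <= x_hat + lam * d <= 1.0

x_hat = 1.0
# f'(1) = -4, so F0 = {d > 0}; feasible directions at the right
# endpoint are D = {d < 0}.
candidates = [-1.0, -0.5, 0.5, 1.0]
intersection = [d for d in candidates if in_F0(x_hat, d) and in_D(x_hat, d)]
# intersection is empty, consistent with F0 ∩ D = φ at a local optimum
```

The check uses only a coarse sample of directions, but in one dimension the sign of $d$ fully determines membership in both cones.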

Consider the following problem: $\min f(x)$ where $x \in X$ such that $g_i(x) \leq 0, i=1,2,...,m$

$f:X \rightarrow \mathbb{R}, g_i:X \rightarrow \mathbb{R}$, and $X$ is an open set in $\mathbb{R}^n$

Let $S=\{x:g_i(x) \leq 0, \forall i\}$

Let $\hat{x} \in X$, and let $M=\{1,2,...,m\}$

Let $I=\{i:g_i(\hat{x})=0, i \in M\}$, where $I$ is called the index set of all active constraints at $\hat{x}$

Let $J=\{i:g_i(\hat{x})<0, i \in M\}$, where $J$ is called the index set of all inactive constraints at $\hat{x}$.

Let $F_0=\{d \in \mathbb{R}^n:\nabla f(\hat{x})^T d < 0\}$

Let $G_0=\{d \in \mathbb{R}^n:\nabla g_I(\hat{x})^T d < 0\}$

or $G_0=\{d \in \mathbb{R}^n:\nabla g_i(\hat{x})^T d < 0, \forall i \in I\}$
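As a concrete illustration of the index sets and of $G_0$ (the constraints and the point $\hat{x}$ below are my assumed example, not from the text), take $g_1(x)=x_1^2+x_2^2-1 \leq 0$ and $g_2(x)=-x_1 \leq 0$ at $\hat{x}=(1,0)$, where only $g_1$ is active:

```python
# Assumed example: g1(x) = x1^2 + x2^2 - 1 <= 0 and g2(x) = -x1 <= 0,
# evaluated at x_hat = (1, 0), where g1 is active and g2 is inactive.

def g1(x): return x[0] ** 2 + x[1] ** 2 - 1.0
def g2(x): return -x[0]

def grad_g1(x): return [2.0 * x[0], 2.0 * x[1]]
def grad_g2(x): return [-1.0, 0.0]

constraints = [(g1, grad_g1), (g2, grad_g2)]
x_hat = [1.0, 0.0]

# Index sets: I collects active constraints (g_i(x_hat) = 0),
# J collects inactive ones (g_i(x_hat) < 0).
I = [i for i, (g, _) in enumerate(constraints) if abs(g(x_hat)) < 1e-12]
J = [i for i, (g, _) in enumerate(constraints) if g(x_hat) < -1e-12]

def in_G0(d):
    """d in G0 iff grad g_i(x_hat)^T d < 0 for every active index i."""
    return all(
        sum(gi * di for gi, di in zip(constraints[i][1](x_hat), d)) < 0
        for i in I
    )
```

Here $\nabla g_1(\hat{x})=(2,0)$, so $G_0=\{d:d_1<0\}$: directions pointing back into the disk; the inactive constraint $g_2$ places no restriction on $G_0$.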

Lemma

Let $S=\{x \in X:g_i(x) \leq 0, \forall i \in I\}$ and let $X$ be a non-empty open set in $\mathbb{R}^n$. Let $\hat{x} \in S$, let $g_i$ be differentiable at $\hat{x}, \forall i \in I$, and let $g_i$, where $i \in J$, be continuous at $\hat{x}$. Then $G_0 \subseteq D$.

Proof

Let $d \in G_0$

Since $\hat{x} \in X$ and $X$ is an open set, there exists $\delta_1 > 0$ such that $\hat{x}+\lambda d \in X$ for $\lambda \in (0, \delta_1)$

Also, since $g_i(\hat{x}) < 0$ and $g_i$ are continuous at $\hat{x}, \forall i \in J$, there exists $\delta_2 > 0$ such that $g_i(\hat{x}+\lambda d) < 0, \forall \lambda \in (0, \delta_2), \forall i \in J$

Since $d \in G_0$, therefore $\nabla g_i(\hat{x})^T d < 0, \forall i \in I$; thus there exists $\delta_3 > 0$ such that $g_i(\hat{x}+\lambda d) < g_i(\hat{x})=0$, for $\lambda \in (0, \delta_3), \forall i \in I$

Let $\delta=\min\{\delta_1, \delta_2, \delta_3\}$

Therefore, $\hat{x}+\lambda d \in X, g_i(\hat{x}+\lambda d) < 0, \forall i \in M$, for $\lambda \in (0, \delta)$

$\Rightarrow \hat{x}+\lambda d \in S$

$\Rightarrow d \in D$

$\Rightarrow G_0 \subseteq D$

Hence Proved.

Theorem

Necessary Condition

Let $f$ and $g_i, i \in I$, be differentiable at $\hat{x} \in S$, and let $g_j, j \in J$, be continuous at $\hat{x} \in S$. If $\hat{x}$ is a local minimum of $f$ over $S$, then $F_0 \cap G_0=\phi$.

Sufficient Condition

If $F_0 \cap G_0=\phi$, $f$ is a pseudoconvex function at $\hat{x}$, and $g_i, i \in I$ are strictly pseudoconvex functions over some $\varepsilon$-neighbourhood of $\hat{x}$, then $\hat{x}$ is a local optimal solution.
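The necessary condition $F_0 \cap G_0=\phi$ can be checked numerically on a small instance (the objective, constraint and grid of sampled directions below are my assumed example): minimize $f(x)=-x_1$ over the disk $g(x)=x_1^2+x_2^2-1 \leq 0$, whose optimum is $\hat{x}=(1,0)$ with the constraint active:

```python
# Assumed example: min f(x) = -x1 over g(x) = x1^2 + x2^2 - 1 <= 0.
# The optimum is x_hat = (1, 0), where g is active.
# F0 = {d : grad f^T d < 0} = {d : d1 > 0}
# G0 = {d : grad g^T d < 0} = {d : d1 < 0}

x_hat = [1.0, 0.0]
grad_f = [-1.0, 0.0]   # gradient of f(x) = -x1
grad_g = [2.0, 0.0]    # gradient of g at x_hat

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Sample directions on a coarse grid and look for one in F0 ∩ G0.
grid = [[a / 4.0, b / 4.0] for a in range(-4, 5) for b in range(-4, 5)]
F0_and_G0 = [d for d in grid if dot(grad_f, d) < 0 and dot(grad_g, d) < 0]
# empty: consistent with F0 ∩ G0 = φ at a local minimum
```

Membership in $F_0$ forces $d_1 > 0$ while membership in $G_0$ forces $d_1 < 0$, so no sampled (or indeed any) direction lies in both cones.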

Remarks

  • Let $\hat{x}$ be a feasible point such that $\nabla f(\hat{x})=0$, then $F_0=\phi$. Thus, $F_0 \cap G_0=\phi$, but $\hat{x}$ need not be an optimal solution.

  • But if $\nabla g(\hat{x})=0$, then $G_0=\phi$, thus $F_0 \cap G_0=\phi$

  • Consider the problem: $\min f(x)$ such that $g(x)=0$.

    Since $g(x)=0$, we can write $g_1(x)=g(x) \leq 0$ and $g_2(x)=-g(x) \leq 0$.

    Let $\hat{x} \in S$, then $g_1(\hat{x})=0$ and $g_2(\hat{x})=0$.

    But $\nabla g_1(\hat{x})=-\nabla g_2(\hat{x})$

    Thus, $G_0=\phi$ and $F_0 \cap G_0=\phi$.
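The last remark can be verified numerically. In the sketch below the equality constraint $g(x)=x_1+x_2=0$ is an assumed example: splitting it into $g_1=g \leq 0$ and $g_2=-g \leq 0$ makes both constraints active at every feasible point, and since $\nabla g_1=-\nabla g_2$, no direction can make both gradient products negative:

```python
# Numeric check of the remark (g(x) = x1 + x2 = 0 is an assumed example):
# g1 = g <= 0 and g2 = -g <= 0 are both active on S, and
# grad g1 = -grad g2, so G0 must be empty.

grad_g1 = [1.0, 1.0]    # gradient of g1(x) = x1 + x2
grad_g2 = [-1.0, -1.0]  # gradient of g2(x) = -(x1 + x2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def in_G0(d):
    """d in G0 iff grad g_i^T d < 0 for both active constraints."""
    return dot(grad_g1, d) < 0 and dot(grad_g2, d) < 0

# Sample directions on a coarse grid: none can lie in G0, since
# dot(grad_g1, d) = -dot(grad_g2, d) for every d.
grid = [[a / 4.0, b / 4.0] for a in range(-4, 5) for b in range(-4, 5)]
G0_members = [d for d in grid if in_G0(d)]
```

The grid search is only a spot check, but the identity $\nabla g_1^T d = -\nabla g_2^T d$ shows $G_0=\phi$ for every direction, not just the sampled ones.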