There are four types of convex programming problems −
Type 1 − $\min f(x)$, where $x \in S$, $S$ is a non-empty convex set in $\mathbb{R}^n$, and $f(x)$ is a convex function.
Type 2 − $\min f(x)$, $x \in \mathbb{R}^n$, subject to
$g_i(x) \geq 0$, $1 \leq i \leq m_1$, where $g_i(x)$ is a concave function.
$g_i(x) \leq 0$, $m_1+1 \leq i \leq m_2$, where $g_i(x)$ is a convex function.
$g_i(x)=0$, $m_2+1 \leq i \leq m$, where $g_i(x)$ is a linear function.
where $f(x)$ is a convex function.
Type 3 − $\max f(x)$, where $x \in S$, $S$ is a non-empty convex set in $\mathbb{R}^n$, and $f(x)$ is a concave function.
Type 4 − $\max f(x)$, where $x \in \mathbb{R}^n$, subject to
$g_i(x) \geq 0$, $1 \leq i \leq m_1$, where $g_i(x)$ is a concave function.
$g_i(x) \leq 0$, $m_1+1 \leq i \leq m_2$, where $g_i(x)$ is a convex function.
$g_i(x)=0$, $m_2+1 \leq i \leq m$, where $g_i(x)$ is a linear function.
where $f(x)$ is a concave function.
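To make Type 2 concrete, here is a minimal numerical sketch using SciPy's scipy.optimize.minimize; the specific objective, constraint, and starting point are illustrative choices, not taken from the text.

```python
# A minimal sketch of a Type 2 instance (illustrative choices, not from the text):
#   minimize  f(x) = x1^2 + x2^2          (convex objective)
#   subject to g1(x) = x1 + x2 - 1 >= 0   (linear, hence concave)
import numpy as np
from scipy.optimize import minimize

f  = lambda x: x[0]**2 + x[1]**2           # convex objective
g1 = lambda x: x[0] + x[1] - 1.0           # SciPy's "ineq" type means g1(x) >= 0

res = minimize(f, x0=np.array([2.0, 0.0]),
               constraints=[{"type": "ineq", "fun": g1}])
print(res.x)                               # approximately (0.5, 0.5)
```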
Let $S$ be a non-empty set in $\mathbb{R}^n$ and let $\hat{x} \in Closure(S)$; then the cone of feasible directions of $S$ at $\hat{x}$, denoted by $D$, is defined as $D=\{d: d \neq 0, \hat{x}+\lambda d \in S, \forall \lambda \in (0, \delta), \text{ for some } \delta > 0\}$
Each non-zero vector $d \in D$ is called a feasible direction.
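For instance (an illustrative example, not from the text), take $S=\{x \in \mathbb{R}^2: x_1 \geq 0, x_2 \geq 0\}$ and $\hat{x}=(0,0)$. Then $\hat{x}+\lambda d \in S$ for all small $\lambda > 0$ precisely when $d_1 \geq 0$ and $d_2 \geq 0$, so $D=\{d \neq 0: d_1 \geq 0, d_2 \geq 0\}$.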
For a given function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, the cone of improving directions at $\hat{x}$ is denoted by $F$ and is given by
$F=\{d: f(\hat{x}+\lambda d) < f(\hat{x}), \forall \lambda \in (0, \delta), \text{ for some } \delta > 0\}$
Each direction $d \in F$ is called an improving direction or a descent direction of $f$ at $\hat{x}$.
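As an illustration (not from the text), let $f(x)=x_1^2+x_2^2$ and $\hat{x}=(1,0)$. Then $f(\hat{x}+\lambda d)=1+2\lambda d_1+\lambda^2(d_1^2+d_2^2)$, which is less than $f(\hat{x})=1$ for all sufficiently small $\lambda > 0$ whenever $d_1 < 0$; every such $d$ is therefore an improving (descent) direction of $f$ at $\hat{x}$.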
Necessary Condition
Consider the problem $\min f(x)$ such that $x \in S$, where $S$ is a non-empty set in $\mathbb{R}^n$. Suppose $f$ is differentiable at a point $\hat{x} \in S$. If $\hat{x}$ is a local optimal solution, then $F_0 \cap D= \emptyset$, where $F_0=\{d: \nabla f(\hat{x})^T d < 0\}$ and $D$ is the cone of feasible directions.
Sufficient Condition
If $F_0 \cap D= \emptyset$, $f$ is a pseudoconvex function at $\hat{x}$, and there exists a neighbourhood of $\hat{x}$, $N_\varepsilon(\hat{x}), \varepsilon > 0$, such that $d=x-\hat{x} \in D$ for any $x \in S \cap N_\varepsilon(\hat{x})$, then $\hat{x}$ is a local optimal solution.
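As a concrete check of the necessary condition (an illustrative instance, not from the text), consider $\min x_1^2+x_2^2$ subject to $x_1+x_2 \geq 1$, whose optimal solution is $\hat{x}=(0.5,0.5)$. Here $\nabla f(\hat{x})=(1,1)$, so $F_0=\{d: d_1+d_2 < 0\}$, while any feasible direction at $\hat{x}$ must keep $x_1+x_2 \geq 1$ and hence satisfies $d_1+d_2 \geq 0$. Therefore $F_0 \cap D= \emptyset$, exactly as the theorem predicts.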
Proof of Necessary Condition
Suppose $F_0 \cap D \neq \emptyset$, i.e., there exists a $d \in F_0 \cap D$ such that $d \in F_0$ and $d \in D$.
Since $d \in D$, there exists $\delta_1 > 0$ such that $\hat{x}+\lambda d \in S$ for $\lambda \in (0, \delta_1)$.
Since $d \in F_0$, we have $\nabla f(\hat{x})^T d < 0$.
Thus, there exists $\delta_2 > 0$ such that $f(\hat{x}+\lambda d) < f(\hat{x})$ for all $\lambda \in (0, \delta_2)$.
Let $\delta= \min\{\delta_1, \delta_2\}$.
Then $\hat{x}+\lambda d \in S$ and $f(\hat{x}+\lambda d) < f(\hat{x})$ for all $\lambda \in (0, \delta)$.
This contradicts the assumption that $\hat{x}$ is a local optimal solution.
Thus $F_0 \cap D= \emptyset$.
Proof of Sufficient Condition
Suppose $F_0 \cap D= \emptyset$, let $f$ be a pseudoconvex function at $\hat{x}$,
and let there exist a neighbourhood of $\hat{x}$, $N_\varepsilon(\hat{x})$, such that $d=x-\hat{x} \in D$ for all $x \in S \cap N_\varepsilon(\hat{x})$.
Suppose $\hat{x}$ is not a local optimal solution.
Then there exists $\bar{x} \in S \cap N_\varepsilon(\hat{x})$ such that $f(\bar{x}) < f(\hat{x})$.
By the assumption on $S \cap N_\varepsilon(\hat{x})$, $d=(\bar{x}-\hat{x}) \in D$.
By pseudoconvexity of $f$,
$f(\hat{x}) > f(\bar{x}) \Rightarrow \nabla f(\hat{x})^T(\bar{x}-\hat{x}) < 0$
$\Rightarrow \nabla f(\hat{x})^T d < 0$
$\Rightarrow d \in F_0$
Hence $F_0 \cap D \neq \emptyset$,
which is a contradiction.
Hence, $\hat{x}$ is a local optimal solution.
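As an example of the pseudoconvexity hypothesis (not from the text), $f(x)=x+x^3$ on $\mathbb{R}$ is pseudoconvex even though it is not convex: $f'(x)=1+3x^2 > 0$, so $f'(x)(y-x) \geq 0$ forces $y \geq x$, and since $f$ is increasing this gives $f(y) \geq f(x)$. Every differentiable convex function is pseudoconvex as well.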
Consider the following problem: $\min f(x)$, where $x \in X$, such that $g_i(x) \leq 0, i=1,2,...,m$,
$f: X \rightarrow \mathbb{R}$, $g_i: X \rightarrow \mathbb{R}$, and $X$ is an open set in $\mathbb{R}^n$.
Let $S=\{x: g_i(x) \leq 0, \forall i\}$.
Let $\hat{x} \in X$, and let $M=\{1,2,...,m\}$.
Let $I=\{i: g_i(\hat{x})=0, i \in M\}$, where $I$ is called the index set of all active constraints at $\hat{x}$.
Let $J=\{i: g_i(\hat{x}) < 0, i \in M\}$, where $J$ is called the index set of all inactive constraints at $\hat{x}$.
Let $F_0=\{d \in \mathbb{R}^n: \nabla f(\hat{x})^T d < 0\}$
Let $G_0=\{d \in \mathbb{R}^n: \nabla g_I(\hat{x})^T d < 0\}$
or $G_0=\{d \in \mathbb{R}^n: \nabla g_i(\hat{x})^T d < 0, \forall i \in I\}$
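The following small sketch shows how these index sets and cones can be checked numerically at a given point; the functions, the point $\hat{x}$, and the test direction $d$ are illustrative choices, not from the text (Python indices are 0-based, whereas the text numbers constraints from 1).

```python
# Classify constraints as active (I) or inactive (J) at x_hat, and test
# whether a direction d lies in F0 and in G0 via the gradient inequalities.
# All functions and points below are illustrative, not from the text.
import numpy as np

grad_f = lambda x: np.array([2 * x[0], 2 * x[1]])   # gradient of f(x) = x1^2 + x2^2
g      = [lambda x: 1.0 - x[0] - x[1],              # g1(x) <= 0
          lambda x: -x[0]]                          # g2(x) <= 0
grad_g = [lambda x: np.array([-1.0, -1.0]),
          lambda x: np.array([-1.0, 0.0])]

x_hat = np.array([0.5, 0.5])
tol = 1e-9
I = [i for i, gi in enumerate(g) if abs(gi(x_hat)) <= tol]   # active constraints
J = [i for i, gi in enumerate(g) if gi(x_hat) < -tol]        # inactive constraints

d = np.array([-1.0, 0.5])
in_F0 = grad_f(x_hat) @ d < 0                                # grad f(x_hat)^T d < 0 ?
in_G0 = all(grad_g[i](x_hat) @ d < 0 for i in I)             # grad g_i(x_hat)^T d < 0 for i in I ?
print("I =", I, "J =", J, "d in F0:", in_F0, "d in G0:", in_G0)
```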
Let $S=\{x \in X: g_i(x) \leq 0, \forall i \in M\}$, where $X$ is a non-empty open set in $\mathbb{R}^n$. Let $\hat{x} \in S$, and suppose $g_i, i \in I$, are differentiable at $\hat{x}$ and $g_i, i \in J$, are continuous at $\hat{x}$. Then $G_0 \subseteq D$.
Let $d \in G_0$.
Since $\hat{x} \in X$ and $X$ is an open set, there exists $\delta_1 > 0$ such that $\hat{x}+\lambda d \in X$ for $\lambda \in (0, \delta_1)$.
Also, since $g_i(\hat{x}) < 0$ and $g_i$ are continuous at $\hat{x}$ for all $i \in J$, there exists $\delta_2 > 0$ such that $g_i(\hat{x}+\lambda d) < 0$ for $\lambda \in (0, \delta_2)$, $i \in J$.
Since $d \in G_0$, we have $\nabla g_i(\hat{x})^T d < 0, \forall i \in I$; thus there exists $\delta_3 > 0$ such that $g_i(\hat{x}+\lambda d) < g_i(\hat{x})=0$ for $\lambda \in (0, \delta_3)$, $i \in I$.
Let $\delta= \min\{\delta_1, \delta_2, \delta_3\}$.
Therefore, $\hat{x}+\lambda d \in X$ and $g_i(\hat{x}+\lambda d) < 0, i \in M$, for $\lambda \in (0, \delta)$
$\Rightarrow \hat{x}+\lambda d \in S$
$\Rightarrow d \in D$
$\Rightarrow G_0 \subseteq D$
Hence proved.
Necessary Condition
Let $f$ and $g_i, i \in I$, be differentiable at $\hat{x} \in S$, and let $g_i, i \in J$, be continuous at $\hat{x} \in S$. If $\hat{x}$ is a local minimum of $f$ over $S$, then $F_0 \cap G_0= \emptyset$.
Sufficient Condition
If $F_0 \cap G_0= \emptyset$, $f$ is a pseudoconvex function at $\hat{x}$, and $g_i, i \in I$, are strictly pseudoconvex functions over some $\varepsilon$-neighbourhood of $\hat{x}$, then $\hat{x}$ is a local optimal solution.
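Continuing the illustrative instance used earlier (not from the text), for $\min x_1^2+x_2^2$ subject to $g_1(x)=1-x_1-x_2 \leq 0$ at $\hat{x}=(0.5,0.5)$ we have $I=\{1\}$, $\nabla f(\hat{x})=(1,1)$ and $\nabla g_1(\hat{x})=(-1,-1)$, so $F_0=\{d: d_1+d_2 < 0\}$ and $G_0=\{d: d_1+d_2 > 0\}$. These cones are disjoint, so $F_0 \cap G_0= \emptyset$, consistent with $\hat{x}$ being the minimizer.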
Let $\hat{x}$ be a feasible point such that $\nabla f(\hat{x})=0$; then $F_0= \emptyset$. Thus, $F_0 \cap G_0= \emptyset$ even though $\hat{x}$ need not be an optimal solution.
Similarly, if $\nabla g_i(\hat{x})=0$ for some active constraint, then $G_0= \emptyset$, and thus $F_0 \cap G_0= \emptyset$.
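For example (an illustrative case, not from the text), consider $\min f(x)=-x^2$ subject to $x-1 \leq 0$ and $-x-1 \leq 0$, and take $\hat{x}=0$. Both constraints are inactive and $\nabla f(0)=0$, so $F_0= \emptyset$ and hence $F_0 \cap G_0= \emptyset$; yet $\hat{x}=0$ is not optimal, since the minima lie at $x= \pm 1$.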
Consider the problem: $\min f(x)$ such that $g(x)=0$.
Since $g(x)=0$, we can write $g_1(x)=g(x) \leq 0$ and $g_2(x)=-g(x) \leq 0$.
Let $\hat{x} \in S$; then $g_1(\hat{x})=0$ and $g_2(\hat{x})=0$.
But $\nabla g_1(\hat{x})=-\nabla g_2(\hat{x})$.
Thus, $G_0= \emptyset$ and $F_0 \cap G_0= \emptyset$, irrespective of the objective $f$.
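For instance (an illustrative choice, not from the text), take $g(x)=x_1+x_2-1$. Then $\nabla g_1(\hat{x})=(1,1)$ and $\nabla g_2(\hat{x})=(-1,-1)$, so $d \in G_0$ would require $d_1+d_2 < 0$ and $d_1+d_2 > 0$ simultaneously, which is impossible; hence $G_0= \emptyset$ at every feasible $\hat{x}$.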