
\input macros.tex
\documentstyle[12pt]{manual}
\pagestyle{myheadings}
\markboth{User's Guide for FSQP}{User's Guide for FSQP}
\renewcommand{\baselinestretch}{1.08} % more interline spacing
 \textheight=8.3in
\topmargin=-.2in
\textwidth=6.5in
\oddsidemargin=-.15cm
\tolerance=1000 % to avoid overfull boxes
 \pagenumbering{arabic}
\begin{document}
\thispagestyle{empty}
\begin{titlepage}
\begin{center}
{\large \bf User's Guide for FSQP Version 3.2:\\
\vspace{1mm}
    A FORTRAN Code for Solving Constrained Nonlinear \\
\vspace{1mm} 
    (Minimax) Optimization Problems, Generating Iterates \\
\vspace{1mm}
    Satisfying All Inequality and Linear Constraints\footnote{
This research was supported in part by NSF's Engineering Research Centers 
Program No. NSFD-CDR-88-03012, by NSF grant No. DMC-88-15996 and by a grant
from the Westinghouse Corporation.}}\\
\vspace{4mm}
          {\it Jian L. Zhou and Andr\'{e} L. Tits} \\
\vspace{4mm}
    Electrical Engineering Department\\
            and\\
    Institute for Systems Research\\
    University of Maryland, College Park, MD 20742\\
     (Systems Research Center TR-92-107r2)
\end{center}
\vspace{3mm}    
\noindent{\bf Abstract}
\vspace{1em}    

\hspace{4mm}FSQP 3.2 is a set of FORTRAN subroutines
for the minimization of the maximum of a set of smooth 
objective functions (possibly a single one) subject to 
general smooth constraints.
If the initial guess provided by the user is infeasible for
some inequality constraint or some linear equality constraint, FSQP first 
generates a feasible point for these constraints; 
subsequently the successive iterates generated by
FSQP all satisfy these constraints. Nonlinear equality constraints
are turned into inequality constraints (to be satisfied by all iterates)
and the maximum of the objective functions is replaced 
by an exact penalty function which
penalizes nonlinear equality constraint violations only. 
The user has the option of either
requiring that the (modified) objective function decrease 
at each iteration after feasibility for nonlinear inequality and
linear constraints has been reached (monotone line search), or
requiring a decrease within at most four iterations (nonmonotone line search).
He/She must provide subroutines that define the objective functions
and constraint functions and may either provide subroutines
to compute the gradients of these functions or require that FSQP
estimate them by forward finite differences.

\hspace{4mm} FSQP 3.2 implements two algorithms based on Sequential
Quadratic Programming~(SQP),~modified so as to generate
feasible iterates. In the first one (monotone line search), a certain 
Armijo-type arc search is used, with the property that the step of one
is eventually accepted, a requirement for superlinear convergence.
In the second one the same effect is achieved by means 
of a (nonmonotone) search along a straight line. The merit function
used in both searches is the maximum of the objective functions if
there is no nonlinear equality constraint.
\end{titlepage}

\begin{titlepage}
\centerline{\bf Conditions for External Use}
\bigskip
\begin{enumerate}
\item The FSQP routines may not be distributed to third parties. 
      Interested parties should contact the authors directly.
\item If modifications are performed on the routines, these
      modifications will be communicated to the authors. 
      The modified routines will remain
      the sole property of the authors.
\item Due acknowledgment must be made of the use of the FSQP routines in
      research reports or publications. A copy of such reports or 
      publications should be forwarded to the authors.
\item The FSQP routines may not be used in industrial production,
      unless this has been agreed upon with the authors in writing.
\end{enumerate}

\bigskip\noindent
{\bf User's Guide for FSQP Version 3.2 (Released March 1993)} \\
Copyright {\copyright} 1989 --- 1993 by Jian L. Zhou and Andr\'e L. Tits\\
All Rights Reserved.
%Copyright {\copyright} 1993, University of Maryland at College Park.
%All Rights Reserved. \\
%(Developed by Jian L. Zhou and Andr\'e L. Tits.)

\bigskip
\bigskip
\noindent Enquiries should be directed to 

\bigskip
\hspace{5em}Prof. Andr\'e L. Tits

\hspace{5em}Electrical Engineering Dept.

\hspace{5em}and Institute for Systems Research

\hspace{5em}University of Maryland

\hspace{5em}College Park, Md 20742

\hspace{5em}U. S. A.

\smallskip
\hspace{5em}Phone$\,\,$:~~~301-405-3669

\hspace{5em}Fax~~~$\,\;$:~~~301-405-6707

\hspace{5em}E-mail$\,$:~~~andre@src.umd.edu
\end{titlepage}

%\begin{titlepage}
\tableofcontents
%\end{titlepage}

\newpage
\section{Introduction}
\label{intro}
FSQP~(Feasible Sequential Quadratic Programming) 3.2
is a set of FORTRAN subroutines
for the minimization of the maximum of a set of smooth 
objective functions (possibly a single one) subject to 
nonlinear equality and inequality constraints, 
linear equality and inequality constraints, 
and simple bounds on the variables. Specifically, FSQP
tackles optimization problems of the form
\smallskip
$$
  (P)~~~~~~ \min ~ \max\limits_{i\in I^f} \{f_i(x)\} 
                             \mbox{~~~s.t.~~}x\in X
$$
where $I^f=\{1,\ldots,n_f\}$ and $X$ is the set of points $x\in R^n$ 
satisfying
$$\begin{array}{l}
      bl \leq x \leq bu  \\
      g_j(x) \leq 0,~~~j=1,\ldots,n_i\\
      g_j(x)\equiv \langle c_{j-n_i},x\rangle - d_{j-n_i} \leq 0, 
      ~~~j=n_i+1,\ldots,t_i \\
      h_j(x)=0,~~~j=1,\ldots,n_e\\
      h_j(x)\equiv\langle a_{j-n_e},x \rangle-b_{j-n_e}=0, ~~~j=n_e+1,\ldots,t_e
\end{array}$$
with $bl\in R^n$; $bu\in R^n$; 
$f_i:R^n\rightarrow R,$ $i=1,\ldots,n_f$ smooth;
$g_j:R^n\rightarrow R,~j=1,\ldots,n_i$ nonlinear and smooth;
$c_j\in R^n$, $d_j\in R$, $j=1,\ldots,t_i-n_i$;
$h_j:R^n\rightarrow R,~j=1,\ldots,n_e$ nonlinear and smooth;
$a_j\in R^n$, $b_j\in R$, $j=1,\ldots,t_e-n_e$. 
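
For instance, with $n=2$, $n_f=1$, $n_i=1$, $t_i=2$ and $n_e=t_e=0$,
the problem
$$\min~x_1^2+x_2^2 \mbox{~~~s.t.~~}1-x_1x_2\leq 0,~~x_1+x_2\leq 4,
~~0\leq x_1\leq 5,~~0\leq x_2\leq 5$$
is of the form $(P)$, with $f_1(x)=x_1^2+x_2^2$, the nonlinear
inequality constraint $g_1(x)=1-x_1x_2$, and the linear inequality
constraint given by $c_1=(1,1)^T$ and $d_1=4$. This simple instance
will be used for illustration in later sections.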

If the initial guess provided by the user is infeasible for nonlinear 
inequality constraints and linear constraints, FSQP first 
generates a point satisfying all these constraints
by iterating on the problem of minimizing
the maximum of these constraints. Then, 
using Mayne-Polak's scheme\Lspace \Lcitemark 1\Rcitemark \Rspace{},
nonlinear equality constraints are turned into 
inequality constraints\footnote{For every $j$ for which $h_j(x_0)>0$
($x_0$ is the initial point), ``$h_j(x)=0$'' is first replaced by 
``$-h_j(x)=0$'' and $-h_j$ is renamed $h_j$.}
$$h_j(x)\leq 0,~~~~j=1,\ldots,n_e$$
and the original objective function $\max_{i\in I^f}\{f_i(x)\}$ 
is replaced by the modified objective function
$$f_m(x,p)=\max\limits_{i\in I^f}\{f_i(x)\}-\sum_{j=1}^{n_e}p_jh_j(x),$$
where $p_j$, $j=1,\ldots,n_e$, are positive penalty parameters
and are iteratively adjusted.
The resulting optimization problem therefore involves only 
linear constraints and nonlinear inequality constraints.
Subsequently, the successive iterates generated by
FSQP all satisfy these constraints. The user has the option of 
either requiring that the exact penalty function 
(the maximum value of the objective functions if there are no nonlinear
equality constraints) decrease at each iteration after feasibility for the
original nonlinear inequality and linear constraints has been reached,
or requiring a decrease within at most four iterations.
He/She must provide subroutines that define the objective functions
and constraint functions and may either provide subroutines
to compute the gradients of these functions or require that FSQP
estimate them by forward finite differences.

Thus, FSQP 3.2 solves the original problem with nonlinear equality constraints
by solving a modified optimization problem with only linear constraints 
and nonlinear inequality constraints.  For the transformed problem,
it implements algorithms that are described 
and analyzed in\Lspace \Lcitemark 2\Rcitemark \Rspace{},
\Lcitemark 3\Rcitemark \Rspace{} and\Lspace \Lcitemark 4\Rcitemark \Rspace{}, with some additional refinements.
These algorithms are based on a Sequential Quadratic Programming~(SQP)
iteration, modified so as to generate feasible iterates. 
The merit function is the objective function. 
An Armijo-type line search is used to generate an initial feasible point 
when required.
After obtaining feasibility, either $(i)$ an Armijo-type line
search may be used, yielding a monotone decrease of the 
objective function at each iteration\Lspace \Lcitemark 2\Rcitemark \Rspace{}; 
or $(ii)$ a nonmonotone line 
search (inspired from\Lspace \Lcitemark 5\Rcitemark \Rspace{} and analyzed
in\Lspace \Lcitemark 3\Rcitemark \Rspace{} and\Lspace \Lcitemark 4\Rcitemark \Rspace{} in this context)
may be selected, forcing a decrease of 
the objective function within at most four iterations.
In the monotone line search scheme, the SQP direction is first 
``tilted'' if nonlinear constraints are present
to yield a feasible direction, then possibly ``bent'' to ensure 
that close to a solution the step of one is accepted, 
a requirement for superlinear convergence.
The nonmonotone line search scheme achieves superlinear convergence 
with no bending of the search direction, thus avoiding function 
evaluations at auxiliary points and subsequent solution of 
an additional quadratic program. After turning nonlinear equality
constraints into inequality constraints, these algorithms are
used directly to solve the modified problems. Certain procedures
(see, e.g.,\Lspace \Lcitemark 6\Rcitemark \Rspace{})
are adopted to obtain proper values of $p_j$'s in order to
ensure that a solution of the modified problem is also a solution
of the original problem, as described below.

For the solution of the quadratic programming subproblems, FSQP 3.2
is set up to call QLD\Lspace \Lcitemark 7\Rcitemark \Rspace{}, which is provided
with the FSQP distribution for the user's convenience.

\section{Description of the Algorithms}
\label{algo}
The algorithms described and analyzed 
in\Lspace \Lcitemark 2\Rcitemark \Rspace{},\Lspace \Lcitemark 3\Rcitemark \Rspace{}
and\Lspace \Lcitemark 4\Rcitemark \Rspace{} are as follows.
Given a feasible iterate $x$, the basic SQP direction
$d^0$ is first computed by solving a standard quadratic program 
using a positive definite estimate $H$ of 
the Hessian of the Lagrangian. 
$d^0$ is a direction of descent for the objective function; it is 
almost feasible in the sense that it is at worst tangent to 
the feasible set if there are nonlinear constraints and it is feasible 
otherwise.

In\Lspace \Lcitemark 2\Rcitemark \Rspace{},
an essentially arbitrary feasible descent direction $d^1=d^{1}(x)$ is 
then computed. Then for a certain 
scalar $\rho =\rho (x)\in [0,1]$, a feasible descent 
direction $d=(1-\rho)d^0+\rho d^1$ is obtained, asymptotically 
close to $d^0.$ Finally a second order 
correction $\tilde d=\tilde{d}(x,d,H)$ is computed, involving
auxiliary function evaluations at $x+d,$ 
and an Armijo-type search is performed along the 
arc $x+td+t^2 \tilde d.$
The purpose of $\tilde d$ is to allow a full step of one to be taken
close to a solution, thus allowing superlinear convergence to 
take place. Conditions are given 
in\Lspace \Lcitemark 2\Rcitemark \Rspace{} on 
$d^{1}(\cdot)$, $\rho(\cdot)$ and $\tilde d(\cdot ,\cdot)$ 
that result in a globally convergent, 
locally superlinear convergent algorithm.

The algorithm in\Lspace \Lcitemark 3\Rcitemark \Rspace{} is somewhat
more sophisticated. An essential difference is that while feasibility
is still required, the requirement of decrease of the max objective
value is replaced by the weaker requirement that the max
objective value at the new point be lower than its maximum over the last
four iterates. The main payoff is that the auxiliary function 
evaluations
can be dispensed with, except possibly at the first few iterations.
First a direction $d^1=d^1(x)$ is computed, which is feasible even at
Karush-Kuhn-Tucker points. Then for a certain 
scalar $\rho ^{\ell} =\rho  ^{\ell}(x)\in [0,1],$ 
a ``local'' feasible 
direction $d ^{\ell}=(1-\rho ^{\ell})d^0+\rho  ^{\ell}d^1$ is obtained, 
and at $x+d^{\ell}$ the objective functions are tested 
and feasibility is
checked. If the requirements pointed out above are satisfied, $x+d^\ell$
is accepted as next iterate. This will always be the case close to a
solution. Whenever $x+d^\ell$ is not accepted, a ``global'' 
feasible {\it descent}
direction $d ^g=(1-\rho ^g)d^0+\rho  ^gd^1$ is obtained with
$\rho ^g =\rho  ^g(x)\in [0,\rho ^{\ell}].$ 
A second order correction $\tilde d=\tilde{d}(x,d^g,H)$ is computed
the same way as in\Lspace \Lcitemark 2\Rcitemark \Rspace{},
and a ``nonmonotone'' search is performed along the 
arc $x+td^g+t^2 \tilde d.$ 
Here the purpose of $\tilde d$ 
is to suitably initialize the sequence for the ``four iterate'' rule.
Conditions are given in\Lspace \Lcitemark 3\Rcitemark \Rspace{} on 
$d^{1}(\cdot)$, $\rho ^{\ell}(\cdot)$, $\rho ^g(\cdot)$, 
and $\tilde d(\cdot ,\cdot)$ that result in a 
globally convergent, locally superlinear convergent algorithm.
In\Lspace \Lcitemark 4\Rcitemark \Rspace{}, the algorithm
of\Lspace \Lcitemark 3\Rcitemark \Rspace{} is refined
for the case of unconstrained minimax problems.
The major difference from the algorithm
of\Lspace \Lcitemark 3\Rcitemark \Rspace{}
is that there is no need for $d^1$.
As in\Lspace \Lcitemark 3\Rcitemark \Rspace{}, $\tilde d$ is required
to initialize superlinear convergence.
 
The FSQP implementation corresponds to a specific choice of
$d^1(\cdot)$, $\rho(\cdot)$, $\tilde{d}(\cdot,\cdot)$, 
$\rho^\ell(\cdot)$, and $\rho^g(\cdot)$,
with some modifications as follows. If the first algorithm
is used, $d^1$ is computed as 
a function not only of $x$ but also of $d^0$~(thus of $H$), as it
appears beneficial to keep $d^1$ relatively close to $d^0$.
In the case of the second algorithm, the construction
of $d^{\ell}$ is modified so that the function
evaluations at different auxiliary points can 
be avoided during early iterations 
when $\rho ^g\neq \rho ^{\ell}$; 
the quadratic program that yields $\tilde{d}$ involves only a 
subset of ``active'' functions, thus decreasing the number
of function evaluations.
The details are given below.
The analysis in\Lspace \Lcitemark 2\Rcitemark \Rspace{},
\Lcitemark 3\Rcitemark \Rspace{} and\Lspace \Lcitemark 4\Rcitemark \Rspace{}
can be easily extended  to these modified algorithms.
Also obvious simplifications are introduced concerning
linear constraints: the iterates are allowed (for inequality constraints)
or are forced (for equality constraints) to stay
on the boundary of these constraints and these constraints
are not checked in the line search. Finally, FSQP automatically switches to
a ``phase 1'' mode if the initial guess provided by 
the user is not in the feasible set.

Below we call FSQP-AL
the algorithm with the Armijo line search, and FSQP-NL the algorithm
with the nonmonotone line search. We make use of the notation
$$f_{I^f}(x)=\max\limits _{i\in I^f} \{f_i(x)\}$$
$$f'(x,d,p)=\max\limits_{i\in I^f}\{f_i(x)+
    \langle \nabla f_i(x),d\rangle\} - f_{I^f}(x)
    -\sum\limits_{j=1}^{n_e}p_j\langle\nabla h_j(x),d\rangle$$
and, for any subset $I\subset I^f$,
$$\tilde {f}'_I(x+d,x,\tilde d,p)=\max\limits_{i\in I}\{f_i(x+d)+
    \langle \nabla f_i(x),\tilde d\rangle\} - f_{I}(x+d)
    -\sum\limits_{j=1}^{n_e}p_j\langle\nabla h_j(x),\tilde d\rangle.$$
At each iteration $k$, the quadratic program $QP(x_k,H_k,p_k)$ that yields
the SQP direction $d^0_k$ is defined
at $x_k$ for $H_k$ symmetric positive definite by
\smallskip
$$\begin{array}{ll}
   \min\limits_{d^0\in R^n}~~ &  {1 \over {2}}\langle {d^0},H_k {d^0}
                 \rangle+f'(x_k,d^0,p_k)  \\
  {\rm ~~s.t.} &  bl \leq x_k+d^0 \leq bu \\
              & g_j(x_k)+\langle\nabla g_j(x_k),d^0 \rangle
             \leq 0, ~~~j=1,\ldots , t_i \\
              & h_j(x_k)+\langle\nabla h_j(x_k),d^0 \rangle
             \leq 0, ~~~j=1,\ldots ,n_e \\
              & \langle a_j,x_k + d^0 \rangle=b_j,
             ~~~j=1,\ldots , t_e-n_e. \end{array}$$
Let $\zeta _{k,j}$'s with $\sum_{j=1}^{n_f} \zeta _{k,j} =1$, 
$\xi_{k,j}$'s, $\lambda _{k,j}$'s, and $\mu_{k,j}$'s denote 
the multipliers, for the various objective functions, simple 
bounds (only $n$ possible active bounds at each iteration), inequality,
and equality constraints respectively, associated 
with this quadratic program. 
Define the set of active objective functions, 
for any $i$ such that $\zeta_{k,i}>0$, by
$$
I^f_k(d_k)=\{j\in I^f: |f_j(x_k)-f_i(x_k)|\leq 
0.2\|d_k\|\cdot\|\nabla f_j(x_k)-\nabla f_i(x_k)\|\}
\cup\{j\in I^f:\zeta_{k,j}>0\}
$$
and the set of active constraints by
$$
I^g_k(d_k)\!=\!\{j\!\in\!\{1,\ldots,t_i\}:|g_j(x_k)|\leq
0.2\|d_k\|\cdot\|\nabla g_j(x_k)\|\}
\cup\{j\in\{1,\ldots,t_i\}:\lambda_{k,j}>0\}.
$$

\vspace{1em}
\noindent{\bf Algorithm FSQP-AL.}

\vspace{1em}
\noindent{\it Parameters.} $\eta =0.1$, $\nu=0.01$, $\alpha=0.1$,
$\beta=0.5$, $\kappa = 2.1$, $\tau _1=\tau _2 = 2.5$, $\underline t=0.1$,
$\epsilon_1=1$, $\epsilon_2=10$, $\delta=5$.

\smallskip
\noindent{\it Data.} $x_0\in R^n$, $\epsilon > 0$, $\epsilon_e>0$ and
$p_{0,j}=\epsilon_2$ for $j=1,\ldots,n_e$. 

\smallskip
\noindent{\it Step 0: Initialization.} Set $k=0$ and $H_0=$ the 
identity matrix. Set $nset=0$. If $x_0$ is infeasible for some constraint
other than a nonlinear equality constraint,
substitute a feasible point, obtained as discussed below.
For $j=1,\ldots,n_e$, replace $h_j(x)$ by $-h_j(x)$ whenever
$h_j(x_0)>0$.

\smallskip
\noindent{\it Step 1: Computation of a search arc.}

\begin{itemize}
\item[\it i.]Compute $d_{k}^{0}$, the solution of the quadratic program
$QP(x_k,H_k,p_k)$.  
If $\|d_k^0\|\leq \epsilon$
and $\sum_{j=1}^{n_e}|h_j(x_k)|\leq \epsilon_e$, stop. 
If $n_i+n_e=0$ and $n_f=1,$ set $d_k=d^0_k$ and $\tilde d_k =0$ and 
go to {\it Step~2}. If $n_i+n_e=0$ and $n_f > 1$, set $d_k=d^0_k$ and 
go to {\it Step~1~iv}.

\item[\it ii.]Compute $d_{k}^{1}$ by solving the strictly convex 
quadratic program
\smallskip
$$  \begin{array}{ll} \min\limits_{d^1\in R^n,\gamma \in R}  
                    & \frac{\eta}{2} 
                  \langle d_{k}^{0}-d^1,d_{k}^{0}-d^1 \rangle +\gamma \\
  {\rm ~~~~s.t.} &  bl \leq x_k+d^1 \leq bu\\
   &  f'(x_k,d^1,p_k) \leq \gamma\\
   &  g_j(x_k)+\langle \nabla g_j(x_k),d^1 \rangle
      \leq\gamma, ~~~~j=1,\ldots,n_i\\
   & \langle c_j,x_k  + d^1 \rangle \leq d_j,
      ~~~~j=1,\ldots,t_i-n_i \\
   &  h_j(x_k)+\langle \nabla h_j(x_k),d^1 \rangle
      \leq\gamma, ~~~~j=1,\ldots,n_e\\
   & \langle a_j,x_k  + d^1 \rangle=b_j,
                      ~~~~j=1,\ldots,t_e-n_e\end{array}$$
\smallskip
\item[\it iii.] Set $d_k=(1-\rho_k)d_k^0+\rho_kd_k^1$~
           with $\rho_k=\|d_k^0\|^{\kappa}/(\|d_k^0\|^{\kappa}+v_k)$,~
        where $v_k = \max(0.5,~\|d_k^1\|^{\tau _1}).$

\item[\it iv.] 
Compute $\tilde d_k$ by solving the strictly convex 
quadratic program
\smallskip
$$\begin{array}{ll} \min\limits_{\tilde d \in R^n} &  \frac{1}{2}
                  \langle (d_k+\tilde d),H_{k}(d_k+\tilde d)\rangle 
                 +f'_{I^f_k(d_k)}(x_k,d_k,\tilde d,p_k) \\
  {\rm ~s.t.} &  bl \leq x_k+d_k+\tilde d \leq bu\\
             & g_j(x_k+d_k) +\langle \nabla g_j(x_k),\tilde d\rangle\leq
                    -\min(\nu\|d_k\|,~\|d_k\|^{\tau _2}),~
                    j\in I^g_k(d_k)\cap\{j:j\leq n_i\}\\
%            & \hspace{20em} j\in I^g_k(d_k)\cap\{j:j\leq n_i\}\\
             & \langle c_{j-n_i},x_k+d_k + \tilde d \rangle \leq d_{j-n_i},
                     ~~~~j\in I^g_k(d_k)\cap\{j:j>n_i\}\\
             & h_j(x_k+d_k) +\langle \nabla h_j(x_k),\tilde d\rangle\leq
                    -\min(\nu\|d_k\|,~\|d_k\|^{\tau _2}),~j=1,\ldots,n_e\\
             & \langle a_j,x_k+d_k + \tilde d \rangle=b_j,
                      ~~~~j=1,\ldots,t_e-n_e\end{array}$$
where $f'_{I^f_k(d_k)}(x_k,d_k,\tilde d,p_k)=f'(x_k,d_k+\tilde d,p_k)$ 
if $n_f = 1,$ and
$f'_{I^f_k(d_k)}(x_k,d_k,\tilde d,p_k)=\tilde{f}'_{I^f_k(d_k)}(x_k+d_k,x_k,\tilde d,p_k)$ 
if $n_f > 1$.
If the quadratic program has no solution or 
if $\|\tilde d_k\|>\|d_{k}\|$, set $\tilde d_k=0$.
\end{itemize}

\noindent{\it Step 2. Arc search.} 
Let $\delta _k=f'(x_k,d_k,p_k)$ if $n_i+n_e\ne 0$ 
and $\delta _k=-\langle d_k^0,H_kd_k^0\rangle$ otherwise.
Compute $t_{k}$, the first number $t$ in 
the sequence $\{1,\beta,\beta^{2},\ldots\}$ satisfying
\begin{eqnarray*}
\textstyle
& f_m(x_{k}+td_{k}+t^{2}\tilde d_{k},p_k)\leq f_m(x_k,p_k)+\alpha t\delta_k & \\
& g_j(x_k+td_k+t^2\tilde d_k)\leq0,~~j=1,\ldots,n_i & \\
&\langle c_{j-n_i},x_k+td_k + t^2\tilde{d}_k \rangle \leq d_{j-n_i},
                ~~~~\forall j>n_i~\&~j\not\in I^g_k(d_k)\\
&h_j(x_k+td_k+t^2\tilde d_k)\leq0,~~j=1,\ldots,n_e. &
\end{eqnarray*}
Specifically, the line search proceeds as follows.
First, the linear constraints that were not used
in computing $\tilde{d}_k$ are checked until all of them are
satisfied, resulting in a stepsize, say, $\bar{t}_k$. Due to 
the convexity of linear constraints, these constraints
will be satisfied for any $t\leq \bar{t}_k$. Then, for $t=\bar{t}_k$,
nonlinear constraints are checked first and,
for both objectives and constraints, those with nonzero 
multipliers in the QP yielding $d^0_k$ are evaluated first.
For $t<\bar{t}_k$, the function that caused the previous value of $t$ to
be rejected is checked first; all functions of the same type
(``objective'' or ``constraint'') as the latter
will then be checked first.

\smallskip
\smallskip
\noindent{\it Step 3. Updates.} 
\begin{itemize}
\item[$\cdot$] If $nset>5n$ and $t_k<\underline t$, set $H_{k+1}=H_0$ 
and $nset=0$.
Otherwise, set $nset=nset+1$ and compute a new approximation $H_{k+1}$ 
to the Hessian of the Lagrangian using the BFGS formula with Powell's 
modification\Lspace \Lcitemark 8\Rcitemark \Rspace{}.
\item[$\cdot$] Set $x_{k+1}=x_{k}+t_{k}d_{k}+t_{k}^{2}\tilde d_{k}$.
\item[$\cdot$] Solve the unconstrained 
quadratic problem in $\bar{\mu}$
$$\begin{array}{cl}
\min\limits_{\bar{\mu}\in R^{t_e}} &
\|\sum\limits_{j=1}^{n_f}\zeta _{k,j}\nabla f_j(x_k)+
\xi_k+\sum\limits_{j=1}^{t_i}\lambda_{k,j}\nabla g_j(x_k)
  +\sum\limits_{j=1}^{t_e}\bar{\mu}_j\nabla h_j(x_k)\|^2,
\end{array}$$
where the $\zeta_{k,j}$'s, $\xi_k$ and the $\lambda_{k,j}$'s
are the multipliers associated with $QP(x_k,H_k,p_k)$ for the objective
functions, variable bounds, and inequality constraints 
respectively.\footnote{This is a refinement (saving much computation
and memory) of the scheme proposed in\Lspace \Lcitemark 1\Rcitemark \Rspace{}.}
Update $p_k$ as follows: for $j=1,\ldots,n_e$,
$$p_{k+1,j}=\left\{\begin{array}{ll}
p_{k,j} & \mbox{if } p_{k,j}+\bar\mu_j \geq \epsilon_1\\
\max\{\epsilon_1-\bar\mu_j,~\delta p_{k,j}\} & \mbox{otherwise.}
\end{array}\right.$$
\item[$\cdot$] Increase $k$ by 1.
\item[$\cdot$] Go back to {\it Step 1}.
\end{itemize}

\hfill{\large \bf $\Box$}
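
\vspace{1em}
\noindent{\bf Example:} As an illustration of the update of $p_k$ in
{\it Step 3}, suppose $\epsilon_1=1$ and $\delta=5$ (the parameter
values above), and that $p_{k,j}=1$ and $\bar\mu_j=-0.5$ for some $j$.
Then $p_{k,j}+\bar\mu_j=0.5<\epsilon_1$, so that
$$p_{k+1,j}=\max\{\epsilon_1-\bar\mu_j,~\delta p_{k,j}\}
=\max\{1.5,~5\}=5.$$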
 
\vspace{1em}
\noindent{\bf Algorithm FSQP-NL.}

\vspace{1em}
\noindent{\it Parameters.} $\eta =3.0$, $\nu=0.01$,
$\alpha=0.1$, $\beta=0.5$, $\theta=0.2$, $\bar{\rho}=0.5$, $\gamma = 2.5$,
$\underline{C}=0.01$, $\underline{d}=5.0$, $\underline t=0.1$,
$\epsilon_1=0.1$, $\epsilon_2=10$, $\delta=5$.

\smallskip
\noindent{\it Data.} $x_0\in R^n$, $\epsilon > 0$, $\epsilon_e>0$ and
$p_{0,j}=\epsilon_2$ for $j=1, \ldots, n_e$. 

\smallskip
\noindent{\it Step 0: Initialization.} Set $k=0$, $H_0=$ the identity 
matrix, and $C_0 = \underline{C}.$ If $x_0$ is infeasible for 
constraints other than nonlinear equality constraints, substitute a
feasible point, obtained as discussed below. 
Set $x_{-3}=x_{-2}=x_{-1}=x_0$ and $nset=0$.
For $j=1,\ldots,n_e$, replace $h_j(x)$ by $-h_j(x)$ whenever
$h_j(x_0)>0$.

\smallskip
\noindent{\it Step 1: Computation of a new iterate.}

\begin{itemize}
\item[\it ~~~i.] Compute $d_{k}^{0}$, the solution of the quadratic program
$QP(x_k,H_k,p_k)$.
%Compute the Kuhn-Tucker vector 
%$$\begin{array}{lll}
%\nabla L(x_k,\zeta_k,\xi_k,\lambda_k,\mu_k,p_k)& = &
%\sum\limits_{j=1}^{n_f} \zeta _{k,j}\nabla f_j(x_k)+
%\sum\limits_{j=1}^{n} \xi _{k,j}+\sum\limits_{j=1}^{t_i} 
%  \lambda _{k,j}\nabla g_j(x_k) \\
%& & ~~~+\sum\limits_{j=1}^{n_e}(\mu_{k,j}-p_{k,j})\nabla h_j(x_k)
%    +\sum\limits_{j=n_e+1}^{t_e}\mu_{k,j}\nabla h_j(x_k).\end{array}$$

%If $\|\nabla L(x_k,\zeta_k,\xi_k,\lambda_k,\mu_k,p_k)\|\leq \epsilon$
If $\|d_k^0\|\leq \epsilon$
and $\sum_{j=1}^{n_e}|h_j(x_k)|\leq\epsilon_e$, stop. 
If $n_i+n_e=0$ and $n_f=1,$ set $d_k=d^0_k$ and $\tilde d_k =0$ and 
go to {\it Step~1~viii}. If $n_i+n_e=0$ and $n_f >1,$ 
set $\rho _k^{\ell}=\rho _k^g=0$ and go to {\it Step~1~v}.

\item[\it ~~ii.] Compute $d_{k}^{1}$ by solving the strictly convex 
quadratic program
\smallskip
$$  \begin{array}{ll} \min\limits_{d^1\in R^n,\gamma \in R}  
                                 & \frac{\eta}{2}\|d^1\|^2+\gamma \\
  {\rm ~~~~s.t.} &  bl \leq x_k+d^1 \leq bu\\
      &  g_j(x_k)+\langle \nabla g_j(x_k),d^1 \rangle
       \leq\gamma, ~~~~j=1,\ldots,n_i\\
      & \langle c_j,x_k  + d^1 \rangle \leq d_j,
            ~~~~j=1,\ldots,t_i-n_i \\
      &  h_j(x_k)+\langle \nabla h_j(x_k),d^1 \rangle
       \leq\gamma, ~~~~j=1,\ldots,n_e\\
      & \langle a_j,x_k  + d^1 \rangle=b_j,
                      ~~~~j=1,\ldots,t_e-n_e\end{array}$$

\item[\it ~iii.] Set $v_{k}=\min \{C_k\|d^0_k\|^2,\|d^0_k\|\}$. 
Define values
$\rho^g_{k,j}$ for $j=1,\ldots,n_i$ by $\rho^g_{k,j}$ equal to zero if
\smallskip
$$g_j(x_k)+\langle \nabla g_j(x_k),d^0_k\rangle \leq -v_k$$
\smallskip
or equal to the maximum $\rho$ in $[0,1]$ such that
\smallskip
$$g_j(x_k)+\langle \nabla g_j(x_k),(1-\rho)d^0_k+
            \rho d^1_k\rangle \geq -v_k$$
\smallskip
otherwise. Similarly, define values $\rho^h_{k,j}$ for $j=1,\ldots,n_e$.
Let $$\rho ^{\ell}_k=\max\left\{\max _{j=1,\ldots,n_i}\{\rho^g_{k,j}\},~
\max _{j=1,\ldots,n_e}\{\rho^h_{k,j}\}\right\}.$$

\item[\it ~~iv.] Define $\rho _k^g$ as the largest number $\rho$
in $[0,\rho ^{\ell}_k]$ such that
\smallskip
$$f'(x_k,(1-\rho)d^0_k+\rho d^1_k,p_k)\leq \theta f'(x_k,d^0_k,p_k).$$
If ($k\geq 1$ \& $t_{k-1}<1$) or ($\rho _k^{\ell} > \bar{\rho}$), set 
$\rho _k^\ell = \min \{\rho _k^\ell, \rho _k^g\}.$

\item[\it ~~~v.] Construct a ``local'' direction
\smallskip
$$d_k^{\ell}=(1-\rho _k^{\ell})d^0_k+\rho _k^{\ell} d^1_k.$$
Set $M=3$, $\delta_k=f'(x_k,d_k^0,p_k)$ if $n_i+n_e\ne 0$, 
and $M=2$, $\delta_k=-\langle d_k^0,H_kd_k^0\rangle$ otherwise.
If
$$f_m(x_k+d^{\ell}_k,p_k)\leq 
\max\limits_{\ell=0,\ldots,M}\{f_m(x_{k-\ell},p_k)\} +
       \alpha \delta_k$$
$$g_j(x_k+d^{\ell}_k)\leq 0,~~j=1,\ldots,n_i$$
and
$$h_j(x_k+d^{\ell}_k)\leq 0,~~j=1,\ldots,n_e,$$
\smallskip
set $t_k=1$, $x_{k+1}=x_k+d_k^{\ell}$ and go to {\it Step 2}.

\item[\it ~~vi.] Construct a ``global'' direction
\smallskip
$$d_k^{g}=(1-\rho _k^{g})d^0_k+\rho _k^{g}d^1_k.$$

\item[\it ~vii.] 
Compute $\tilde d_{k}$ by solving the strictly convex 
quadratic program
\smallskip
$$  \begin{array}{cl} \min\limits_{\tilde d \in R^n} &  \frac{1}{2}
                  \langle (d_k^g+\tilde d),H_{k}(d^g_k+\tilde d)\rangle 
                 +f'_{I^f_k(d_k^g)}(x_k,d_k^g,\tilde d,p_k) \\
  \mbox{s.t.} &  bl \leq x_k+d_k^g+\tilde d \leq bu\\
       & g_j(x_k+d_k^g) +\langle \nabla g_j(x_k),\tilde d\rangle\leq
         -\min(\nu\|d_k^g\|,~\|d_k^g\|^{\gamma}),
              ~~~j\in I^g_k(d^g_k)\cap\{j:j\leq n_i\}\\
       & \langle c_{j-n_i},x_k+d_k^g + \tilde d \rangle \leq d_{j-n_i},
              ~~~~j\in I^g_k(d^g_k)\cap\{j:j>n_i\}\\
       & h_j(x_k+d_k^g) +\langle \nabla h_j(x_k),\tilde d\rangle\leq
         -\min(\nu\|d_k^g\|,~\|d_k^g\|^{\gamma}),
              ~~~j=1,\ldots,n_e\\
       & \langle a_j,x_k+d_k^g + \tilde d \rangle=b_j,
              ~~~~j=1,\ldots,t_e-n_e\end{array}$$
where $f'_{I^f_k(d_k^g)}(x_k,d_k^g,\tilde d,p_k)=f'(x_k,d_k^g+\tilde d,p_k)$ 
if $n_f=1,$
and $f'_{I^f_k(d_k^g)}(x_k,d_k^g,\tilde d,p_k)=
\tilde{f}'_{I^f_k(d_k^g)}(x_k+d_k^g,x_k,\tilde d,p_k)$ 
if $n_f>1$. If the quadratic program has no solution or 
if $\|\tilde d_k\|>\|d_k^g\|$, set $\tilde d_k=0$.

\item[\it viii.] Set $M=3$, $\delta_k=f'(x_k,d^g_k,p_k)$ if $n_i+n_e\ne 0$,
and $M=2$, $\delta_k=-\langle d^g_k,H_kd^g_k\rangle$ otherwise.
Compute $t_k$, the first number $t$ in 
the sequence $\{1,\beta,\beta^{2},\ldots\}$ satisfying
\smallskip
\begin{eqnarray*}
\textstyle
& f_m(x_{k}+td^g_k+t^{2}\tilde d_k,p_k)\leq 
 \max\limits_{\ell=0,\ldots,M}\{f_m(x_{k-\ell},p_k)\}+
\alpha t \delta_k &\\
& g_{j}(x_{k}+td_k^g+t^{2}\tilde d_{k})\leq0,~~j=1,\ldots,n_i & \\
&\langle c_{j-n_i},x_k+td_k^g +t^2 \tilde{d}_k \rangle \leq d_{j-n_i},
              ~~~~j>n_i~\&~j\not\in I^g_k(d^g_k) &\\
& h_{j}(x_{k}+td_k^g+t^{2}\tilde d_{k})\leq0,~~j=1,\ldots,n_e &
\end{eqnarray*}
and set $x_{k+1}=x_k+t_kd_k^g+t_k^2\tilde d_k.$ \\
Specifically, the line search proceeds as follows.
First, the linear constraints that were not used
in computing $\tilde{d}_k$ are checked until all of them are
satisfied, resulting in a stepsize, say, $\bar{t}_k$. Due to 
the convexity of linear constraints, these constraints
will be satisfied for any $t\leq \bar{t}_k$. Then, for $t=\bar{t}_k$,
nonlinear constraints are checked first and,
for both objectives and constraints, those with nonzero 
multipliers in the QP yielding $d^0_k$ are evaluated first.
For $t<\bar{t}_k$, the function that caused the previous value of $t$ to
be rejected is checked first; all functions of the same type
(``objective'' or ``constraint'') as the latter
will then be checked first.
\end{itemize}

\noindent{\it Step 2. Updates.} 
\begin{itemize}
\item[$\cdot$] If $nset>5n$ and $t_k<\underline t$, set $H_{k+1}=H_0$
and $nset=0$. Otherwise, set $nset=nset+1$ and 
compute a new approximation $H_{k+1}$ 
to the Hessian of the Lagrangian using the BFGS formula with Powell's 
modification\Lspace \Lcitemark 8\Rcitemark \Rspace{}. 
\item[$\cdot$] If $\|d^0_k\|>\underline{d}$, 
set $C_{k+1}=\max \{0.5C_k,\underline{C}\}.$
Otherwise, if $g_j(x_k+d_k^\ell) \leq 0,~~j=1,\ldots,n_i$, 
set $C_{k+1}=C_k$. Otherwise, set $C_{k+1}=10C_k$.
\item[$\cdot$] Solve the unconstrained 
quadratic problem in $\bar{\mu}$
$$\begin{array}{cl}
\min\limits_{\bar{\mu}\in R^{t_e}} &
\|\sum\limits_{j=1}^{n_f}\zeta _{k,j}\nabla f_j(x_k)+
\xi_k+\sum\limits_{j=1}^{t_i}\lambda_{k,j}\nabla g_j(x_k)
  +\sum\limits_{j=1}^{t_e}\bar{\mu}_j\nabla h_j(x_k)\|^2,
\end{array}$$
where the $\zeta_{k,j}$'s, $\xi_k$ and the $\lambda_{k,j}$'s
are the multipliers associated with $QP(x_k,H_k,p_k)$ for the objective
functions, variable bounds, and inequality constraints 
respectively.\footnote{See footnote to corresponding step in description
of FSQP-AL.} 

Update $p_k$ as follows: for $j=1,\ldots,n_e$,
$$p_{k+1,j}=\left\{\begin{array}{ll}
p_{k,j} & \mbox{if } p_{k,j}+\bar\mu_j \geq \epsilon_1\\
\max\{\epsilon_1-\bar\mu_j,~\delta p_{k,j}\} & \mbox{otherwise.}
\end{array}\right.$$
\item[$\cdot$] Increase $k$ by 1.
\item[$\cdot$] Go back to {\it Step 1}.
\end{itemize}

\hfill{\large \bf $\Box$}

\noindent{\bf Remark:} The Hessian matrix is reset
in both algorithms whenever the stepsize becomes too small after
the matrix has been updated through more than $5n$ iterations.
This is helpful in some situations where the Hessian matrix
becomes singular.
 
\vspace{1em}
If the initial guess $x_0$ provided by the user is not feasible
for some inequality constraint or some linear equality constraint,
FSQP first solves a strictly convex quadratic program
\smallskip
$$\begin{array}{cl}
           \min\limits_{v\in R^n} &  \langle v,v\rangle \\
            \mbox{s.t.} &   bl \leq x_0+v \leq bu\\
                  &   \langle c_j,x_0 + v \rangle \leq d_j,
                        ~~~j=1,\ldots,t_i-n_i\\
                  &   \langle a_j,x_0 + v \rangle=b_j,
                      ~~~j=1,\ldots,t_e-n_e.  \end{array}$$

\vspace{.5em}
\noindent{}Then, starting from the point $x=x_0+v$, it will iterate, 
using algorithm FSQP-AL, on the problem
\smallskip
$$\begin{array}{cl}
    \min\limits_{x\in R^n} &  \max\limits_{j=1,\ldots,n_i}\{g_j(x)\} \\
    \mbox{s.t.}  & ~~bl \leq x \leq bu\\
         & ~~\langle c_j,x\rangle \leq d_j,~~~j=1,\ldots,t_i-n_i\\
         & ~~\langle a_j,x \rangle =b_j,~~~j=1,\ldots , t_e-n_e 
            \end{array}$$
until $\max\limits_{j=1,\ldots,n_i}\{g_j(x)\} \leq 0$ is achieved.
The corresponding iterate $x$ will then be feasible 
for all constraints other than nonlinear equality constraints of 
the original problem. 

\section{Specification of Subroutine FSQPD 3.2}
\label{specs}
Only a double precision version of FSQP, FSQPD, is currently available.
The specification of FSQPD is as follows:
\vspace{1em}
\begin{quote}
\begin{verbatim}
  subroutine FSQPD(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,miter,
 *                 inform,bigbnd,eps,epseqn,udelta,bl,bu,x,f,g,
 *                 iw,iwsize,w,nwsize,obj,constr,gradob,gradcn)
  integer nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,miter,inform,
 *        iwsize,nwsize
  integer iw(iwsize)
  double  precision bigbnd,eps,epseqn,udelta
  double  precision bl(nparam),bu(nparam),x(nparam),
 *        f(nf),g(nineq+neq),w(nwsize)
  external obj,constr,gradob,gradcn
\end{verbatim}
\end{quote}
\vspace{1em}
{\bf Important:} all real variables and arrays must be declared as 
double precision in the routine that calls FSQPD. The following are 
specifications of parameters and workspace.

\vspace{1em}
\begin{description}
\item[\tt nparam] {\bf (Input)}~Number of free variables,
                  i.e., the dimension of {\tt x}.
\item[\tt nf]   {\bf (Input)}~Number of objective 
                   functions ($n_f$ in the algorithm description).
\item[\tt nineqn]    {\bf (Input)}~Number (possibly zero) of 
                       nonlinear inequality constraints ($n_i$ in the
                      algorithm description). 
\item[\tt nineq]  {\bf (Input)}~Total number (possibly equal 
                       to {\tt nineqn}) of 
                       inequality constraints ($t_i$ in the algorithm 
                  description).
\item[\tt neqn]    {\bf (Input)}~Number (possibly zero) of 
                       nonlinear equality constraints ($n_e$ in the
                      algorithm description). 
\item[\tt neq]    {\bf (Input)}~Total number (possibly equal to {\tt neqn}) of 
                  equality constraints ($t_e$ in the algorithm 
                  description).
\item[\tt mode]   {\bf (Input)}~${\tt mode} = 1BA$, where the digits ${\tt A}$
                  and ${\tt B}$ have the following meanings (for example,
                  ${\tt mode}=110$ selects Algorithm FSQP-NL for
                  problem $(P)$):
                  \begin{quote}
                  \begin{quote}
                  \begin{quote}
                  \begin{itemize}
                  \item[${\tt A} = 0$~:~~] $(P)$ is to be solved.
                  \item[${\tt A} = 1$~:~~] $(PL_\infty)$ is to be solved. 
                  $(PL_\infty)$ is defined as follows
$$
  (PL_\infty)~~~~~ \min ~ \max\limits_{i\in I^f} |f_i(x)| 
                             \mbox{~~~s.t.~~}x\in X
$$
                  where $X$ is the same as for $(P).$ It is handled
                  in this code by splitting $|f_i(x)|$ as $f_i(x)$
                  and $-f_i(x)$ for each $i.$ The user is required
                  to provide only $f_i(x)$ for $i\in I^f$.
                  \item[${\tt B} = 0$~:~~]Algorithm FSQP-AL is 
                                           selected, resulting in a 
                                           decrease of the (modified) objective
                                           function at each iteration.
                  \item[${\tt B} = 1$~:~~]Algorithm FSQP-NL is 
                                           selected, resulting in a 
                                           decrease of the (modified) objective
                                           function within at 
                                           most four iterations (or three
                                           iterations, see Algorithm FSQP-NL).
                  \end{itemize}
                  \end{quote}
                  \end{quote}
                  \end{quote}
\item[\tt iprint] {\bf (Input)}~Parameter indicating the 
                  desired output (see \S~\ref{output} for details):
                  \begin{quote}
                  \begin{quote}
                  \begin{quote}
                  \begin{itemize}
                  \item[~~${\tt iprint} =0$~:~~] No information except 
                                for user-input errors is displayed. This value
                                is imposed during phase 1.
                  \item[~~${\tt iprint} =1$~:~~] 
                                Objective and constraint values 
                                at the initial feasible point are displayed.
                                At the end of execution, status ({\tt inform}),
                                iterate, objective values, constraint values,
                                number of evaluations of objectives and 
                                nonlinear constraints, norm of the Kuhn-Tucker 
                                vector, and sum of feasibility violation
                                are displayed.
                  \item[~~${\tt iprint} =2$~:~~] At the end of each 
                                iteration, the same information as with
                                ${\tt iprint}=1$ is displayed.
                  \item[~~${\tt iprint} =3$~:~~] At each iteration, 
                                the same information as with ${\tt iprint}=2$, 
                                including detailed information on the search 
                                direction computation, on the line search,
                                and on the update, is displayed.
                  \item[~~${\tt iprint} =10*N+M,~N~{\rm any~positive~integer},
                                M=2~{\rm or}~3$~:~~] 
                                Information corresponding to ${\tt iprint}=M$
                                is displayed at every $(10\times N)$th
                                iteration and at the last iteration; e.g.,
                                ${\tt iprint}=32$ displays the ${\tt iprint}=2$
                                information at every 30th iteration.
                  \end{itemize}
                  \end{quote}
                  \end{quote}
                  \end{quote}
\item[\tt miter] {\bf (Input)}~Maximum number of iterations
allowed by the user before termination of execution.
\item[\tt inform] {\bf (Output)}~Parameter indicating the status of
                   the execution of FSQPD:
                   \begin{quote}
                   \begin{quote}
                   \begin{quote}
                   \begin{itemize}
                  \item[~~${\tt inform} = 0$~:~] Normal termination of 
                               execution in the sense that 
                         $\|d^0\|\leq {\tt eps}$
                         and (if ${\tt neqn} \ne 0$) 
                         $\sum_{j=1}^{n_e}|h_j(x)|\leq {\tt epseqn}$.
                  \item[~~${\tt inform} = 1$~:~] The user-provided 
                                                 initial guess
                                                 is infeasible for
                                                 linear constraints and 
                                                 FSQPD is unable to 
                                                 generate a point
                                                 satisfying all these 
                                                 constraints.
                  \item[~~${\tt inform} = 2$~:~] The user-provided 
                                                 initial guess
                                                 is infeasible for nonlinear 
                                                 inequality constraints and
                                                 linear constraints; and 
                                                 FSQPD is unable to 
                                                 generate a point
                                                 satisfying all these
                                                 constraints.
                  \item[~~${\tt inform} = 3$~:~] The maximum 
                                               number~{\tt miter} 
                                               of iterations has been 
                                               reached before a 
                                               solution is obtained.
                  \item[~~${\tt inform} = 4$~:~] The line search fails 
                                               to find a new 
                                               iterate (trial step size 
                                                being 
                                            smaller than the machine 
                                            precision 
                                        {\tt epsmac} computed by FSQPD).
                  \item[~~${\tt inform} = 5$~:~] Failure of the QP solver
                                                 in attempting 
                                                to construct $d^0$. A more
                                                robust QP solver may succeed.
                  \item[~~${\tt inform} = 6$~:~] Failure of the QP solver
                                                 in attempting 
                                               to construct $d^1$. A more
                                                robust QP solver may succeed.
                  \item[~~${\tt inform} = 7$~:~] Input data are not 
                                                consistent~(with 
                                                 printout
                                                indicating the error).
                   \end{itemize}
                   \end{quote}
                   \end{quote}
                   \end{quote}
\item[\tt bigbnd]  {\bf (Input)}~(see also {\tt bl} 
                  and {\tt bu} below)~It plays the role of 
                  an infinite bound.
\item[\tt eps]    {\bf (Input)}~Final norm requirement for 
%                  the Kuhn-Tucker vector ($\epsilon$ in the 
                  the Newton direction $d_k^0$ ($\epsilon$ in the 
                  algorithm description). It must be bigger 
                  than the machine
                  precision {\tt epsmac} (computed by FSQPD).
                  (If the user does not have a good feel for
                  what value should be chosen, a very small
                  number can be provided and $\mbox{\tt iprint}=2$
                  selected, so that the user can keep track of 
                  the progress of the optimization and terminate FSQPD
                  at an appropriate time.)
\item[\tt epseqn] {\bf (Input)}~Maximum violation of nonlinear equality
                  constraints allowed by the user at an optimal point
                  ($\epsilon_e$ in the algorithm description). 
                  It is in effect only if $n_e\ne 0$ and
                  must be bigger than the machine
                  precision {\tt epsmac} (computed by FSQPD). 
\item[\tt udelta]  {\bf (Input)}~The perturbation 
                  size the user suggests to use in 
                  approximating gradients by finite differences.
                  The perturbation size actually used is defined by 
$\mbox{sign}(x^i)\times\max \{{\tt udelta},~
                  {\tt rteps}\times \max (1,\,|x^i|)\}$~
                  for each component $x^i$ of $x$ ({\tt rteps} 
                  is the square root of {\tt epsmac}). {\tt udelta}
                  should be set to zero if the user has no idea
                  how to choose it.
\item[\tt bl]     {\bf (Input)}~Array of 
                  dimension {\tt nparam} containing
                  lower bounds for the components of {\tt x}. 
                  To specify a non-existent lower 
                  bound (i.e., ${\tt bl}(j)=-\infty$ for 
                  some $j$), the value used must 
                  satisfy ${\tt bl}(j)\leq -{\tt bigbnd}$.
\item[\tt bu]     {\bf (Input)}~Array of 
                  dimension {\tt nparam} containing
                  upper bounds for the components of {\tt x}. 
                  To specify a non-existent upper 
                  bound (i.e., ${\tt bu}(j)=\infty$ for 
                  some $j$), the value used must 
                  satisfy ${\tt bu}(j)\geq {\tt bigbnd}$.
\item[\tt x]      {\bf (Input)}~Initial guess.\\
                  {\bf (Output)}~Iterate at the end of execution. 
\item[\tt f]      Array of dimension $\max\{1, {\tt nf}\}$.\\
                  {\bf (Output)}~Value of functions 
                  $f_i,i=1,\ldots,n_f$, at {\tt x} at the end of 
                  execution.
\item[\tt g]      Array of dimension $\max\{1,{\tt nineq}+{\tt neq}\}$.\\
                  {\bf (Output)}~Values of constraints at {\tt x} at 
                   the end of execution.
\item[\tt iw]  Workspace vector of dimension {\tt iwsize}.
\item[\tt iwsize]  {\bf (Input)}~Workspace length 
                for {\tt iw}. It must be at least as big as
       $6\times {\tt nparam}+8\times ({\tt nineq}+{\tt neq})
       +7\times{\tt nf}+30$. This estimate is usually very conservative
       and the smallest suitable value will be
       displayed if the user-supplied value is too small.
\item[\tt w]      {\bf (Input)}~Workspace of dimension {\tt nwsize}. \\
                  {\bf (Output)}~Estimate of Lagrange multipliers at 
                  the end of execution of phase 2 in the 
                   first ${\tt nparam}+{\tt nineq+neq+nff}$ entries,
            where ${\tt nff}=0$ if (in {\tt mode}) ${\tt A}=0$ and
          ${\tt nf}=1$, and ${\tt nff}={\tt nf}$ otherwise.
       They are ordered as $\xi$'s (variables), $\lambda$'s (inequality
      constraints), $\mu$'s (equality constraints), and $\zeta$ 
      (objective functions).
        $\lambda _j \geq 0~~\forall j=1,\ldots,t_i$
        and $\mu _j \ge 0~~\forall j=1,\ldots,t_e.$ $\xi _i > 0$
        indicates that $x_i$ reaches its upper bound and $\xi _i <0$
        indicates that $x_i$ reaches its lower bound. When
        (in {\tt mode}) ${\tt A}=0$ and ${\tt nf}>1$, $\zeta _i \geq0.$
        When (in {\tt mode}) ${\tt A}=1$, $\zeta _i >0$ refers to
        $+f_i(x)$ and $\zeta _i<0$ to $-f_i(x)$.
\item[\tt nwsize] {\bf (Input)}~Workspace length for {\tt w}. 
                   It must be at least as big as
                    $4\times {\tt nparam}^{2}+
                    5\times ({\tt nineq}+{\tt neq})\times{\tt nparam}+
                    3\times{\tt nf}\times{\tt nparam}+
          26\times ({\tt nparam}+{\tt nf})+45\times ({\tt nineq}+{\tt neq})+100$.
          This estimate is usually very conservative and the
         smallest suitable value will be 
         displayed if the user-supplied value is too small.
\item[\tt obj]   {\bf (Input)}~Name of the user-defined subroutine
                 that computes the value of the objective 
        functions $f_i(x),~~\forall i=1,\ldots,n_f.$ This name must
        be declared as {\bf external} in the calling routine 
        and passed as an argument to FSQPD.
        The detailed specification is given in \S~\ref{subobj} below.
\item[\tt constr]   {\bf (Input)}~Name of the user-defined subroutine
        that computes the value of the constraints. This name must
        be declared as {\bf external} in the calling routine 
        and passed as an argument to FSQPD.
        The detailed specification is given in \S~\ref{subconstr} below. 
\item[\tt gradob]   {\bf (Input)}~Name of the subroutine that
        computes the gradients of the objective 
        functions $f_i(x),~~\forall i=1,\ldots,n_f.$ This name must
        be declared as {\bf external} in the calling routine 
        and passed as an argument to FSQPD.
        The user must pass the subroutine name 
        {\tt grobfd}~(and declare it as {\bf external}), 
        if he/she wishes that FSQPD evaluate
        these gradients automatically, by forward finite differences.
        The detailed specification is given in \S~\ref{subgradob} below.
\item[\tt gradcn]   {\bf (Input)}~Name of the subroutine that
          computes the gradients of the constraints. 
          This name must be declared as {\bf external} in the calling 
          routine and passed as an argument to FSQPD.
          The user must pass the subroutine name {\tt grcnfd}~(and
          declare it as {\bf external}), if he/she wishes that 
          FSQPD evaluate these gradients automatically, 
          by forward finite differences.
          The detailed specification is given in \S~\ref{subgradcn} below.
\end{description}
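
\vspace{1em}
As an illustration, a minimal sketch of a program calling FSQPD on the
example problem of \S~\ref{intro} is given below. The problem data and
workspace sizes are for illustration only and are not part of the FSQP
distribution; {\tt obj} and {\tt constr} are assumed to be defined as
in \S~\ref{subobj} and \S~\ref{subconstr}, and the gradients are
estimated by forward finite differences by passing {\tt grobfd}
and {\tt grcnfd}.
\begin{quote}
\begin{verbatim}
      program sample
      integer iwsize,nwsize
      parameter (iwsize=100, nwsize=400)
      integer nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,miter,
     *        inform
      integer iw(iwsize)
      double precision bigbnd,eps,epseqn,udelta
      double precision bl(2),bu(2),x(2),f(1),g(2),w(nwsize)
      external obj,constr,grobfd,grcnfd
c
c     minimize x1**2+x2**2 s.t. 1-x1*x2<=0, x1+x2<=4, 0<=x1,x2<=5
c
      nparam = 2
      nf     = 1
      nineqn = 1
      nineq  = 2
      neqn   = 0
      neq    = 0
      mode   = 100
      iprint = 1
      miter  = 500
      bigbnd = 1.d+10
      eps    = 1.d-8
      epseqn = 0.d0
      udelta = 0.d0
      bl(1)  = 0.d0
      bl(2)  = 0.d0
      bu(1)  = 5.d0
      bu(2)  = 5.d0
c     initial guess
      x(1)   = 2.d0
      x(2)   = 2.d0
      call FSQPD(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,miter,
     *           inform,bigbnd,eps,epseqn,udelta,bl,bu,x,f,g,
     *           iw,iwsize,w,nwsize,obj,constr,grobfd,grcnfd)
      end
\end{verbatim}
\end{quote}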

\section{User-Accessible Stopping Criterion}
\label{stopcri}
As is clear from the two algorithms, the optimization process
normally terminates if both 
$\|d_k^0\|\leq\epsilon$
and $\sum_{j=1}^{n_e}|h_j(x_k)|\leq\epsilon_e$ are satisfied. 
A very small value for either of these two parameters may result in
exceedingly long execution time, depending on the complexity
of the underlying problem and the nonlinearity of the various functions. 
FSQP allows users to specify their own stopping criterion in any one of 
the four user-supplied subroutines mentioned above via the following
common block
\begin{verbatim}
            integer nstop
            common /fsqpst/nstop
\end{verbatim}
if (s)he wishes to.
${\tt nstop}=0$ should be returned to FSQP when the stopping criterion 
is satisfied.  FSQP will check the value of {\tt nstop} at appropriate places 
during the optimization process and will terminate when
either the user's criterion or the default criterion is satisfied.
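
For example, a user who wishes to stop as soon as some quantity
computed in {\tt obj} falls below a given threshold (the value
$10^{-3}$ below is arbitrary) could insert in {\tt obj} the lines
\begin{verbatim}
            integer nstop
            common /fsqpst/nstop
c           request termination when fj is small enough
            if (fj .lt. 1.d-3) nstop = 0
\end{verbatim}
This is only a sketch; any criterion based on quantities available
in the user-supplied subroutines may be used in the same way.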

\section{Description of the Output}
\label{output}
No output will be displayed before a feasible starting 
point is obtained. The following information is displayed 
at the end of execution when 
${\tt iprint} = 1$ or at each iteration when ${\tt iprint}=2$:
\begin{description}
\item[\tt iteration]  Total number of iterations (${\tt iprint}=1$) or
                   iteration number (${\tt iprint}=2$).
\item[\tt inform]  See \S~\ref{specs}. It is displayed only
                   at the end of execution.
\item[\tt x]       Iterate.
\item[\tt objectives]  Value of objective functions $f_i(x),~~\forall 
                    i=1,\ldots,n_f$ at {\tt x}. 
\item[\tt objmax]  (displayed only if $\mbox{\tt nf} > 1$)~The 
                   maximum value of the set of objective 
        functions (i.e., $\max f_i(x) \mbox{ or } \max |f_i(x)|,~~
        \forall i=1,\ldots,n_f$) at {\tt x}.
\item[\tt objective max4]  (displayed only if $\mbox{\tt B} = 1$ 
                   in {\tt mode})~Largest value of
                   the maximum of the objective functions over the 
                   last four (or three, see FSQP-NL) 
                   iterations (including the current one).
\item[\tt constraints] Values of the constraints at {\tt x}.
\item[\tt ncallf]  Number of evaluations (so far) of 
                   individual~(scalar) objective function $f_i(x)$ 
                   for $1\leq i \leq n_f.$
\item[\tt ncallg]  Number of evaluations (so far) of 
                   individual~(scalar) nonlinear constraints.
\item[\tt d0norm]  Norm of the Newton direction $d_k^0$.
\item[\tt ktnorm]  Norm of the Kuhn-Tucker vector at the current 
                   iteration. The Kuhn-Tucker vector is given by
$$\begin{array}{lll}
\nabla L(x_k,\zeta_k,\xi_k,\lambda_k,\mu_k,p_k)& = &
\sum\limits_{j=1}^{n_f} \zeta _{k,j}\nabla f_j(x_k)+
\xi_k+\sum\limits_{j=1}^{t_i}\lambda _{k,j}\nabla g_j(x_k) \\
& &~+\sum\limits_{j=1}^{n_e}(\mu_{k,j}-p_{k,j})\nabla h_j(x_k)
    +\sum\limits_{j=n_e+1}^{t_e}\mu_{k,j}\nabla h_j(x_k).\end{array}$$
\item[\tt SCV]     Sum of the violations of the nonlinear equality constraints
at a solution.
\end{description}

{\noindent}For ${\tt iprint}=3$, in addition to the same 
         information as for ${\tt iprint}=2$, 
         the following is printed at every iteration.

\vspace{1em}
Details in the computation of a search direction:
\begin{description}
\item[\tt d0]      Quasi-Newton direction $d^0_k$.
\item[\tt d1]      First order direction $d^1_k$.
\item[\tt d1norm]  Norm of $d^1_k$.
\item[\tt d]       (${\tt B}=0$ in {\tt mode})~Feasible descent 
                   direction $d_k=(1-\rho _k)d^0_k+\rho _k d^1_k$.
\item[\tt dnorm]   (${\tt B}=0$ in {\tt mode})~Norm of $d_k$.
\item[\tt rho]     (${\tt B}=0$ in {\tt mode})~Coefficient $\rho_k$ in 
                   constructing $d_k$.
\item[\tt dl]      (${\tt B}=1$ in {\tt mode})~Local direction 
                    $d^\ell_k=(1-\rho^\ell_k)d_k^0+\rho^\ell_kd^1_k$.
\item[\tt dlnorm]  (${\tt B}=1$ in {\tt mode})~Norm of $d_k^\ell$.
\item[\tt rhol]    (${\tt B}=1$ in {\tt mode})~Coefficient $\rho_k^{\ell}$ in
                   constructing $d_k^{\ell}$.
\item[\tt dg]      (${\tt B}=1$ in {\tt mode})~Global search direction 
                    $d^g=(1-\rho^g_k)d_k^0+\rho^g_kd^1_k$.
\item[\tt dgnorm]  (${\tt B}=1$ in {\tt mode})~Norm of $d_k^g$.
\item[\tt rhog]    (${\tt B}=1$ in {\tt mode})~Coefficient $\rho_k^g$ in 
                   constructing $d_k^g$.
\item[\tt dtilde]  Second order correction $\tilde d_k$.
\item[\tt dtnorm]  Norm of $\tilde d_k$.
\end{description}

Details in the line search:
\begin{description}
\item[\tt trial step]  Trial steplength $t$ in the search direction.
\item[\tt trial point] Trial iterate along the search arc 
                       with {\tt trial step}.
\item[\tt trial objectives] This gives the indices $i$ and 
                            the corresponding
                            values of the functions 
                   $f_i(x)-\sum_{j=1}^{n_e}p_jh_j(x)$
                   for $1\leq i \leq n_f$ up to the one that fails 
                   the line search test at the {\tt trial point}. The 
                   indices $i$
                   are not necessarily in the natural order (see the
                   remark at the end of {\it Step 2} in FSQP-AL and at
                   the end of {\it Step~1~viii}\ in FSQP-NL).
\item[\tt trial constraints] This gives the indices $j$ and the 
                   corresponding values of nonlinear constraints 
                   for $1\leq j \leq n_i+n_e$ up to the 
                   one which is not feasible at the {\tt trial point}.
                   The indices $j$
                   are not necessarily in the natural order (see the
                   remark at the end of {\it Step 2} in FSQP-AL and at
                   the end of {\it Step~1~viii}\ in FSQP-NL).
\end{description}

Details in the updates:
\begin{description}
\item[\tt delta]  Perturbation size for each variable 
                  in finite difference gradients computation.
\item[\tt gradf]  Gradients of 
                  functions $f_i(x),~\forall i=1,\ldots,n_f,$ 
                  at the new iterate.
\item[\tt gradg]  Gradients of constraints at the new iterate.
\item[\tt p]      Penalty parameters for nonlinear equality constraints at
                  the new iterate.
\item[\tt multipliers] Multiplier estimates ordered as $\xi$'s, 
        $\lambda$'s, $\mu$'s, and $\zeta$'s (from quadratic program
        computing $d^0_k$). $\lambda _j \geq 0~~\forall j=1,\ldots,t_i$
        and $\mu _j \ge 0~~\forall j=1,\ldots,t_e$. $\xi _i > 0$
        indicates that $x_i$ reaches its upper bound and $\xi _i <0$
        indicates that $x_i$ reaches its lower bound. When
        (in {\tt mode}) ${\tt A}=0$ and ${\tt nf}>1$, $\zeta _i \geq0$.
        When (in {\tt mode}) ${\tt A}=1$, $\zeta _i >0$ refers to
        $+f_i(x)$ and $\zeta _i<0$ to $-f_i(x)$. 
       (cf.\ \S~\ref{specs} under item {\tt w}.)
\item[\tt hess]   Estimate of the Hessian matrix of the Lagrangian.
\item[\tt Ck]     The value $C_k$ as defined in Algorithm FSQP-NL.
\end{description}

\section{User-Supplied Subroutines}
At least two of the following four Fortran 77 subroutines, 
namely {\tt obj} and {\tt constr}, 
must be provided by the user in order to define the problem. 
The names of all four routines can be chosen at the user's will, 
as they are passed as arguments to FSQPD.

\subsection{Subroutine obj}
\label{subobj}
The subroutine {\bf obj}, to be provided by the user, 
computes the value of the objective functions. 
If $\mbox{\tt nf}=0$, a (dummy) subroutine must still be provided to
satisfy Fortran 77 compilation requirements (this may happen when the user
is only interested in finding a feasible point).
The specification of {\bf obj} for FSQPD is
\begin{quote}
\begin{verbatim}
          subroutine obj(nparam,j,x,fj)
          integer nparam,j
          double precision x(nparam),fj
    c     
    c     for given j, assign to fj the value of the jth objective
    c     evaluated at x 
    c
          return
          end
\end{verbatim}
\end{quote}
\noindent Arguments:
\begin{description}
\item[\tt nparam] {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]      {\bf (Input)}~Number of the objective to be computed.
\item[\tt x]      {\bf (Input)}~Current iterate.
\item[\tt fj]     {\bf (Output)}~Value of the {\tt j}th objective function
                  at {\tt x}.
\end{description}
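
As an illustration (not part of the FSQP distribution), a version of
this subroutine for a hypothetical problem with the two objectives
$f_1(x)=x_1^2+x_2^2$ and $f_2(x)=(x_1-2)^2+x_2^2$ (${\tt nf}=2$)
could read as follows; the name {\tt objex} is arbitrary, since the
routine is passed as an argument to FSQPD.
\begin{quote}
\begin{verbatim}
      subroutine objex(nparam,j,x,fj)
      integer nparam,j
      double precision x(nparam),fj
c
c     hypothetical example with nf = 2:
c     f1(x) = x1**2 + x2**2,  f2(x) = (x1-2)**2 + x2**2
c
      if (j .eq. 1) then
         fj = x(1)**2 + x(2)**2
      else
         fj = (x(1)-2.d0)**2 + x(2)**2
      end if
      return
      end
\end{verbatim}
\end{quote}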

\subsection{Subroutine constr}
\label{subconstr}
The subroutine {\bf constr}, to be provided by the user, 
computes the values of 
the constraints. If there are no constraints, 
a (dummy) subroutine must still be
provided to satisfy a Fortran 77 compilation requirement.
The specification of {\tt constr} for FSQPD is as follows
\begin{quote}
\begin{verbatim}
         subroutine constr(nparam,j,x,gj)
         integer nparam,j
         double precision x(nparam),gj
   c
   c     for given j, assign to gj the value of the jth constraint 
   c     evaluated at x
   c
         return
         end
\end{verbatim}
\end{quote}
\noindent Arguments:
\begin{description}
\item[\tt nparam]  {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]    {\bf (Input)}~Number of the constraint to be computed. 
\item[\tt x]    {\bf (Input)}~Current iterate.
\item[\tt gj]   {\bf (Output)}~Value of the {\tt j}th constraint at {\tt x}.
\end{description}
\bigskip
The order of the constraints must be as follows. 
First the {\tt nineqn} (possibly zero) nonlinear inequality constraints. 
Then the ${\tt nineq-nineqn}$ (possibly zero) linear inequality constraints. 
Finally, the {\tt neqn} (possibly zero) nonlinear equality constraints
followed by the ${\tt neq-neqn}$ (possibly zero) linear equality constraints.
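
As an illustration of this ordering, a hypothetical {\tt constr} for
one nonlinear inequality constraint $g_1(x)=x_1^2-x_2\leq 0$ followed
by one linear inequality constraint $g_2(x)=x_1+x_2-1\leq 0$ (so that
${\tt nineqn}=1$, ${\tt nineq}=2$ and ${\tt neqn}={\tt neq}=0$)
could read:
\begin{quote}
\begin{verbatim}
      subroutine cnstex(nparam,j,x,gj)
      integer nparam,j
      double precision x(nparam),gj
c
c     nonlinear inequality constraints come first (j = 1),
c     then the linear inequality constraints (j = 2)
c
      if (j .eq. 1) then
         gj = x(1)**2 - x(2)
      else
         gj = x(1) + x(2) - 1.d0
      end if
      return
      end
\end{verbatim}
\end{quote}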

\subsection{Subroutine gradob} 
\label{subgradob}
The subroutine {\bf gradob} computes the gradients of the 
objective functions.
The user may omit this routine and require that 
forward finite difference
approximations be used by FSQPD, by passing {\tt grobfd} instead~
(see argument {\tt gradob} of FSQPD in \S~\ref{specs}).
The specification of {\tt gradob} for FSQPD is as follows
\begin{quote}
\begin{verbatim}
      subroutine gradob(nparam,j,x,gradfj,dummy)
      integer nparam,j
      double precision x(nparam),gradfj(nparam)
      double precision dummy
      external dummy
c     
c     assign to gradfj the gradient of the jth objective function 
c     evaluated at x
c
      return
      end
\end{verbatim}
\end{quote}
\noindent{Arguments}:
\begin{description}
\item[\tt nparam] {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]      {\bf (Input)}~Number of objective for 
                  which gradient is to be computed.
\item[\tt x]      {\bf (Input)}~Current iterate.
\item[\tt gradfj]  {\bf (Output)}~Gradient of the {\tt j}th objective 
                   function at {\tt x}.
\item[\tt dummy]  {\bf (Input)}~Used by {\tt grobfd}.
\end{description}
Note that {\tt dummy} is 
passed as an argument to {\tt gradob} to allow for forward finite 
difference computation of the gradient.
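
For the hypothetical objectives used in \S~\ref{subobj} above, an
analytic version of this routine (again with an arbitrary name)
could read:
\begin{quote}
\begin{verbatim}
      subroutine grobex(nparam,j,x,gradfj,dummy)
      integer nparam,j
      double precision x(nparam),gradfj(nparam)
      double precision dummy
      external dummy
c
c     analytic gradients of the hypothetical objectives
c     f1(x) = x1**2 + x2**2,  f2(x) = (x1-2)**2 + x2**2
c
      if (j .eq. 1) then
         gradfj(1) = 2.d0*x(1)
      else
         gradfj(1) = 2.d0*(x(1)-2.d0)
      end if
      gradfj(2) = 2.d0*x(2)
      return
      end
\end{verbatim}
\end{quote}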

\subsection{Subroutine gradcn}
\label{subgradcn}
The subroutine {\bf gradcn} computes the gradients of the constraints. 
The user may omit this routine and require that forward 
finite difference approximations be used by FSQPD, by 
passing {\tt grcnfd} instead (see argument {\tt gradcn} of 
FSQPD in \S~\ref{specs}).
The specification of {\tt gradcn} for FSQPD is as follows
\begin{quote}
\begin{verbatim}
      subroutine gradcn (nparam,j,x,gradgj,dummy)
      integer nparam,j
      double precision x(nparam),gradgj(nparam)
      double precision dummy
      external dummy
c
c     assign to gradgj the gradient of the jth constraint
c     evaluated at x
c
      return
      end
\end{verbatim}
\end{quote}
\noindent{Arguments}:
\begin{description}
\item[\tt nparam]  {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]     {\bf (Input)}~Number of constraint for which
                               gradient is to be computed. 
\item[\tt x]       {\bf (Input)}~Current iterate.
\item[\tt gradgj]   {\bf (Output)}~Gradient of the {\tt j}th
                    constraint evaluated at {\tt x}.
\item[\tt dummy]  {\bf (Input)}~Used by {\tt grcnfd}.
\end{description}

\noindent Note that {\tt dummy} is passed as an argument
to {\tt gradcn} to allow for forward finite difference 
computation of the gradients.
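
Similarly, for the hypothetical constraints used in
\S~\ref{subconstr} above, an analytic version of this routine
could read:
\begin{quote}
\begin{verbatim}
      subroutine grcnex(nparam,j,x,gradgj,dummy)
      integer nparam,j
      double precision x(nparam),gradgj(nparam)
      double precision dummy
      external dummy
c
c     analytic gradients of the hypothetical constraints
c     g1(x) = x1**2 - x2,  g2(x) = x1 + x2 - 1
c
      if (j .eq. 1) then
         gradgj(1) = 2.d0*x(1)
         gradgj(2) = -1.d0
      else
         gradgj(1) = 1.d0
         gradgj(2) = 1.d0
      end if
      return
      end
\end{verbatim}
\end{quote}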

\section{Organization of FSQPD and Main Subroutines}
\subsection{Main Subroutines}
\label{mainorg}
FSQPD first checks for inconsistencies of the input parameters using the 
subroutine {\tt check}. It then checks whether the starting 
point given by the user satisfies the linear 
constraints and, if not, generates a point 
satisfying these constraints using
subroutine {\tt initpt}. It then calls FSQPD1 to generate a point
satisfying the linear and nonlinear inequality constraints. Finally, 
it attempts to find
a point satisfying the optimality condition, using FSQPD1 again.
\begin{description}
\item[\tt check] Check that all upper bounds on variables 
                 are no smaller than lower bounds; 
                 check that all input integers are nonnegative
                 and appropriate (${\tt nineq} \geq {\tt nineqn}$, etc.);
                 and check that {\tt eps} ($\epsilon$) 
                 and (if ${\tt neqn}\ne 0$) {\tt epseqn} 
                 ($\epsilon_e$) are at least as large as 
                 the machine precision {\tt epsmac} (computed by FSQPD).
\item[\tt initpt] Attempt to generate a feasible point satisfying 
                 simple bounds and all linear constraints.
\item[\tt FSQPD1] Main subroutine used possibly twice by FSQPD, 
                  first for generating
                  a feasible iterate as explained at the 
                  end of \S~\ref{algo} and
                  second for generating an optimal iterate 
                  from that feasible iterate.
\end{description}
FSQPD1 uses the following subroutines:
\begin{description}
\item[\tt dir] Compute the various directions $d_k^0$, $d^1_k$ and $\tilde d_k$.
\item[\tt step] Compute a step size along a certain search direction. 
               It is also called to check whether $x_k+d_k^\ell$ is acceptable 
               in {\it Step 1 v} of Algorithm FSQP-NL.
\item[\tt hesian] Perform the Hessian matrix updating.
\item[\tt out] Print the output for ${\tt iprint=1}$ 
                  or ${\tt iprint}=2$.
\item[\tt grobfd] (optional)~Compute the gradient of an objective 
                function 
                by forward finite differences with mesh size equal to 
$\mbox{sign}(x^i)\times\max \{{\tt udelta},~
               {\tt rteps}\times\max (1,\,|x^i|)\}$~
                for each component $x^i$ of $x$ ({\tt rteps} is the 
                square root of {\tt epsmac}, the machine 
                precision computed by FSQPD).
\item[\tt grcnfd]  (optional)~Compute the gradient of a constraint by 
                   forward finite differences with mesh size equal to 
$\mbox{sign}(x^i)\times\max \{{\tt udelta},~
                {\tt rteps}\times\max (1,\,|x^i|)\}$~
                for each component $x^i$ of $x$ ({\tt rteps} is the 
                square root of {\tt epsmac}, the machine 
                precision computed by FSQPD); a sketch of this
                computation is given after this list.
\end{description}
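
The following is a minimal sketch of such a forward finite difference
computation for an objective gradient. It is {\it not} the actual code
of {\tt grobfd} (whose argument list matches that of {\tt gradob});
the routine name and argument list here are purely illustrative.
\begin{quote}
\begin{verbatim}
      subroutine fdgrad(nparam,j,x,gradfj,obj,udelta,rteps)
      integer nparam,j,i
      double precision x(nparam),gradfj(nparam),udelta,rteps
      double precision fj,fplus,delta,xi
      external obj
c
c     forward finite difference gradient of the jth objective,
c     using the mesh size formula quoted above
c
      call obj(nparam,j,x,fj)
      do 10 i = 1,nparam
         delta = max(udelta, rteps*max(1.d0,abs(x(i))))
         if (x(i) .lt. 0.d0) delta = -delta
         xi = x(i)
         x(i) = xi + delta
         call obj(nparam,j,x,fplus)
         gradfj(i) = (fplus - fj)/delta
         x(i) = xi
 10   continue
      return
      end
\end{verbatim}
\end{quote}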

\subsection{Other Subroutines}
\label{othsub}
In addition to QLD, the following subroutines are used:
\begin{verbatim}
  diagnl  di1     dqp     error   estlam  fool    fuscmp  indexs  matrcp
  matrvc  nullvc  resign  sbout1  sbout2  scaprd  shift   slope   small
\end{verbatim}

\subsection{Reserved Common Blocks}
\label{reserved}
The following named common blocks are used in FSQPD and QLD, and
should therefore not be redefined in user code:
\begin{verbatim}
  fsqpp1  fsqpp2  fsqpp3  fsqpq1  fsqpq2  fsqplo  fsqpqp  fsqpst  CMACHE
\end{verbatim}


\input manua2
\input macros.tex
\documentstyle[12pt]{manual}
\pagestyle{myheadings}
\markboth{User's Guide for FSQP}{User's Guide for FSQP}
\renewcommand{\baselinestretch}{1.08} % more interline spacing
 \textheight=8.3in
\topmargin=-.2in
\textwidth=6.5in
\oddsidemargin=-.15cm
\tolerance=1000 % to avoid overfull boxes
 \pagenumbering{arabic}
\begin{document}
\thispagestyle{empty}
\begin{titlepage}
\begin{center}
{\large \bf User's Guide for FSQP Version 3.1:\\
\vspace{1mm}
    A FORTRAN Code for Solving Constrained Nonlinear \\
\vspace{1mm} 
    (Minimax) Optimization Problems, Generating Iterates \\
\vspace{1mm}
    Satisfying All Inequality and Linear Constraints\footnote{
This research was supported in part by NSF's Engineering Research Centers 
Program No. NSFD-CDR-88-03012, by NSF grant No. DMC-88-15996 and by a grant
from the Westinghouse Corporation.}}\\
\vspace{4mm}
          {\it Jian L. Zhou and Andr\'{e} L. Tits} \\
\vspace{4mm}
    Electrical Engineering Department\\
            and\\
    Institute for Systems Research\\
    University of Maryland, College Park, MD 20742\\
     (Systems Research Center TR-92-107r2)
\end{center}
\vspace{3mm}    
\noindent{\bf Abstract}
\vspace{1em}    

\hspace{4mm}FSQP 3.1 is a set of FORTRAN subroutines
for the minimization of the maximum of a set of smooth 
objective functions (possibly a single one) subject to 
general smooth constraints.
If the initial guess provided by the user is infeasible for
some inequality constraint or some linear equality constraint, FSQP first 
generates a feasible point for these constraints; 
subsequently the successive iterates generated by
FSQP all satisfy these constraints. Nonlinear equality constraints
are turned into inequality constraints (to be satisfied by all iterates)
and the maximum of the objective functions is replaced 
by an exact penalty function which
penalizes nonlinear equality constraint violations only. 
The user has the option of either
requiring that the (modified) objective function decrease 
at each iteration after feasibility for nonlinear inequality and
linear constraints has been reached (monotone line search), or
requiring a decrease within at most four iterations (nonmonotone line search).
He/She must provide subroutines that define the objective functions
and constraint functions and may either provide subroutines
to compute the gradients of these functions or require that FSQP
estimate them by forward finite differences.

\hspace{4mm} FSQP 3.1 implements two algorithms based on Sequential
Quadratic Programming~(SQP),~modified so as to generate
feasible iterates. In the first one (monotone line search), a certain 
Armijo type arc search is used with the property that the step of one
is eventually accepted, a requirement for superlinear convergence.
In the second one the same effect is achieved by means 
of a (nonmonotone) search along a straight line. The merit function
used in both searches is the maximum of the objective functions if
there is no nonlinear equality constraint.
\end{titlepage}

\begin{titlepage}
\centerline{\bf Conditions for External Use}
\bigskip
\begin{enumerate}
\item The FSQP routines may not be distributed to third parties. 
      Interested parties should contact the authors directly.
\item If modifications are performed on the routines, these
      modifications will be communicated to the authors. 
      The modified routines will remain
      the sole property of the authors.
\item Due acknowledgment must be made of the use of the FSQP routines in
      research reports or publications. A copy of such reports or 
      publications should be forwarded to the authors.
\item The FSQP routines may not be used in industrial production,
      unless this has been agreed upon with the authors in writing.
\end{enumerate}

\bigskip\noindent
{\bf User's Guide for FSQP Version 3.1 (Released November 1992)} \\
Copyright {\copyright} 1989 --- 1992 by Jian L. Zhou and Andr\'e L. Tits\\
All Rights Reserved.
%Copyright {\copyright} 1992, University of Maryland at College Park.
%All Rights Reserved. \\
%(Developed by Jian L. Zhou and Andr\'e L. Tits.)

\bigskip
\bigskip
\noindent Enquiries should be directed to 

\bigskip
\hspace{5em}Prof. Andr\'e L. Tits

\hspace{5em}Electrical Engineering Dept.

\hspace{5em}and Institute for Systems Research

\hspace{5em}University of Maryland

\hspace{5em}College Park, Md 20742

\hspace{5em}U. S. A.

\smallskip
\hspace{5em}Phone$\,\,$:~~~301-405-3669

\hspace{5em}Fax~~~$\,\;$:~~~301-405-6707

\hspace{5em}E-mail$\,$:~~~andre@src.umd.edu
\end{titlepage}

%\begin{titlepage}
\tableofcontents
%\end{titlepage}

\newpage
\section{Introduction}
FSQP~(Feasible Sequential Quadratic Programming) 3.1
is a set of FORTRAN subroutines
for the minimization of the maximum of a set of smooth 
objective functions (possibly a single one) subject to 
nonlinear equality and inequality constraints, 
linear equality and inequality constraints, 
and simple bounds on the variables. Specifically, FSQP
tackles optimization problems of the form
\smallskip
$$
  (P)~~~~~~ \min ~ \max\limits_{i\in I^f} \{f_i(x)\} 
                             \mbox{~~~s.t.~~}x\in X
$$
where $I^f=\{1,\ldots,n_f\}$ and $X$ is the set of points $x\in R^n$ 
satisfying
$$\begin{array}{l}
      bl \leq x \leq bu  \\
      g_j(x) \leq 0,~~~j=1,\ldots,n_i\\
      g_j(x)\equiv \langle c_{j-n_i},x\rangle - d_{j-n_i} \leq 0, 
      ~~~j=n_i+1,\ldots,t_i \\
      h_j(x)=0,~~~j=1,\ldots,n_e\\
      h_j(x)\equiv\langle a_{j-n_e},x \rangle-b_{j-n_e}=0, ~~~j=n_e+1,\ldots,t_e
\end{array}$$
with $bl\in R^n$; $bu\in R^n$; 
$f_i:R^n\rightarrow R,$ $i=1,\ldots,n_f$ smooth;
$g_j:R^n\rightarrow R,~j=1,\ldots,n_i$ nonlinear and smooth;
$c_j\in R^n$, $d_j\in R$, $j=1,\ldots,t_i-n_i$;
$h_j:R^n\rightarrow R,~j=1,\ldots,n_e$ nonlinear and smooth;
$a_j\in R^n$, $b_j\in R$, $j=1,\ldots,t_e-n_e$. 

If the initial guess provided by the user is infeasible for nonlinear 
inequality constraints and linear constraints, FSQP first 
generates a point satisfying all these constraints
by iterating on the problem of minimizing
the maximum of these constraints. Then, 
using Mayne-Polak's scheme\Lspace \Lcitemark 1\Rcitemark \Rspace{},
nonlinear equality constraints are turned into 
inequality constraints\footnote{For every $j$ for which $h_j(x_0)>0$
($x_0$ is the initial point), ``$h_j(x)=0$'' is first replaced by 
``$-h_j(x)=0$'' and $-h_j$ is renamed $h_j$.}
$$h_j(x)\leq 0,~~~~j=1,\ldots,n_e$$
and the original objective function $\max_{i\in I^f}\{f_i(x)\}$ 
is replaced by the modified objective function
$$f_m(x,p)=\max\limits_{i\in I^f}\{f_i(x)\}-\sum_{j=1}^{n_e}p_jh_j(x),$$
where $p_j$, $j=1,\ldots,n_e$, are positive penalty parameters
and are iteratively adjusted.
The resulting optimization problem therefore involves only 
linear constraints and nonlinear inequality constraints.
Subsequently, the successive iterates generated by
FSQP all satisfy these constraints. The user has the option of 
either requiring that the exact penalty function 
(which reduces to the maximum of the objective functions when there
are no nonlinear equality constraints) decrease at each iteration
after feasibility for the original nonlinear inequality and linear
constraints has been reached,
or requiring a decrease within at most three iterations.
He/She must provide subroutines that define the objective functions
and constraint functions and may either provide subroutines
to compute the gradients of these functions or require that FSQP
estimate them by forward finite differences.

Thus, FSQP 3.1 solves the original problem with nonlinear equality constraints
by solving a modified optimization problem with only linear constraints 
and nonlinear inequality constraints.  For the transformed problem,
it implements algorithms that are described 
and analyzed in\Lspace \Lcitemark 2\Rcitemark \Rspace{},
\Lcitemark 3\Rcitemark \Rspace{} and\Lspace \Lcitemark 4\Rcitemark \Rspace{},
with some additional refinements.
These algorithms are based on a Sequential Quadratic Programming~(SQP)
iteration, modified so as to generate feasible iterates. 
The merit function is the objective function. 
An Armijo-type line search is used to generate an initial feasible point 
when required.
After obtaining feasibility, either $(i)$ an Armijo-type line
search may be used, yielding a monotone decrease of the 
objective function at each iteration\Lspace \Lcitemark 2\Rcitemark \Rspace{}; 
or $(ii)$ a nonmonotone line 
search (inspired from\Lspace \Lcitemark 5\Rcitemark \Rspace{} and analyzed
in\Lspace \Lcitemark 3\Rcitemark \Rspace{} and\Lspace \Lcitemark 4\Rcitemark \Rspace{} in this context)
may be selected, forcing a decrease of 
the objective function within at most four iterations.
In the monotone line search scheme, the SQP direction is first 
``tilted'' if nonlinear constraints are present
to yield a feasible direction, then possibly ``bent'' to ensure 
that close to a solution the step of one is accepted, 
a requirement for superlinear convergence.
The nonmonotone line search scheme achieves superlinear convergence 
with no bending of the search direction, thus avoiding function 
evaluations at auxiliary points and subsequent solution of 
an additional quadratic program. After turning nonlinear equality
constraints into inequality constraints, these algorithms are
used directly to solve the modified problems. Certain procedures
(see, e.g.,\Lspace \Lcitemark 6\Rcitemark \Rspace{})
are adopted to obtain proper values of $p_j$'s in order to
ensure that a solution of the modified problem is also a solution
of the original problem, as described below.

For the solution of the quadratic programming subproblems, FSQP 3.1
is set up to call QLD\Lspace \Lcitemark 7\Rcitemark \Rspace{}, which
is provided with the FSQP distribution for the user's convenience.

\section{Description of the Algorithms}
The algorithms described and analyzed 
in\Lspace \Lcitemark 2\Rcitemark \Rspace{},\Lspace \Lcitemark 3\Rcitemark \Rspace{}
and\Lspace \Lcitemark 4\Rcitemark \Rspace{} are as follows.
Given a feasible iterate $x$, the basic SQP direction
$d^0$ is first computed by solving a standard quadratic program 
using a positive definite estimate $H$ of 
the Hessian of the Lagrangian. 
$d^0$ is a direction of descent for the objective function; it is 
almost feasible in the sense that it is at worst tangent to 
the feasible set if there are nonlinear constraints and it is feasible 
otherwise.

In\Lspace \Lcitemark 2\Rcitemark \Rspace{},
an essentially arbitrary feasible descent direction $d^1=d^{1}(x)$ is 
then computed. Then for a certain 
scalar $\rho =\rho (x)\in [0,1]$, a feasible descent 
direction $d=(1-\rho)d^0+\rho d^1$ is obtained, asymptotically 
close to $d^0.$ Finally a second order 
correction $\tilde d=\tilde{d}(x,d,H)$ is computed, involving
auxiliary function evaluations at $x+d,$ 
and an Armijo type search is performed along the 
arc $x+td+t^2 \tilde d.$
The purpose of $\tilde d$ is to allow a full step of one to be taken
close to a solution, thus allowing superlinear convergence to 
take place. Conditions are given 
in\Lspace \Lcitemark 2\Rcitemark \Rspace{} on 
$d^{1}(\cdot)$, $\rho(\cdot)$ and $\tilde d(\cdot ,\cdot)$ 
that result in a globally convergent, 
locally superlinear convergent algorithm.

The algorithm in\Lspace \Lcitemark 3\Rcitemark \Rspace{} is somewhat
more sophisticated. An essential difference is that while feasibility
is still required, the requirement of decrease of the max objective
value is replaced by the weaker requirement that the max
objective value at the new point be lower than its maximum over the last
four iterates. The main payoff is that the auxiliary function 
evaluations
can be dispensed with, except possibly at the first few iterations.
First a direction $d^1=d^1(x)$ is computed, which is feasible even at
Karush-Kuhn-Tucker points. Then for a certain 
scalar $\rho ^{\ell} =\rho  ^{\ell}(x)\in [0,1],$ 
a ``local'' feasible 
direction $d ^{\ell}=(1-\rho ^{\ell})d^0+\rho  ^{\ell}d^1$ is obtained, 
and at $x+d^{\ell}$ the objective functions are tested 
and feasibility is
checked. If the requirements pointed out above are satisfied, $x+d^\ell$
is accepted as next iterate. This will always be the case close to a
solution. Whenever $x+d^\ell$ is not accepted, a ``global'' 
feasible {\it descent}
direction $d ^g=(1-\rho ^g)d^0+\rho  ^gd^1$ is obtained with
$\rho ^g =\rho  ^g(x)\in [0,\rho ^{\ell}].$ 
A second order correction $\tilde d=\tilde{d}(x,d^g,H)$ is computed
the same way as in\Lspace \Lcitemark 2\Rcitemark \Rspace{},
and a ``nonmonotone'' search is performed along the 
arc $x+td^g+t^2 \tilde d.$ 
Here the purpose of $\tilde d$ 
is to suitably initialize the sequence for the ``four iterate'' rule.
Conditions are given in\Lspace \Lcitemark 3\Rcitemark \Rspace{} on 
$d^{1}(\cdot)$, $\rho ^{\ell}(\cdot)$, $\rho ^g(\cdot)$, 
and $\tilde d(\cdot ,\cdot)$ that result in a 
globally convergent, locally superlinear convergent algorithm.
In\Lspace \Lcitemark 4\Rcitemark \Rspace{}, the algorithm
of\Lspace \Lcitemark 3\Rcitemark \Rspace{} is refined
for the case of unconstrained minimax problems.
The major difference over the algorithm
of\Lspace \Lcitemark 3\Rcitemark \Rspace{}
is that there is no need for $d^1$.
As in\Lspace \Lcitemark 3\Rcitemark \Rspace{}, $\tilde d$ is required
to initialize superlinear convergence.
 
The FSQP implementation corresponds to a specific choice of
$d^1(\cdot)$, $\rho(\cdot)$, $\tilde{d}(\cdot,\cdot)$, 
$\rho^\ell(\cdot)$, and $\rho^g(\cdot)$,
with some modifications as follows. If the first algorithm
is used, $d^1$ is computed as 
a function not only of $x$ but also of $d^0$~(thus of $H$), as it
appears beneficial to keep $d^1$ relatively close to $d^0$.
In the case of the second algorithm, the construction
of $d^{\ell}$ is modified so that the function
evaluations at different auxiliary points can 
be avoided during early iterations 
when $\rho ^g\neq \rho ^{\ell}$; 
the quadratic program that yields $\tilde{d}$ involves only a 
subset of ``active'' functions, thus decreasing the number
of function evaluations.
The details are given below.
The analysis in\Lspace \Lcitemark 2\Rcitemark \Rspace{},
\Lcitemark 3\Rcitemark \Rspace{} and\Lspace \Lcitemark 4\Rcitemark \Rspace{}
can be easily extended  to these modified algorithms.
Also obvious simplifications are introduced concerning
linear constraints: the iterates are allowed (for inequality constraints)
or are forced (for equality constraints) to stay
on the boundary of these constraints and these constraints
are not checked in the line search. Finally, FSQP automatically switches to
a ``phase 1'' mode if the initial guess provided by 
the user is not in the feasible set.

Below we call FSQP-AL
the algorithm with the Armijo line search, and FSQP-NL the algorithm
with nonmonotone line search. We make use of the notations
$$f_{I^f}(x)=\max\limits _{i\in I^f} \{f_i(x)\}$$
$$f'(x,d,p)=\max\limits_{i\in I^f}\{f_i(x)+
    \langle \nabla f_i(x),d\rangle\} - f_{I^f}(x)
    -\sum\limits_{j=1}^{n_e}p_j\langle\nabla h_j(x),d\rangle$$
and, for any subset $I\subset I^f$,
$$\tilde {f}'_I(x+d,x,\tilde d,p)=\max\limits_{i\in I}\{f_i(x+d)+
    \langle \nabla f_i(x),\tilde d\rangle\} - f_{I}(x+d)
    -\sum\limits_{j=1}^{n_e}p_j\langle\nabla h_j(x),\tilde d\rangle.$$
At each iteration $k$, the quadratic program $QP(x_k,H_k,p_k)$ that yields
the SQP direction $d^0_k$ is defined
at $x_k$ for $H_k$ symmetric positive definite by
\smallskip
$$\begin{array}{ll}
   \min\limits_{d^0\in R^n}~~ &  {1 \over {2}}\langle {d^0},H_k {d^0}
                 \rangle+f'(x_k,d^0,p_k)  \\
  {\rm ~~s.t.} &  bl \leq x_k+d^0 \leq bu \\
              & g_j(x_k)+\langle\nabla g_j(x_k),d^0 \rangle
             \leq 0, ~~~j=1,\ldots , t_i \\
              & h_j(x_k)+\langle\nabla h_j(x_k),d^0 \rangle
             \leq 0, ~~~j=1,\ldots ,n_e \\
              & \langle a_j,x_k + d^0 \rangle=b_j,
             ~~~j=1,\ldots , t_e-n_e. \end{array}$$
Let $\zeta _{k,j}$'s with $\sum_{j=1}^{n_f} \zeta _{k,j} =1$, 
$\xi_{k,j}$'s, $\lambda _{k,j}$'s, and $\mu_{k,j}$'s denote 
the multipliers, for the various objective functions, simple 
bounds (only $n$ possible active bounds at each iteration), inequality,
and equality constraints respectively, associated 
with this quadratic program. 
Define the set of active objective functions, 
for any $i$ such that $\zeta_{k,i}>0$, by
$$
I^f_k(d_k)=\{j\in I^f: |f_j(x_k)-f_i(x_k)|\leq 
0.2\|d_k\|\cdot\|\nabla f_j(x_k)-\nabla f_i(x_k)\|\}
\cup\{j\in I^f:\zeta_{k,j}>0\}
$$
and the set of active constraints by
$$
I^g_k(d_k)\!=\!\{j\!\in\!\{1,\ldots,t_i\}:|g_j(x_k)|\leq
0.2\|d_k\|\cdot\|\nabla g_j(x_k)\|\}
\cup\{j\in\{1,\ldots,t_i\}:\lambda_{k,j}>0\}.
$$

\vspace{1em}
\noindent{\bf Algorithm FSQP-AL.}

\vspace{1em}
\noindent{\it Parameters.} $\eta =0.1$, $\nu=0.01$, $\alpha=0.1$,
$\beta=0.5$, $\kappa = 2.1$, $\tau _1=\tau _2 = 2.5$, $\underline t=0.1$,
$\epsilon_1=1$, $\epsilon_2=10$, $\delta=5$.

\smallskip
\noindent{\it Data.} $x_0\in R^n$, $\epsilon > 0$, $\epsilon_e>0$ and
$p_{0,j}=\epsilon_2$ for $j=1,\ldots,n_e$. 

\smallskip
\noindent{\it Step 0: Initialization.} Set $k=0$ and $H_0=$ the 
identity matrix. Set $nset=0$. If $x_0$ is infeasible for some constraint
other than a nonlinear equality constraint,
substitute a feasible point, obtained as discussed below.
For $j=1,\ldots,n_e$, replace $h_j(x)$ by $-h_j(x)$ whenever
$h_j(x_0)>0$.

\smallskip
\noindent{\it Step 1: Computation of a search arc.}

\begin{itemize}
\item[\it i.]Compute $d_{k}^{0}$, the solution of the quadratic program
$QP(x_k,H_k,p_k)$.  
If $\|d_k^0\|\leq \epsilon$
and $\sum_{j=1}^{n_e}|h_j(x_k)|\leq \epsilon_e$, stop. 
If $n_i+n_e=0$ and $n_f=1,$ set $d_k=d^0_k$ and $\tilde d_k =0$ and 
go to {\it Step~2}. If $n_i+n_e=0$ and $n_f > 1$, set $d_k=d^0_k$ and 
go to {\it Step~1~iv}.

\item[\it ii.]Compute $d_{k}^{1}$ by solving the strictly convex 
quadratic program
\smallskip
$$  \begin{array}{ll} \min\limits_{d^1\in R^n,\gamma \in R}  
                    & \frac{\eta}{2} 
                  \langle d_{k}^{0}-d^1,d_{k}^{0}-d^1 \rangle +\gamma \\
  {\rm ~~~~s.t.} &  bl \leq x_k+d^1 \leq bu\\
   &  f'(x_k,d^1,p_k) \leq \gamma\\
   &  g_j(x_k)+\langle \nabla g_j(x_k),d^1 \rangle
      \leq\gamma, ~~~~j=1,\ldots,n_i\\
   & \langle c_j,x_k  + d^1 \rangle \leq d_j,
      ~~~~j=1,\ldots,t_i-n_i \\
   &  h_j(x_k)+\langle \nabla h_j(x_k),d^1 \rangle
      \leq\gamma, ~~~~j=1,\ldots,n_e\\
   & \langle a_j,x_k  + d^1 \rangle=b_j,
                      ~~~~j=1,\ldots,t_e-n_e\end{array}$$
\smallskip
\item[\it iii.] Set $d_k=(1-\rho_k)d_k^0+\rho_kd_k^1$~
           with $\rho_k=\|d_k^0\|^{\kappa}/(\|d_k^0\|^{\kappa}+v_k)$,~
        where $v_k = \max(0.5,~\|d_k^1\|^{\tau _1}).$

\item[\it iv.] 
Compute $\tilde d_k$ by solving the strictly convex 
quadratic program
\smallskip
$$\begin{array}{ll} \min\limits_{\tilde d \in R^n} &  \frac{1}{2}
                  \langle (d_k+\tilde d),H_{k}(d_k+\tilde d)\rangle 
                 +f'_{I^f_k(d_k)}(x_k,d_k,\tilde d,p_k) \\
  {\rm ~s.t.} &  bl \leq x_k+d_k+\tilde d \leq bu\\
             & g_j(x_k+d_k) +\langle \nabla g_j(x_k),\tilde d\rangle\leq
                    -\min(\nu\|d_k\|,~\|d_k\|^{\tau _2}),~
                    j\in I^g_k(d_k)\cap\{j:j\leq n_i\}\\
%            & \hspace{20em} j\in I^g_k(d_k)\cap\{j:j\leq n_i\}\\
             & \langle c_{j-n_i},x_k+d_k + \tilde d \rangle \leq d_{j-n_i},
                     ~~~~j\in I^g_k(d_k)\cap\{j:j>n_i\}\\
             & h_j(x_k+d_k) +\langle \nabla h_j(x_k),\tilde d\rangle\leq
                    -\min(\nu\|d_k\|,~\|d_k\|^{\tau _2}),~j=1,\ldots,n_e\\
             & \langle a_j,x_k+d_k + \tilde d \rangle=b_j,
                      ~~~~j=1,\ldots,t_e-n_e\end{array}$$
where $f'_{I^f_k(d_k)}(x_k,d_k,\tilde d,p_k)=f'(x_k,d_k+\tilde d,p_k)$ 
if $n_f = 1,$ and
$f'_{I^f_k(d_k)}(x_k,d_k,\tilde d,p_k)=\tilde{f}'_{I^f_k(d_k)}(x_k+d_k,x_k,\tilde d,p_k)$ 
if $n_f > 1$.
If the quadratic program has no solution or 
if $\|\tilde d_k\|>\|d_{k}\|$, set $\tilde d_k=0$.
\end{itemize}

\noindent{\it Step 2. Arc search.} 
Let $\delta _k=f'(x_k,d_k,p_k)$ if $n_i+n_e\ne 0$ 
and $\delta _k=-\langle d_k^0,H_kd_k^0\rangle$ otherwise.
Compute $t_{k}$, the first number $t$ in 
the sequence $\{1,\beta,\beta^{2},\ldots\}$ satisfying
\begin{eqnarray*}
\textstyle
& f_m(x_{k}+td_{k}+t^{2}\tilde d_{k},p_k)\leq f_m(x_k,p_k)+\alpha t\delta_k & \\
& g_j(x_k+td_k+t^2\tilde d_k)\leq0,~~j=1,\ldots,n_i & \\
&\langle c_{j-n_i},x_k+td_k + t^2\tilde{d}_k \rangle \leq d_{j-n_i},
                ~~~~\forall j>n_i~\&~j\not\in I^g_k(d_k) &\\
&h_j(x_k+td_k+t^2\tilde d_k)\leq0,~~j=1,\ldots,n_e. &
\end{eqnarray*}
Specifically, the line search proceeds as follows.
First, the linear constraints that were not used
in computing $\tilde{d}_k$ are checked until all of them are
satisfied, resulting in a stepsize, say, $\bar{t}_k$. Due to 
the convexity of linear constraints, these constraints
will be satisfied for any $t\leq \bar{t}_k$. Then, for $t=\bar{t}_k$,
nonlinear constraints are checked first and,
for both objectives and constraints, those with nonzero 
multipliers in the QP yielding $d^0_k$ are evaluated first.
For $t<\bar{t}_k$, the function that caused the previous value of $t$ to
be rejected is checked first; all functions of the same type
(``objective'' or ``constraint'') as the latter
will then be checked first.

\smallskip
\smallskip
\noindent{\it Step 3. Updates.} 
\begin{itemize}
\item[$\cdot$] If $nset>5n$ and $t_k<\underline t$, set $H_{k+1}=H_0$ 
and $nset=0$.
Otherwise, set $nset=nset+1$ and compute a new approximation $H_{k+1}$ 
to the Hessian of the Lagrangian using the BFGS formula with Powell's 
modification\Lspace \Lcitemark 8\Rcitemark \Rspace{}.
\item[$\cdot$] Set $x_{k+1}=x_{k}+t_{k}d_{k}+t_{k}^{2}\tilde d_{k}$.
\item[$\cdot$] Solve the unconstrained 
quadratic problem in $\bar{\mu}$
$$\begin{array}{cl}
\min\limits_{\bar{\mu}\in R^{t_e}} &
\|\sum\limits_{j=1}^{n_f}\zeta _{k,j}\nabla f_j(x_k)+
\xi_k+\sum\limits_{j=1}^{t_i}\lambda_{k,j}\nabla g_j(x_k)
  +\sum\limits_{j=1}^{t_e}\bar{\mu}_j\nabla h_j(x_k)\|^2,
\end{array}$$
where the $\zeta_{k,j}$'s, $\xi_k$ and the $\lambda_{k,j}$'s
are the multipliers associated with $QP(x_k,H_k,p_k)$ for the objective
functions, variable bounds, and inequality constraints 
respectively.\footnote{This is a refinement (saving much computation
and memory) of the scheme proposed in\Lspace \Lcitemark 1\Rcitemark \Rspace{}.}
Update $p_k$ as follows: for $j=1,\ldots,n_e$,
$$p_{k+1,j}=\left\{\begin{array}{ll}
p_{k,j} & \mbox{if } p_{k,j}+\bar\mu_j \geq \epsilon_1\\
\max\{\epsilon_1-\bar\mu_j,~\delta p_{k,j}\} & \mbox{otherwise.}
\end{array}\right.$$
\item[$\cdot$] Increase $k$ by 1.
\item[$\cdot$] Go back to {\it Step 1}.
\end{itemize}

\hfill{\large \bf $\Box$}
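
As a numerical illustration of the penalty update in {\it Step 3}:
with $\epsilon_1=1$ and $\delta=5$, if $p_{k,j}=1$ and
$\bar\mu_j=-2$, then $p_{k,j}+\bar\mu_j=-1<\epsilon_1$, so
$p_{k+1,j}=\max\{\epsilon_1-\bar\mu_j,~\delta p_{k,j}\}
=\max\{3,~5\}=5$.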
 
\vspace{1em}
\noindent{\bf Algorithm FSQP-NL.}

\vspace{1em}
\noindent{\it Parameters.} $\eta =3.0$, $\nu=0.01$,
$\alpha=0.1$, $\beta=0.5$, $\theta=0.2$, $\bar{\rho}=0.5$, $\gamma = 2.5$,
$\underline{C}=0.01$, $\underline{d}=5.0$, $\underline t=0.1$,
$\epsilon_1=0.1$, $\epsilon_2=10$, $\delta=5$.

\smallskip
\noindent{\it Data.} $x_0\in R^n$, $\epsilon > 0$, $\epsilon_e>0$ and
$p_{0,j}=\epsilon_2$ for $j=1, \ldots, n_e$. 

\smallskip
\noindent{\it Step 0: Initialization.} Set $k=0$, $H_0=$ the identity 
matrix, and $C_0 = \underline{C}.$ If $x_0$ is infeasible for 
constraints other than nonlinear equality constraints, substitute a
feasible point, obtained as discussed below. 
Set $x_{-3}=x_{-2}=x_{-1}=x_0$ and $nset=0$.
For $j=1,\ldots,n_e$, replace $h_j(x)$ by $-h_j(x)$ whenever
$h_j(x_0)>0$.

\smallskip
\noindent{\it Step 1: Computation of a new iterate.}

\begin{itemize}
\item[\it ~~~i.] Compute $d_{k}^{0}$, the solution of the quadratic program
$QP(x_k,H_k,p_k)$.
%Compute the Kuhn-Tucker vector 
%$$\begin{array}{lll}
%\nabla L(x_k,\zeta_k,\xi_k,\lambda_k,\mu_k,p_k)& = &
%\sum\limits_{j=1}^{n_f} \zeta _{k,j}\nabla f_j(x_k)+
%\sum\limits_{j=1}^{n} \xi _{k,j}+\sum\limits_{j=1}^{t_i} 
%  \lambda _{k,j}\nabla g_j(x_k) \\
%& & ~~~+\sum\limits_{j=1}^{n_e}(\mu_{k,j}-p_{k,j})\nabla h_j(x_k)
%    +\sum\limits_{j=n_e+1}^{t_e}\mu_{k,j}\nabla h_j(x_k).\end{array}$$

%If $\|\nabla L(x_k,\zeta_k,\xi_k,\lambda_k,\mu_k,p_k)\|\leq \epsilon$
If $\|d_k^0\|\leq \epsilon$
and $\sum_{j=1}^{n_e}|h_j(x_k)|\leq\epsilon_e$, stop. 
If $n_i+n_e=0$ and $n_f=1,$ set $d_k=d^0_k$ and $\tilde d_k =0$ and 
go to {\it Step~1~viii}. If $n_i+n_e=0$ and $n_f >1,$ 
set $\rho _k^{\ell}=\rho _k^g=0$ and go to {\it Step~1~v}.

\item[\it ~~ii.] Compute $d_{k}^{1}$ by solving the strictly convex 
quadratic program
\smallskip
$$  \begin{array}{ll} \min\limits_{d^1\in R^n,\gamma \in R}  
                                 & \frac{\eta}{2}\|d^1\|^2+\gamma \\
  {\rm ~~~~s.t.} &  bl \leq x_k+d^1 \leq bu\\
      &  g_j(x_k)+\langle \nabla g_j(x_k),d^1 \rangle
       \leq\gamma, ~~~~j=1,\ldots,n_i\\
      & \langle c_j,x_k  + d^1 \rangle \leq d_j,
            ~~~~j=1,\ldots,t_i-n_i \\
      &  h_j(x_k)+\langle \nabla h_j(x_k),d^1 \rangle
       \leq\gamma, ~~~~j=1,\ldots,n_e\\
      & \langle a_j,x_k  + d^1 \rangle=b_j,
                      ~~~~j=1,\ldots,t_e-n_e\end{array}$$

\item[\it ~iii.] Set $v_{k}=\min \{C_k\|d^0_k\|^2,\|d^0_k\|\}$. 
For $j=1,\ldots,n_i$, define $\rho^g_{k,j}$ to be zero if
\smallskip
$$g_j(x_k)+\langle \nabla g_j(x_k),d^0_k\rangle \leq -v_k$$
\smallskip
and, otherwise, to be the maximum $\rho$ in $[0,1]$ such that
\smallskip
$$g_j(x_k)+\langle \nabla g_j(x_k),(1-\rho)d^0_k+
            \rho d^1_k\rangle \geq -v_k.$$
\smallskip
Similarly, define the values $\rho^h_{k,j}$ for $j=1,\ldots,n_e$.
Let $$\rho ^{\ell}_k=\max\left\{\max _{j=1,\ldots,n_i}\{\rho^g_{k,j}\},~
\max _{j=1,\ldots,n_e}\{\rho^h_{k,j}\}\right\}.$$

\item[\it ~~iv.] Define $\rho _k^g$ as the largest number $\rho$
in $[0,\rho ^{\ell}_k]$ such that
\smallskip
$$f'(x_k,(1-\rho)d^0_k+\rho d^1_k,p_k)\leq \theta f'(x_k,d^0_k,p_k).$$
If ($k\geq 1$ \& $t_{k-1}<1$) or ($\rho _k^{\ell} > \bar{\rho}$), set 
$\rho _k^\ell = \min \{\rho _k^\ell, \rho _k^g\}.$

\item[\it ~~~v.] Construct a ``local'' direction
\smallskip
$$d_k^{\ell}=(1-\rho _k^{\ell})d^0_k+\rho _k^{\ell} d^1_k.$$
Set $M=3$, $\delta_k=f'(x_k,d_k^0,p_k)$ if $n_i+n_e\ne 0$, 
and $M=2$, $\delta_k=-\langle d_k^0,H_kd_k^0\rangle$ otherwise.
If
$$f_m(x_k+d^{\ell}_k,p_k)\leq 
\max\limits_{\ell=0,\ldots,M}\{f_m(x_{k-\ell},p_k)\} +
       \alpha \delta_k$$
$$g_j(x_k+d^{\ell}_k)\leq 0,~~j=1,\ldots,n_i$$
and
$$h_j(x_k+d^{\ell}_k)\leq 0,~~j=1,\ldots,n_e,$$
\smallskip
set $t_k=1$, $x_{k+1}=x_k+d_k^{\ell}$ and go to {\it Step 2}.

\item[\it ~~vi.] Construct a ``global'' direction
\smallskip
$$d_k^{g}=(1-\rho _k^{g})d^0_k+\rho _k^{g}d^1_k.$$

\item[\it ~vii.] 
Compute $\tilde d_{k}$ by solving the strictly convex 
quadratic program
\smallskip
$$  \begin{array}{cl} \min\limits_{\tilde d \in R^n} &  \frac{1}{2}
                  \langle (d_k^g+\tilde d),H_{k}(d^g_k+\tilde d)\rangle 
                 +f'_{I^f_k(d_k^g)}(x_k,d_k^g,\tilde d,p_k) \\
  \mbox{s.t.} &  bl \leq x_k+d_k^g+\tilde d \leq bu\\
       & g_j(x_k+d_k^g) +\langle \nabla g_j(x_k),\tilde d\rangle\leq
         -\min(\nu\|d_k^g\|,~\|d_k^g\|^{\tau}),
              ~~~j\in I^g_k(d^g_k)\cap\{j:j\leq n_i\}\\
       & \langle c_{j-n_i},x_k+d_k^g + \tilde d \rangle \leq d_{j-n_i},
              ~~~~j\in I^g_k(d^g_k)\cap\{j:j>n_i\}\\
       & h_j(x_k+d_k^g) +\langle \nabla h_j(x_k),\tilde d\rangle\leq
         -\min(\nu\|d_k^g\|,~\|d_k^g\|^{\tau}),
              ~~~j=1,\ldots,n_e\\
       & \langle a_j,x_k+d_k^g + \tilde d \rangle=b_j,
              ~~~~j=1,\ldots,t_e-n_e\end{array}$$
where $f'_{I^f_k(d_k^g)}(x_k,d_k^g,\tilde d,p_k)=f'(x_k,d_k^g+\tilde d,p_k)$ if $n_f=1,$
and $f'_{I^f_k(d_k^g)}(x_k,d_k^g,\tilde d,p_k)=
\tilde{f}'_{I^f_k(d_k^g)}(x_k+d_k^g,x_k,\tilde d,p_k)$ 
if $n_f>1$. If the quadratic program has no solution or 
if $\|\tilde d_k\|>\|d_k^g\|$, set $\tilde d_k=0$.

\item[\it viii.] Set $M=3$, $\delta_k=f'(x_k,d^g_k,p_k)$ if $n_i+n_e\ne 0$,
and $M=2$, $\delta_k=-\langle d^g_k,H_kd^g_k\rangle$ otherwise.
Compute $t_k$, the first number $t$ in 
the sequence $\{1,\beta,\beta^{2},\ldots\}$ satisfying
\smallskip
\begin{eqnarray*}
\textstyle
& f_m(x_{k}+td^g_k+t^{2}\tilde d_k,p_k)\leq 
 \max\limits_{\ell=0,\ldots,M}\{f_m(x_{k-\ell},p_k)\}+
\alpha t \delta_k &\\
& g_{j}(x_{k}+td_k^g+t^{2}\tilde d_{k})\leq0,~~j=1,\ldots,n_i & \\
&\langle c_{j-n_i},x_k+td_k^g +t^2 \tilde{d}_k \rangle \leq d_{j-n_i},
              ~~~~j>n_i~\&~j\not\in I^g_k(d^g_k) &\\
& h_{j}(x_{k}+td_k^g+t^{2}\tilde d_{k})\leq0,~~j=1,\ldots,n_e &
\end{eqnarray*}
and set $x_{k+1}=x_k+t_kd_k^g+t_k^2\tilde d_k.$ \\
Specifically, the line search proceeds as follows.
First, the linear constraints that were not used
in computing $\tilde{d}_k$ are checked until all of them are
satisfied, resulting in a stepsize, say, $\bar{t}_k$. Due to 
the convexity of linear constraints, these constraints
will be satisfied for any $t\leq \bar{t}_k$. Then, for $t=\bar{t}_k$,
nonlinear constraints are checked first and,
for both objectives and constraints, those with nonzero 
multipliers in the QP yielding $d^0_k$ are evaluated first.
For $t<\bar{t}_k$, the function that caused the previous value of $t$ to
be rejected is checked first; all functions of the same type
(``objective'' or ``constraint'') as the latter
will then be checked first.
\end{itemize}

\noindent{\it Step 2. Updates.} 
\begin{itemize}
\item[$\cdot$] If $nset>5n$ and $t_k<\underline t$, set $H_{k+1}=H_0$
and $nset=0$. Otherwise, set $nset=nset+1$ and 
compute a new approximation $H_{k+1}$ 
to the Hessian of the Lagrangian using the BFGS formula with Powell's 
modification\Lcitemark 8\Rcitemark . 
\item[$\cdot$] If $\|d^0_k\|>\underline{d}$, 
set $C_{k+1}=\max \{0.5C_k,\underline{C}\}.$
Otherwise, if $g_j(x_k+d_k^\ell) \leq 0,~~j=1,\ldots,n_i$, 
set $C_{k+1}=C_k$. Otherwise, set $C_{k+1}=10C_k$.
\item[$\cdot$] Solve the unconstrained 
quadratic problem in $\bar{\mu}$
$$\begin{array}{cl}
\min\limits_{\bar{\mu}\in R^{t_e}} &
\|\sum\limits_{j=1}^{n_f}\zeta _{k,j}\nabla f_j(x_k)+
\xi_k+\sum\limits_{j=1}^{t_i}\lambda_{k,j}\nabla g_j(x_k)
  +\sum\limits_{j=1}^{t_e}\bar{\mu}_j\nabla h_j(x_k)\|^2,
\end{array}$$
where the $\zeta_{k,j}$'s, $\xi_k$ and the $\lambda_{k,j}$'s
are the multipliers associated with $QP(x_k,H_k,p_k)$ for the objective
functions, variable bounds, and inequality constraints 
respectively.\footnote{See footnote to corresponding step in description
of FSQP-AL.} 

Update $p_k$ as follows: for $j=1,\ldots,n_e$,
$$p_{k+1,j}=\left\{\begin{array}{ll}
p_{k,j} & \mbox{if } p_{k,j}+\bar\mu_j \geq \epsilon_1\\
\max\{\epsilon_1-\bar\mu_j,~\delta p_{k,j}\} & \mbox{otherwise.}
\end{array}\right.$$
\item[$\cdot$] Increase $k$ by 1.
\item[$\cdot$] Go back to {\it Step 1}.
\end{itemize}

\hfill{\large \bf $\Box$}

\noindent{\bf Remark:} In both algorithms, the Hessian matrix
estimate is reset to the identity whenever the stepsize becomes
too small ($t_k<\underline t$) after the matrix has been updated
over more than $5n$ iterations ($nset>5n$).
This is helpful in some situations where the Hessian matrix
becomes singular.
 
\vspace{1em}
If the initial guess $x_0$ provided by the user is not feasible
for some inequality constraint or some linear equality constraint,
FSQP first solves a strictly convex quadratic program
\smallskip
$$\begin{array}{cl}
           \min\limits_{v\in R^n} &  \langle v,v\rangle \\
            \mbox{s.t.} &   bl \leq x_0+v \leq bu\\
                  &   \langle c_j,x_0 + v \rangle \leq d_j,
                        ~~~j=1,\ldots,t_i-n_i\\
                  &   \langle a_j,x_0 + v \rangle=b_j,
                      ~~~j=1,\ldots,t_e-n_e.  \end{array}$$

\vspace{.5em}
\noindent{}Then, starting from the point $x=x_0+v$, it will iterate, 
using algorithm FSQP-AL, on the problem
\smallskip
$$\begin{array}{cl}
    \min\limits_{x\in R^n} &  \max\limits_{j=1,\ldots,n_i}\{g_j(x)\} \\
    \mbox{s.t.}  & ~~bl \leq x \leq bu\\
         & ~~\langle c_j,x\rangle \leq d_j,~~~j=1,\ldots,t_i-n_i\\
         & ~~\langle a_j,x \rangle =b_j,~~~j=1,\ldots , t_e-n_e 
            \end{array}$$
until $\max\limits_{j=1,\ldots,n_i}\{g_j(x)\} \leq 0$ is achieved.
The corresponding iterate $x$ will then be feasible 
for all constraints other than nonlinear equality constraints of 
the original problem. 

\section{Specification of Subroutine FSQPD 3.1}
Only a double-precision version of FSQP, FSQPD, is currently available.
The specification of FSQPD is as follows:
\vspace{1em}
\begin{quote}
\begin{verbatim}
  subroutine FSQPD(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,miter,
 *                 inform,bigbnd,eps,epseqn,udelta,bl,bu,x,f,g,
 *                 iw,iwsize,w,nwsize,obj,constr,gradob,gradcn)
  integer nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,miter,inform,
 *        iwsize,nwsize
  integer iw(iwsize)
  double  precision bigbnd,eps,epseqn,udelta
  double  precision bl(nparam),bu(nparam),x(nparam),
 *        f(nf),g(nineq+neq),w(nwsize)
  external obj,constr,gradob,gradcn
\end{verbatim}
\end{quote}
\vspace{1em}
{\bf Important:} all real variables and arrays must be declared as 
double precision in the routine that calls FSQPD. The following are 
specifications of parameters and workspace.

\vspace{1em}
\begin{description}
\item[\tt nparam] {\bf (Input)}~Number of free variables,
                  i.e., the dimension of {\tt x}.
\item[\tt nf]   {\bf (Input)}~Number of objective 
                   functions ($n_f$ in the algorithm description).
\item[\tt nineqn]    {\bf (Input)}~Number (possibly zero) of 
                       nonlinear inequality constraints ($n_i$ in the
                      algorithm description). 
\item[\tt nineq]  {\bf (Input)}~Total number (possibly equal 
                       to {\tt nineqn}) of 
                       inequality constraints ($t_i$ in the algorithm 
                  description).
\item[\tt neqn]    {\bf (Input)}~Number (possibly zero) of 
                       nonlinear equality constraints ($n_e$ in the
                      algorithm description). 
\item[\tt neq]    {\bf (Input)}~Total number (possibly equal to {\tt neqn}) of 
                  equality constraints ($t_e$ in the algorithm 
                  description).
\item[\tt mode]   {\bf (Input)}~${\tt mode} = 1BA$ with the following 
                  meanings:
                  \begin{quote}
                  \begin{quote}
                  \begin{quote}
                  \begin{itemize}
                  \item[${\tt A} = 0$~:~~] $(P)$ is to be solved.
                  \item[${\tt A} = 1$~:~~] $(PL_\infty)$ is to be solved. 
                  $(PL_\infty)$ is defined as follows
$$
  (PL_\infty)~~~~~ \min ~ \max\limits_{i\in I^f} |f_i(x)| 
                             \mbox{~~~s.t.~~}x\in X
$$
                  where $X$ is the same as for $(P).$ It is handled
                  in this code by splitting $|f_i(x)|$ as $f_i(x)$
                  and $-f_i(x)$ for each $i.$ The user is required
                  to provide only $f_i(x)$ for $i\in I^f$.
                  \item[${\tt B} = 0$~:~~]Algorithm FSQP-AL is 
                                           selected, resulting in a 
                                           decrease of the (modified)
                                           objective function at each
                                           iteration.
                  \item[${\tt B} = 1$~:~~]Algorithm FSQP-NL is 
                                           selected, resulting in a 
                                           decrease of the (modified) objective
                                           function within at 
                                           most four iterations (or three
                                           iterations, see Algorithm FSQP-NL).
                  \end{itemize}
                  \end{quote}
                  \end{quote}
                  \end{quote}
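                  For example, ${\tt mode}=110$ selects Algorithm
                  FSQP-NL (${\tt B}=1$) for problem $(P)$
                  (${\tt A}=0$), while ${\tt mode}=100$ selects
                  Algorithm FSQP-AL for the same problem.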
\item[\tt iprint] {\bf (Input)}~Parameter indicating the 
                  desired output (see \S 4 for details):
                  \begin{quote}
                  \begin{quote}
                  \begin{quote}
                  \begin{itemize}
                  \item[~~${\tt iprint} =0$~:~~] No information except 
                                for user-input errors is displayed. This value
                                is imposed during phase 1.
                  \item[~~${\tt iprint} =1$~:~~] 
                                Objective and constraint values 
                                at the initial feasible point are displayed.
                                At the end of execution, status ({\tt inform}),
                                iterate, objective values, constraint values,
                                number of evaluations of objectives and 
                                nonlinear constraints, norm of the Kuhn-Tucker 
                                vector, and sum of feasibility violation
                                are displayed.
                  \item[~~${\tt iprint} =2$~:~~] At the end of each 
                                iteration, the same information as with
                                ${\tt iprint}=1$ is displayed.
                  \item[~~${\tt iprint} =3$~:~~] At each iteration, 
                                the same information as with ${\tt iprint}=2$, 
                                including detailed information on the search 
                                direction computation, on the line search,
                                and on the update is displayed.
                  \end{itemize}
                  \end{quote}
                  \end{quote}
                  \end{quote}
\item[\tt miter] {\bf (Input)}~Maximum number of iterations
allowed by the user before termination of execution.
\item[\tt inform] {\bf (Output)}~Parameter indicating the status of
                   the execution of FSQPD:
                   \begin{quote}
                   \begin{quote}
                   \begin{quote}
                   \begin{itemize}
                  \item[~~${\tt inform} = 0$~:~] Normal termination of 
                               execution in the sense that 
                         $\|d^0\|\leq {\tt eps}$
                         and (if ${\tt neqn} \ne 0$) 
                         $\sum_{j=1}^{n_e}|h_j(x)|\leq {\tt epseqn}$.
                  \item[~~${\tt inform} = 1$~:~] The user-provided 
                                                 initial guess
                                                 is infeasible for
                                                 linear constraints and 
                                                 FSQPD is unable to 
                                                 generate a point
                                                 satisfying all these 
                                                 constraints.
                  \item[~~${\tt inform} = 2$~:~] The user-provided 
                                                 initial guess
                                                 is infeasible for nonlinear 
                                                 inequality constraints and
                                                 linear constraints; and 
                                                 FSQPD is unable to 
                                                 generate a point
                                                 satisfying all these
                                                 constraints.
                  \item[~~${\tt inform} = 3$~:~] The maximum 
                                               number~{\tt miter} 
                                               of iterations has been 
                                               reached before a 
                                               solution is obtained.
                  \item[~~${\tt inform} = 4$~:~] The line search fails 
                                               to find a new 
                                               iterate (trial step size 
                                                being 
                                            smaller than the machine 
                                            precision 
                                        {\tt epsmac} computed by FSQPD).
                  \item[~~${\tt inform} = 5$~:~] Failure of the QP solver
                                                 in attempting 
                                                to construct $d^0$. A more
                                                robust QP solver may succeed.
                  \item[~~${\tt inform} = 6$~:~] Failure of the QP solver
                                                 in attempting 
                                               to construct $d^1$. A more
                                                robust QP solver may succeed.
                  \item[~~${\tt inform} = 7$~:~] Input data are not 
                                                consistent~(with 
                                                 printout
                                                indicating the error).
                   \end{itemize}
                   \end{quote}
                   \end{quote}
                   \end{quote}
\item[\tt bigbnd]  {\bf (Input)}~(See also {\tt bl} 
                  and {\tt bu} below.)~It plays the role of 
                  an infinite bound.
\item[\tt eps]    {\bf (Input)}~Final norm requirement for 
%                  the Kuhn-Tucker vector ($\epsilon$ in the 
                  the Newton direction $d_k^0$ ($\epsilon$ in the 
                  algorithm description). It must be bigger 
                  than the machine
                  precision {\tt epsmac} (computed by FSQPD).
                  (If the user does not have a good sense of
                  what value should be chosen, a very small
                  number can be provided and $\mbox{\tt iprint}=2$
                  selected so that the user can keep track of 
                  the progress of the optimization and terminate
                  FSQPD at an appropriate time.)
\item[\tt epseqn] {\bf (Input)}~Maximum violation of nonlinear equality
                  constraints allowed by the user at an optimal point
                  ($\epsilon_e$ in the algorithm description). 
                  It is in effect only if $n_e\ne 0$ and
                  must be bigger than the machine
                  precision {\tt epsmac} (computed by FSQPD). 
\item[\tt udelta]  {\bf (Input)}~The perturbation 
                  size the user suggests using in 
                  approximating gradients by finite differences.
                  The perturbation size actually used is defined by 
$\mbox{sign}(x^i)\times\max \{{\tt udelta},~
                  {\tt rteps}\times \max (1,\,|x^i|)\}$~
                  for each component $x^i$ of $x$ ({\tt rteps} 
                  is the square root of {\tt epsmac}). {\tt udelta}
                  should be set to zero if the user has no idea
                  how to choose it.
\item[\tt bl]     {\bf (Input)}~Array of 
                  dimension {\tt nparam} containing
                  lower bounds for the components of {\tt x}. 
                  To specify a non-existent lower 
                  bound (i.e., ${\tt bl}(j)=-\infty$ for 
                  some $j$), the value used must 
                  satisfy ${\tt bl}(j)\leq -{\tt bigbnd}$.
\item[\tt bu]     {\bf (Input)}~Array of 
                  dimension {\tt nparam} containing
                  upper bounds for the components of {\tt x}. 
                  To specify a non-existent upper 
                  bound (i.e., ${\tt bu}(j)=\infty$ for 
                  some $j$), the value used must 
                  satisfy ${\tt bu}(j)\geq {\tt bigbnd}$.
\item[\tt x]      {\bf (Input)}~Initial guess.\\
                  {\bf (Output)}~Iterate at the end of execution. 
\item[\tt f]      Array of dimension $\max\{1, {\tt nf}\}$.\\
                  {\bf (Output)}~Values of the functions 
                  $f_i$, $i=1,\ldots,n_f$, at {\tt x} at the end of 
                  execution.
\item[\tt g]      Array of dimension $\max\{1,{\tt nineq}+{\tt neq}\}$.\\
                  {\bf (Output)}~Values of constraints at {\tt x} at 
                   the end of execution.
\item[\tt iw]  Workspace vector of dimension {\tt iwsize}.
\item[\tt iwsize]  {\bf (Input)}~Workspace length 
                for {\tt iw}. It must be at least as big as
       $6\times {\tt nparam}+8\times ({\tt nineq}+{\tt neq})
       +7\times{\tt nf}+30$. This estimate is usually very conservative
       and the smallest suitable value will be
       displayed if the user-supplied value is too small.
\item[\tt w]      {\bf (Input)}~Workspace of dimension {\tt nwsize}. \\
                  {\bf (Output)}~Estimate of Lagrange multipliers at 
                  the end of execution of phase 2 in the 
                  first ${\tt nparam}+{\tt nineq+neq+nff}$ entries;
            where ${\tt nff}=0$ if (in {\tt mode}) ${\tt A}=0$ and
          ${\tt nf}=1$, and ${\tt nff}={\tt nf}$ otherwise.
       They are ordered as $\xi$'s (variables), $\lambda$'s (inequality
      constraints), $\mu$'s (equality constraints), and $\zeta$ 
      (objective functions).
        $\lambda _j \geq 0~~\forall j=1,\ldots,t_i$
        and $\mu _j \ge 0~~\forall j=1,\ldots,t_e.$ $\xi _i > 0$
        indicates that $x_i$ reaches its upper bound and $\xi _i <0$
        indicates that $x_i$ reaches its lower bound. When
        (in {\tt mode}) ${\tt A}=0$ and ${\tt nf}>1$, $\zeta _i \geq0.$
        When (in {\tt mode}) ${\tt A}=1$, $\zeta _i >0$ refers to
        $+f_i(x)$ and $\zeta _i<0$ to $-f_i(x)$.
\item[\tt nwsize] {\bf (Input)}~Workspace length for {\tt w}. 
                   It must be at least as big as
                    $4\times {\tt nparam}^{2}+
                    5\times ({\tt nineq}+{\tt neq})\times{\tt nparam}+
                    3\times{\tt nf}\times{\tt nparam}+
         $26\times ({\tt nparam}+{\tt nf})+45\times ({\tt nineq}+{\tt neq})+100$.
         This estimate
         is usually very conservative and the
         smallest suitable value will be 
         displayed if the user-supplied value is too small.
\item[\tt obj]   {\bf (Input)}~Name of the user-defined subroutine
                 that computes the value of the objective 
        functions $f_i(x),~~\forall i=1,\ldots,n_f.$ This name must
        be declared as {\bf external} in the calling routine 
        and passed as an argument to FSQPD.
        The detailed specification is given in \S 5.1 below.
\item[\tt constr]   {\bf (Input)}~Name of the user-defined subroutine
        that computes the value of the constraints. This name must
        be declared as {\bf external} in the calling routine 
        and passed as an argument to FSQPD.
        The detailed specification is given in \S 5.2 below. 
\item[\tt gradob]   {\bf (Input)}~Name of the subroutine that
        computes the gradients of the objective 
        functions $f_i(x),~~\forall i=1,\ldots,n_f.$ This name must
        be declared as {\bf external} in the calling routine 
        and passed as an argument to FSQPD.
        The user must pass the subroutine name 
        {\tt grobfd}~(and declare it as {\bf external}), 
        if he/she wishes that FSQPD evaluate
        these gradients automatically, by forward finite differences.
        The detailed specification is given in \S 5.3 below.
\item[\tt gradcn]   {\bf (Input)}~Name of the subroutine that
          computes the gradients of the constraints. 
          This name must be declared as {\bf external} in the calling 
          routine and passed as an argument to FSQPD.
          The user must pass the subroutine name {\tt grcnfd}~(and
          declare it as {\bf external}), if he/she wishes that 
          FSQPD evaluate these gradients automatically, 
          by forward finite differences.
          The detailed specification is given in \S 5.4 below.
\end{description}
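
As an illustration, a minimal, hypothetical driver (not part of the
FSQP distribution) for a problem with two variables, two objectives,
and one nonlinear plus one linear inequality constraint could read as
follows. It assumes user subroutines {\tt obj} and {\tt constr}
written to the specifications of \S 5.1 and \S 5.2, and lets FSQPD
estimate gradients by forward finite differences via {\tt grobfd} and
{\tt grcnfd}. The workspace sizes satisfy the formulas above
(${\tt iwsize}\geq 72$ and ${\tt nwsize}\geq 342$ for these
dimensions).
\begin{quote}
\begin{verbatim}
      program sample
      integer iwsize,nwsize
      parameter (iwsize=100, nwsize=500)
      integer nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,
     *        miter,inform
      integer iw(iwsize)
      double precision bigbnd,eps,epseqn,udelta
      double precision bl(2),bu(2),x(2),f(2),g(2),w(nwsize)
      external obj,constr,grobfd,grcnfd
c
c     mode = 1BA with A = 0 (solve (P)) and B = 0 (FSQP-AL)
      nparam = 2
      nf     = 2
      nineqn = 1
      nineq  = 2
      neqn   = 0
      neq    = 0
      mode   = 100
      iprint = 1
      miter  = 500
      bigbnd = 1.d+10
      eps    = 1.d-8
      epseqn = 1.d-8
      udelta = 0.d0
c     no bounds: bl(j) <= -bigbnd, bu(j) >= bigbnd
      bl(1) = -bigbnd
      bl(2) = -bigbnd
      bu(1) =  bigbnd
      bu(2) =  bigbnd
c     initial guess
      x(1) = 1.d0
      x(2) = 1.d0
      call FSQPD(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,
     *           miter,inform,bigbnd,eps,epseqn,udelta,bl,bu,
     *           x,f,g,iw,iwsize,w,nwsize,obj,constr,grobfd,
     *           grcnfd)
      stop
      end
\end{verbatim}
\end{quote}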

\section{User-Accessible Stopping Criterion}
As is clear from the two algorithms, the optimization process
normally terminates if both 
$\|d_k^0\|\leq\epsilon$
and $\sum_{j=1}^{n_e}|h_j(x_k)|\leq\epsilon_e$ are satisfied. 
A very small value of either of these two parameters may entail an
exceedingly long execution time, depending on the complexity
of the underlying problem and the nonlinearity of the various functions. 
FSQP allows users to specify their own stopping criterion in any one of 
the four user-supplied subroutines mentioned above via the following
common block
\begin{verbatim}
            integer nstop
            common /fsqpst/nstop
\end{verbatim}
if (s)he wishes to.
${\tt nstop}=0$ should be returned to FSQP when the stopping criterion 
is satisfied.  FSQP will check the value of {\tt nstop} at appropriate places 
during the optimization process and will terminate when
either the user's criterion or the default criterion is satisfied.
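
For instance, a user who wishes to force termination as soon as the
first objective falls below some threshold (here the hypothetical
value $10^{-3}$) could set {\tt nstop} inside {\tt obj}; a minimal
sketch is:
\begin{quote}
\begin{verbatim}
      subroutine obj(nparam,j,x,fj)
      integer nparam,j
      double precision x(nparam),fj
      integer nstop
      common /fsqpst/nstop
c
c     hypothetical objective with a user stopping criterion:
c     request termination once f1(x) drops below 1.d-3
c
      fj = x(1)**2 + x(2)**2
      if (j .eq. 1 .and. fj .lt. 1.d-3) nstop = 0
      return
      end
\end{verbatim}
\end{quote}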

\section{Description of the Output}
No output will be displayed before a feasible starting 
point is obtained. The following information is displayed 
at the end of execution when 
${\tt iprint} = 1$ or at each iteration when ${\tt iprint}=2$:
\begin{description}
\item[\tt iteration]  Total number of iterations (${\tt iprint}=1$) or
                   iteration number (${\tt iprint}=2$).
\item[\tt inform]  See \S 3. It is displayed only
                   at the end of execution.
\item[\tt x]       Iterate.
\item[\tt objectives]  Value of objective functions $f_i(x),~~\forall 
                    i=1,\ldots,n_f$ at {\tt x}. 
\item[\tt objmax]  (displayed only if $\mbox{\tt nf} > 1$)~The 
                   maximum value of the set of objective 
        functions (i.e., $\max f_i(x) \mbox{ or } \max |f_i(x)|,~~
        \forall i=1,\ldots,n_f$) at {\tt x}.
\item[\tt objective max4]  (displayed only if $\mbox{\tt B} = 1$ 
                   in {\tt mode})~Largest value of
                   the maximum of the objective functions over the 
                   last four (or three, see FSQP-NL) 
                   iterations (including the current one).
\item[\tt constraints] Values of the constraints at {\tt x}.
\item[\tt ncallf]  Number of evaluations (so far) of the
                   individual~(scalar) objective functions $f_i(x)$
                   for $1\leq i \leq n_f.$
\item[\tt ncallg]  Number of evaluations (so far) of the
                   individual~(scalar) nonlinear constraints.
\item[\tt d0norm]  Norm of the Newton direction $d_k^0$.
\item[\tt ktnorm]  Norm of the Kuhn-Tucker vector at the current 
                   iteration. The Kuhn-Tucker vector is given by
$$\begin{array}{lll}
\nabla L(x_k,\zeta_k,\xi_k,\lambda_k,\mu_k,p_k)& = &
\sum\limits_{j=1}^{n_f} \zeta _{k,j}\nabla f_j(x_k)+
\xi_k+\sum\limits_{j=1}^{t_i}\lambda _{k,j}\nabla g_j(x_k) \\
& &~+\sum\limits_{j=1}^{n_e}(\mu_{k,j}-p_{k,j})\nabla h_j(x_k)
    +\sum\limits_{j=n_e+1}^{t_e}\mu_{k,j}\nabla h_j(x_k).\end{array}$$
\item[\tt SCV]     Sum of the violations of the nonlinear equality
constraints at a solution.
\end{description}

\noindent For ${\tt iprint}=3$, the following is printed at every
         iteration, in addition to the information displayed
         for ${\tt iprint}=2$.

\vspace{1em}
Details in the computation of a search direction:
\begin{description}
\item[\tt d0]      Quasi-Newton direction $d^0_k$.
\item[\tt d1]      First order direction $d^1_k$.
\item[\tt d1norm]  Norm of $d^1_k$.
\item[\tt d]       (${\tt B}=0$ in {\tt mode})~Feasible descent 
                   direction $d_k=(1-\rho _k)d^0_k+\rho _k d^1_k$.
\item[\tt dnorm]   (${\tt B}=0$ in {\tt mode})~Norm of $d_k$.
\item[\tt rho]     (${\tt B}=0$ in {\tt mode})~Coefficient $\rho_k$ in 
                   constructing $d_k$.
\item[\tt dl]      (${\tt B}=1$ in {\tt mode})~Local direction 
                    $d^\ell_k=(1-\rho^\ell_k)d_k^0+\rho^\ell_kd^1_k$.
\item[\tt dlnorm]  (${\tt B}=1$ in {\tt mode})~Norm of $d_k^\ell$.
\item[\tt rhol]    (${\tt B}=1$ in {\tt mode})~Coefficient $\rho_k^{\ell}$ in
                   constructing $d_k^{\ell}$.
\item[\tt dg]      (${\tt B}=1$ in {\tt mode})~Global search direction 
                    $d^g_k=(1-\rho^g_k)d_k^0+\rho^g_kd^1_k$.
\item[\tt dgnorm]  (${\tt B}=1$ in {\tt mode})~Norm of $d_k^g$.
\item[\tt rhog]    (${\tt B}=1$ in {\tt mode})~Coefficient $\rho_k^g$ in 
                   constructing $d_k^g$.
\item[\tt dtilde]  Second order correction $\tilde d_k$.
\item[\tt dtnorm]  Norm of $\tilde d_k$.
\end{description}

Details in the line search:
\begin{description}
\item[\tt trial step]  Trial steplength $t$ in the search direction.
\item[\tt trial point] Trial iterate along the search arc 
                       with {\tt trial step}.
\item[\tt trial objectives] This gives the indices $i$ and the
                            corresponding values of the functions
                   $f_i(x)-\sum_{j=1}^{n_e}p_jh_j(x)$
                   for $1\leq i \leq n_f$, up to the first one that
                   fails the line search test at the {\tt trial point}.
                   The indices $i$ are not necessarily in the natural
                   order (see the remark at the end of {\it Step 2} in
                   FSQP-AL and at the end of {\it Step~1~viii}\ in
                   FSQP-NL).
\item[\tt trial constraints] This gives the indices $j$ and the
                   corresponding values of the nonlinear constraints
                   for $1\leq j \leq n_i+n_e$, up to the first one
                   that is not feasible at the {\tt trial point}.
                   The indices $j$ are not necessarily in the natural
                   order (see the remark at the end of {\it Step 2} in
                   FSQP-AL and at the end of {\it Step~1~viii}\ in
                   FSQP-NL).
\end{description}

Details in the updates:
\begin{description}
\item[\tt delta]  Perturbation size for each variable in the
                  finite difference computation of gradients.
\item[\tt gradf]  Gradients of 
                  functions $f_i(x),~\forall i=1,\ldots,n_f,$ 
                  at the new iterate.
\item[\tt gradg]  Gradients of constraints at the new iterate.
\item[\tt p]      Penalty parameters for nonlinear equality constraints at
                  the new iterate.
\item[\tt multipliers] Multiplier estimates ordered as $\xi$'s, 
        $\lambda$'s, $\mu$'s, and $\zeta$'s (from quadratic program
        computing $d^0_k$). $\lambda _j \geq 0~~\forall j=1,\ldots,t_i$
        and $\mu _j \ge 0~~\forall j=1,\ldots,t_e$. $\xi _i > 0$
        indicates that $x_i$ reaches its upper bound and $\xi _i <0$
        indicates that $x_i$ reaches its lower bound. When
        (in {\tt mode}) ${\tt A}=0$ and ${\tt nf}>1$, $\zeta _i \geq0$.
        When (in {\tt mode}) ${\tt A}=1$, $\zeta _i >0$ refers to
        $+f_i(x)$ and $\zeta _i<0$ to $-f_i(x)$. 
       (cf.\ \S 3 under item {\tt w}.)
\item[\tt hess]   Estimate of the Hessian matrix of the Lagrangian.
\item[\tt Ck]     The value $C_k$ as defined in Algorithm FSQP-NL.
\end{description}

\section{User-Supplied Subroutines}
At least two of the following four Fortran 77 subroutines, 
namely {\tt obj} and {\tt constr}, 
must be provided by the user in order to define the problem. 
The names of all four routines may be chosen freely by the user,
as they are passed as arguments to FSQPD.

\subsection{Subroutine obj}
The subroutine {\bf obj}, to be provided by the user, 
computes the value of the objective functions. 
If $\mbox{\tt nf}=0$ (as may happen when the user is only interested
in finding a feasible point), a (dummy) subroutine must still be
provided to satisfy Fortran 77 compilation requirements.
The specification of {\bf obj} for FSQPD is
\begin{quote}
\begin{verbatim}
          subroutine obj(nparam,j,x,fj)
          integer nparam,j
          double precision x(nparam),fj
    c     
    c     for given j, assign to fj the value of the jth objective
    c     evaluated at x 
    c
          return
          end
\end{verbatim}
\end{quote}
\noindent Arguments:
\begin{description}
\item[\tt nparam] {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]      {\bf (Input)}~Number of the objective to be computed.
\item[\tt x]      {\bf (Input)}~Current iterate.
\item[\tt fj]     {\bf (Output)}~Value of the {\tt j}th objective function
                  at {\tt x}.
\end{description}
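
As an illustration, a sketch of {\tt obj} for a hypothetical problem
with the two objectives $f_1(x)=(x_1-1)^2+x_2^2$ and
$f_2(x)=x_1^2+(x_2-1)^2$ (chosen here purely for the sake of the
example) might read:
\begin{quote}
\begin{verbatim}
      subroutine obj(nparam,j,x,fj)
      integer nparam,j
      double precision x(nparam),fj
c     f1(x) = (x1-1)^2 + x2^2,  f2(x) = x1^2 + (x2-1)^2
      if (j .eq. 1) fj = (x(1)-1.d0)**2 + x(2)**2
      if (j .eq. 2) fj = x(1)**2 + (x(2)-1.d0)**2
      return
      end
\end{verbatim}
\end{quote}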

\subsection{Subroutine constr}
The subroutine {\bf constr}, to be provided by the user, 
computes the value of 
the constraints. If there are no constraints, a (dummy) subroutine
must still be provided to satisfy Fortran 77 compilation requirements.
The specification of {\tt constr} for FSQPD is as follows
\begin{quote}
\begin{verbatim}
         subroutine constr(nparam,j,x,gj)
         integer nparam,j
         double precision x(nparam),gj
   c
   c     for given j, assign to gj the value of the jth constraint 
   c     evaluated at x
   c
         return
         end
\end{verbatim}
\end{quote}
\noindent Arguments:
\begin{description}
\item[\tt nparam]  {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]    {\bf (Input)}~Number of the constraint to be computed. 
\item[\tt x]    {\bf (Input)}~Current iterate.
\item[\tt gj]   {\bf (Output)}~Value of the {\tt j}th constraint at {\tt x}.
\end{description}
\bigskip
The order of the constraints must be as follows. 
First the {\tt nineqn} (possibly zero) nonlinear inequality constraints. 
Then the ${\tt nineq-nineqn}$ (possibly zero) linear inequality constraints. 
Finally, the {\tt neqn} (possibly zero) nonlinear equality constraints
followed by the ${\tt neq-neqn}$ (possibly zero) linear equality constraints.
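
For instance, with ${\tt nineqn}=1$, ${\tt nineq}=2$ and
${\tt neq}=0$ (a hypothetical problem with one nonlinear and one
linear inequality constraint, used here only for illustration),
{\tt constr} might be sketched as:
\begin{quote}
\begin{verbatim}
      subroutine constr(nparam,j,x,gj)
      integer nparam,j
      double precision x(nparam),gj
c     j = 1: nonlinear inequality  x1^2 + x2^2 - 2 <= 0
c     j = 2: linear inequality    -x1 <= 0
      if (j .eq. 1) gj = x(1)**2 + x(2)**2 - 2.d0
      if (j .eq. 2) gj = -x(1)
      return
      end
\end{verbatim}
\end{quote}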

\subsection{Subroutine gradob} 
The subroutine {\bf gradob} computes the gradients of the 
objective functions.
The user may omit this routine and instead require that FSQPD
use forward finite difference approximations, by passing
{\tt grobfd} in its place~(see the argument {\tt gradob}
of FSQPD in \S 3).
The specification of {\tt gradob} for FSQPD is as follows
\begin{quote}
\begin{verbatim}
      subroutine gradob(nparam,j,x,gradfj,dummy)
      integer nparam,j
      double precision x(nparam),gradfj(nparam)
      double precision dummy
      external dummy
c     
c     assign to gradfj the gradient of the jth objective function 
c     evaluated at x
c
      return
      end
\end{verbatim}
\end{quote}
\noindent{Arguments}:
\begin{description}
\item[\tt nparam] {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]      {\bf (Input)}~Number of objective for 
                  which gradient is to be computed.
\item[\tt x]      {\bf (Input)}~Current iterate.
\item[\tt gradfj]  {\bf (Output)}~Gradient of the {\tt j}th objective 
                   function at {\tt x}.
\item[\tt dummy]  {\bf (Input)}~Used by {\tt grobfd}.
\end{description}
Note that {\tt dummy} is passed as an argument to {\tt gradob}
to allow for forward finite difference computation of the gradient.
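
For the hypothetical two-objective example sketched in \S 5.1, an
analytic {\tt gradob} could be written as follows:
\begin{quote}
\begin{verbatim}
      subroutine gradob(nparam,j,x,gradfj,dummy)
      integer nparam,j
      double precision x(nparam),gradfj(nparam)
      double precision dummy
      external dummy
c     gradients of f1(x) = (x1-1)^2 + x2^2
c     and          f2(x) = x1^2 + (x2-1)^2
      if (j .eq. 1) then
         gradfj(1) = 2.d0*(x(1)-1.d0)
         gradfj(2) = 2.d0*x(2)
      else
         gradfj(1) = 2.d0*x(1)
         gradfj(2) = 2.d0*(x(2)-1.d0)
      endif
      return
      end
\end{verbatim}
\end{quote}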

\subsection{Subroutine gradcn}
The subroutine {\bf gradcn} computes the gradients of the constraints. 
The user may omit this routine and instead require that FSQPD
use forward finite difference approximations, by passing
{\tt grcnfd} in its place (see the argument {\tt gradcn} of FSQPD in \S 3).
The specification of {\tt gradcn} for FSQPD is as follows
\begin{quote}
\begin{verbatim}
      subroutine gradcn (nparam,j,x,gradgj,dummy)
      integer nparam,j
      double precision x(nparam),gradgj(nparam)
      double precision dummy
      external dummy
c
c     assign to gradgj the gradient of the jth constraint
c     evaluated at x
c
      return
      end
\end{verbatim}
\end{quote}
\noindent{Arguments}:
\begin{description}
\item[\tt nparam]  {\bf (Input)}~Dimension of {\tt x}.
\item[\tt j]     {\bf (Input)}~Number of constraint for which
                               gradient is to be computed. 
\item[\tt x]       {\bf (Input)}~Current iterate.
\item[\tt gradgj]   {\bf (Output)}~Gradient of the {\tt j}th
                    constraint evaluated at {\tt x}.
\item[\tt dummy]  {\bf (Input)}~Used by {\tt grcnfd}.
\end{description}

\noindent Note that {\tt dummy} is passed as an argument
to {\tt gradcn} to allow for forward finite difference 
computation of the gradients.
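
Similarly, for the hypothetical constraints sketched in \S 5.2, an
analytic {\tt gradcn} might read:
\begin{quote}
\begin{verbatim}
      subroutine gradcn(nparam,j,x,gradgj,dummy)
      integer nparam,j
      double precision x(nparam),gradgj(nparam)
      double precision dummy
      external dummy
c     gradients of g1(x) = x1^2 + x2^2 - 2  and  g2(x) = -x1
      if (j .eq. 1) then
         gradgj(1) = 2.d0*x(1)
         gradgj(2) = 2.d0*x(2)
      else
         gradgj(1) = -1.d0
         gradgj(2) = 0.d0
      endif
      return
      end
\end{verbatim}
\end{quote}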

\section{Organization of FSQPD and Main Subroutines}
\subsection{Main Subroutines}
FSQPD first checks the input parameters for inconsistencies using the 
subroutine {\tt check}. It then checks whether the starting 
point given by the user satisfies the linear 
constraints and, if not, generates a point 
satisfying these constraints using the
subroutine {\tt initpt}. It then calls FSQPD1 to generate a point
satisfying the linear and nonlinear inequality constraints. Finally, 
it calls FSQPD1 again in an attempt to find
a point satisfying the optimality conditions.
\begin{description}
\item[\tt check] Check that all upper bounds on variables 
                 are no smaller than lower bounds; 
                 check that all input integers are nonnegative
                 and appropriate (${\tt nineq} \geq {\tt nineqn}$, etc.);
                 and check that {\tt eps} ($\epsilon$) 
                 and (if ${\tt neqn}\ne 0$) {\tt epseqn} 
                 ($\epsilon_e$) are at least as large as 
                 the machine precision {\tt epsmac} (computed by FSQPD).
\item[\tt initpt] Attempt to generate a feasible point satisfying 
                 simple bounds and all linear constraints.
\item[\tt FSQPD1] Main subroutine used possibly twice by FSQPD, 
                  first for generating
                  a feasible iterate as explained at the 
                  end of \S 2 and
                  second for generating an optimal iterate 
                  from that feasible iterate.
\end{description}
FSQPD1 uses the following subroutines:
\begin{description}
\item[\tt dir] Compute the various directions $d_k^0$, $d^1_k$ and $\tilde d_k$.
\item[\tt step] Compute a step size along a certain search direction. 
               It is also called to check if $x_k+d_k^\ell$ is acceptable 
               in {\it Step 1 v} of Algorithm FSQP-NL.
\item[\tt hesian] Perform the Hessian matrix updating.
\item[\tt out] Print the output for ${\tt iprint}=1$ 
                  or ${\tt iprint}=2$.
\item[\tt grobfd] (optional)~Compute the gradient of an objective 
                function 
                by forward finite differences with mesh size equal to 
$\mbox{sign}(x^i)\times\max \{{\tt udelta},~
               {\tt rteps}\times\max (1,\,|x^i|)\}$~
                for each component $x^i$ of $x$ ({\tt rteps} is the 
                square root of {\tt epsmac}, the machine 
                precision computed by FSQPD).
\item[\tt grcnfd]  (optional)~Compute the gradient of a constraint by 
                   forward finite differences with mesh size equal to 
$\mbox{sign}(x^i)\times\max \{{\tt udelta},~
                {\tt rteps}\times\max (1,\,|x^i|)\}$~
                for each component $x^i$ of $x$ ({\tt rteps} is the 
                square root of {\tt epsmac}, the machine 
                precision computed by FSQPD).
\end{description}
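
In Fortran terms, the mesh size used for a component $x^i$ of $x$
amounts to the following sketch (the variable names {\tt xi} and
{\tt delta} are illustrative, not those used internally by FSQPD):
\begin{quote}
\begin{verbatim}
c     finite difference mesh size for component xi of x:
c     sign(xi) * max{udelta, rteps*max(1,|xi|)}
      delta = dmax1(udelta, rteps*dmax1(1.d0, dabs(xi)))
      if (xi .lt. 0.d0) delta = -delta
\end{verbatim}
\end{quote}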

\subsection{Other Subroutines}
In addition to QLD, the following subroutines are used:
\begin{verbatim}
  diagnl  di1     dqp     error   estlam  fool    fuscmp  indexs  matrcp
  matrvc  nullvc  resign  sbout1  sbout2  scaprd  shift   slope   small
\end{verbatim}

\subsection{Reserved Common Blocks}
The following named common blocks are used in FSQPD and QLD:
\begin{verbatim}
  fsqpp1  fsqpp2  fsqpp3  fsqpq1  fsqpq2  fsqplo  fsqpqp  fsqpst  CMACHE
\end{verbatim}


\input manua2