A local Jacobian smoothing method for solving Nonlinear Complementarity Problems

In this paper, we present a smoothing of a family of nonlinear complementarity functions and use its properties, in combination with the smooth Jacobian strategy, to present a new generalized Newton-type algorithm for solving a nonsmooth system of equations equivalent to the Nonlinear Complementarity Problem. In addition, we prove that the algorithm converges locally and q-quadratically, and we analyze its numerical performance.


Introduction
Let F : ℝⁿ → ℝⁿ be a continuously differentiable function. The Nonlinear Complementarity Problem, NCP for short, is to find a vector x ∈ ℝⁿ satisfying the conditions

x ≥ 0,  F(x) ≥ 0,  xᵀF(x) = 0.

We say that a vector in ℝⁿ is nonnegative if each of its components is nonnegative.
There are numerous and varied applications of the NCP in Engineering [1,2] and Economics [3,4]. In this last area, complementarity and economic equilibrium are synonymous.
One technique, perhaps the most popular, to solve nonlinear complementarity problems is to rewrite them equivalently as a nonlinear system of equations. In this process, one uses a function ϕ : ℝ² → ℝ such that

ϕ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0, (1)

called a complementarity function [5].
The equivalence (1) allows us to conclude that a complementarity function is nondifferentiable: its zero level set is the boundary of the nonnegative orthant of the ab-plane, a curve with a corner at (0, 0).
Given a complementarity function ϕ, we define the function Φ : ℝⁿ → ℝⁿ componentwise by Φ(x)ᵢ = ϕ(xᵢ, Fᵢ(x)), i = 1, …, n, and the nonlinear system of equations

Φ(x) = 0, (2)

which is nondifferentiable due to the lack of smoothness of ϕ. From (1), it follows that a vector x* solves the NCP if and only if it solves the system (2).
Two widely used examples of complementarity functions are

ϕ_min(a, b) = min{a, b}  and  ϕ_FB(a, b) = √(a² + b²) − a − b,

which are called the Minimum function [6] and the Fischer-Burmeister function [7], respectively.
Another example of a complementarity function is the family ϕ_λ introduced in [7] and defined by

ϕ_λ(a, b) = √((a − b)² + λab) − a − b, (3)

where λ ∈ (0, 4).
It is important to observe that the Minimum function [6] (recovered, up to the factor 2, in the limit λ → 0) and the Fischer-Burmeister function (the case λ = 2) are particular cases of the family ϕ_λ.
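These special cases can be checked numerically. The following sketch (plain Python; function names are illustrative) writes out ϕ_λ from (3) and verifies that λ = 2 reproduces the Fischer-Burmeister function, while small λ approaches −2 min{a, b}.

```python
import math

def phi_lam(a, b, lam):
    # Family (3): phi_lambda(a, b) = sqrt((a - b)^2 + lam*a*b) - a - b,
    # with lam in (0, 4).
    return math.sqrt((a - b) ** 2 + lam * a * b) - a - b

def phi_fb(a, b):
    # Fischer-Burmeister function: sqrt(a^2 + b^2) - a - b.
    return math.sqrt(a * a + b * b) - a - b

# lam = 2: (a - b)^2 + 2ab = a^2 + b^2, so phi_2 coincides with FB.
for a, b in [(1.0, 2.0), (0.5, 0.0), (3.0, -1.0)]:
    assert abs(phi_lam(a, b, 2.0) - phi_fb(a, b)) < 1e-12

# lam -> 0: sqrt((a - b)^2) - a - b = |a - b| - a - b = -2*min(a, b).
print(phi_lam(1.0, 2.0, 1e-12))  # close to -2*min(1, 2) = -2.0
```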
The nonsmooth system of nonlinear equations (2), equivalently the NCP, has been solved using nonsmooth methods of Newton [8] and quasi-Newton [9,10,11,12] type, and smooth methods [13,14,15]. These methods are based on Clarke's generalized Jacobian [16] of a Lipschitz continuous function G : ℝⁿ → ℝⁿ, defined by

∂G(x) = conv{ lim_{k→∞} G′(x^k) : x^k → x, x^k ∈ D_G }, (4)

where D_G denotes the set of all points where G is differentiable and conv(A) is the convex hull of A. In general, this set is difficult to compute [17]; in this paper, we use the overestimation

∂G(x) ⊆ ∂_C G(x) := ∂G₁(x) × ⋯ × ∂Gₙ(x),

given by [18, Proposition 2.6.2 (e)], where the right-hand side (often easier to compute [19]) denotes the set of matrices in ℝⁿˣⁿ whose i-th column is given by the generalized gradient of the i-th component function Gᵢ. The set ∂_C G(x) is called the C-subdifferential of G at x.
The nonsmooth Newton methods [8,19] solve at each iteration the generalized Newton equation

H_k d^k = −Φ(x^k),  where H_k ∈ ∂_C Φ(x^k). (5)

A way to deal with the nonsmoothness of Φ and to solve (2) is to use a Jacobian smoothing method, introduced in [20]. The basic idea of this type of method is to approximate Φ by a smooth operator Φ_µ : ℝⁿ → ℝⁿ, where µ > 0 denotes the smoothing parameter, and then to solve a sequence of problems forcing µ to go to zero. For this, a Jacobian smoothing method tries to solve at each iteration the mixed Newton equation

Φ′_µ(x^k) d^k = −Φ(x^k), (7)

where Φ′_µ(x^k) is the Jacobian matrix of the function Φ_µ at x^k. The linear system (7) uses the unperturbed right-hand side of equation (5), but it replaces the matrix H_k ∈ ∂_C Φ(x^k) by a suitable approximation Φ′_µ(x^k).
The authors in [20] developed the convergence theory of Jacobian smoothing methods for a special class of functions. A new algorithm for general functions was presented in [17], where the authors use the Jacobian smoothing strategy to solve the NCP by reformulating it as a system of nonlinear equations using the Fischer-Burmeister complementarity function.
The good results obtained in [17] and the fact that the family of functions (3) has not been used in connection with the Jacobian smoothing method motivated us to use that strategy to solve the NCP through its reformulation as a nondifferentiable nonlinear system, using the one-parameter family of complementarity functions (3), which we have analyzed and used recently in [21,22]. That is, we consider the nonsmooth nonlinear system of equations

Φ_λ(x) = 0,  where Φ_λ(x)ᵢ = ϕ_λ(xᵢ, Fᵢ(x)), i = 1, …, n. (8)

The function Φ_λ is locally Lipschitz continuous because of the Lipschitz continuity of ϕ_λ (see [23]). Therefore, ∂Φ_λ(x) exists.
In this paper, we propose and analyze a smoothing of the family of nonlinear complementarity functions proposed in [7], and we use its properties in combination with the smooth Jacobian strategy used in [17] to present a new generalized Newton-type algorithm for solving a nonsmooth system of equations equivalent to the Nonlinear Complementarity Problem. In addition, we prove that the algorithm converges locally and q-quadratically, and we analyze its numerical performance.
The organization of this paper is as follows: In Section 2, we present a smoothing of a one-parameter family of complementarity functions (3), and analyze its properties. In Section 3, we present a new algorithm that uses the Jacobian smoothing strategy and develop its local convergence theory. In Section 4, we present numerical experiments which permit us to analyze the performance of the proposed algorithm. Finally, in Section 5, we present our concluding remarks.

Smoothing a family of complementarity functions
In this section, we define and analyze a smooth approximation of the family of complementarity functions (3) introduced in [7], which was redefined and analyzed in detail in [23]. Following this reference, we use the notation G_λ(a, b) for the first term on the right side of (3), that is, G_λ(a, b) = √((a − b)² + λab).
Definition 2.1. The function ϕ_λµ given by

ϕ_λµ(a, b) = G_λµ(a, b) − a − b,  with G_λµ(a, b) = √((a − b)² + λab + 2µ),  µ > 0, (9)

is a smooth approximation of the family of complementarity functions ϕ_λ.
The function ϕ_λµ is well defined for all λ ∈ (0, 4) and for all µ > 0. Indeed, writing w = (a, b)ᵀ, we have (a − b)² + λab = wᵀKw, where

K = ( 1        λ/2 − 1
      λ/2 − 1  1 ) (10)

is a symmetric and positive definite matrix [23], so the argument of the square root in (9) is positive.
The following lemma guarantees that ϕ_λµ is a "perturbed" complementarity function.

Lemma 2.2. Let λ ∈ (0, 4) and µ > 0. Then ϕ_λµ(a, b) = 0 if and only if a > 0, b > 0 and ab = 2µ/(4 − λ).
Proof. It is a direct consequence of the definition of ϕ λµ .
A useful bound for later theoretical developments is given in the following lemma.

Lemma 2.3. Let α_min > 0 be the smallest eigenvalue of the matrix K in (10). Then, for all (a, b) ∈ ℝ² and all µ > 0, √α_min ‖(a, b)‖₂ ≤ G_λµ(a, b).

Proof. Let α_min > 0 be the smallest eigenvalue of the matrix K in (10); then α_min ‖(a, b)‖₂² ≤ (a, b) K (a, b)ᵀ = G_λ(a, b)² ≤ G_λµ(a, b)², where the last inequality is given by [23, Lemma 1].
Observe that the function ϕ_λ is nondifferentiable at (0, 0), but its smoothing function ϕ_λµ is differentiable, and its gradient vector is defined by

∇ϕ_λµ(a, b) = ∇G_λµ(a, b) − (1, 1)ᵀ. (13)

For later use, we will denote the partial derivatives of G_λµ as follows

α_λµ(a, b) = (2(a − b) + λb) / (2 G_λµ(a, b)),  β_λµ(a, b) = (2(b − a) + λa) / (2 G_λµ(a, b)), (14)

and the derivatives of G_λ by

α_λ(a, b) = (2(a − b) + λb) / (2 G_λ(a, b)),  β_λ(a, b) = (2(b − a) + λa) / (2 G_λ(a, b)). (16)

From (13), (16) and the triangle inequality, we have

‖∇ϕ_λµ(a, b)‖₂ ≤ ‖∇G_λµ(a, b)‖₂ + ‖(1, 1)‖₂ ≤ √2 + √2 = 2√2. (17)

Following [23], we have a matrix expression for ∇G_λµ(a, b) analogous to the one given for ∇G_λ(a, b), namely

∇G_λµ(a, b) = K(a, b)ᵀ / G_λµ(a, b),

where K is the matrix given by (10), which satisfies ‖K‖₂ < 2.
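Formulas of this kind can be sanity-checked against central finite differences. The sketch below does so, including at the kink (0, 0) of ϕ_λ; the smoothing term 2µ under the square root follows the definition above, and all function names are illustrative.

```python
import math

def g_lam_mu(a, b, lam, mu):
    # Smoothed term: sqrt((a - b)^2 + lam*a*b + 2*mu); for mu > 0 the
    # argument is strictly positive, so the square root is smooth everywhere.
    return math.sqrt((a - b) ** 2 + lam * a * b + 2.0 * mu)

def phi_lam_mu(a, b, lam, mu):
    return g_lam_mu(a, b, lam, mu) - a - b

def grad_phi_lam_mu(a, b, lam, mu):
    # Gradient (13): (alpha - 1, beta - 1), with alpha, beta the partial
    # derivatives of G_lam_mu as in (14).
    g = g_lam_mu(a, b, lam, mu)
    alpha = (2.0 * (a - b) + lam * b) / (2.0 * g)
    beta = (2.0 * (b - a) + lam * a) / (2.0 * g)
    return (alpha - 1.0, beta - 1.0)

# Central finite-difference check, including the kink (0, 0) of phi_lambda.
lam, mu, h = 1.5, 0.1, 1e-6
for a, b in [(0.0, 0.0), (0.3, -0.7)]:
    ga, gb = grad_phi_lam_mu(a, b, lam, mu)
    fa = (phi_lam_mu(a + h, b, lam, mu) - phi_lam_mu(a - h, b, lam, mu)) / (2 * h)
    fb = (phi_lam_mu(a, b + h, lam, mu) - phi_lam_mu(a, b - h, lam, mu)) / (2 * h)
    assert abs(ga - fa) < 1e-5 and abs(gb - fb) < 1e-5
```

At the origin the gradient evaluates to exactly (−1, −1), illustrating that the smoothing removes the kink.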
An important property of the smoothing function ϕ_λµ is that it is a uniformly continuous function; moreover, it is Lipschitz continuous, as guaranteed by the following lemma.

Lemma 2.4. The function ϕ_λµ is Lipschitz continuous with constant 2√2; that is, for all x, y ∈ ℝ², |ϕ_λµ(x) − ϕ_λµ(y)| ≤ 2√2 ‖x − y‖₂.
Proof. Let x, y ∈ ℝ². By the Mean Value Theorem, there exists a vector z on the segment joining x and y such that ϕ_λµ(x) − ϕ_λµ(y) = ∇ϕ_λµ(z)ᵀ(x − y); using the Cauchy-Schwarz inequality and the bound (17), we have |ϕ_λµ(x) − ϕ_λµ(y)| ≤ ‖∇ϕ_λµ(z)‖₂ ‖x − y‖₂ ≤ 2√2 ‖x − y‖₂; thus, we conclude that ϕ_λµ is Lipschitz continuous with constant 2√2.
Corollary 2.5. The function G_λµ is Lipschitz continuous; that is, for all x, y ∈ ℝ², we have |G_λµ(x) − G_λµ(y)| ≤ √2 ‖x − y‖₂.

Proof. It is analogous to the proof of the previous lemma.
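The Lipschitz constant 2√2 for ϕ_λµ can be probed empirically. The following sketch (with the same assumed smoothing term 2µ as in the earlier example) samples random pairs of points and checks that the difference quotient never exceeds the bound.

```python
import math
import random

def phi_lam_mu(a, b, lam, mu):
    # Smoothed family member: sqrt((a - b)^2 + lam*a*b + 2*mu) - a - b.
    return math.sqrt((a - b) ** 2 + lam * a * b + 2.0 * mu) - a - b

# Empirically check |phi(x) - phi(y)| <= 2*sqrt(2) * ||x - y||.
random.seed(0)
lam, mu, L = 3.0, 0.05, 2.0 * math.sqrt(2.0)
worst = 0.0
for _ in range(10000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    num = abs(phi_lam_mu(*x, lam, mu) - phi_lam_mu(*y, lam, mu))
    den = math.hypot(x[0] - y[0], x[1] - y[1])
    if den > 0:
        worst = max(worst, num / den)
assert worst <= L
print(worst)  # observed ratio stays below 2*sqrt(2) ~ 2.828
```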

Algorithm and convergence theory
Following the idea of Jacobian smoothing methods presented in the introduction of this paper, we propose the following basic algorithm for solving the nonsmooth nonlinear system of equations (8).

Algorithm 3.1. Given x⁰ ∈ ℝⁿ, λ ∈ (0, 4) and a sequence {µ_k} of positive smoothing parameters converging to zero, for k = 0, 1, 2, … solve the mixed Newton equation

Φ′_λµₖ(x^k) d^k = −Φ_λ(x^k)  and set  x^{k+1} = x^k + d^k. (22)

It is important to observe that we use the Jacobian matrix Φ′_λµₖ(x^k) of the smooth operator Φ_λµₖ instead of an element of the C-subdifferential ∂_C Φ_λ(x^k). The first result characterizes the matrices of the C-subdifferential of Φ_λ at x.
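To make the scheme concrete, here is a minimal self-contained sketch in Python (the paper's experiments use MATLAB). The map F, the starting point, the smoothing term 2µ under the square root, and the geometric update of µ_k are all assumptions of this illustration; this is not one of the paper's test problems.

```python
import numpy as np

def F(x):
    # Hypothetical two-dimensional NCP map with solution x* = (1, 0):
    # x1 = 1 makes F1 = 0, and x2 = 0 with F2 = 0.5 > 0.
    return np.array([x[0] - 1.0, x[1] + 0.5])

def JF(x):
    return np.eye(2)

lam = 3.0

def Phi(x, mu=0.0):
    # Componentwise phi_lambda_mu(x_i, F_i(x)); mu = 0 gives Phi_lambda.
    Fx = F(x)
    return np.array([np.sqrt((x[i] - Fx[i]) ** 2 + lam * x[i] * Fx[i] + 2.0 * mu)
                     - x[i] - Fx[i] for i in range(len(x))])

def J_Phi_mu(x, mu):
    # Smoothed Jacobian: row i is (alpha_i - 1) e_i^T + (beta_i - 1) grad F_i^T.
    Fx, Jx = F(x), JF(x)
    J = np.zeros((len(x), len(x)))
    for i in range(len(x)):
        a, b = x[i], Fx[i]
        g = np.sqrt((a - b) ** 2 + lam * a * b + 2.0 * mu)
        alpha = (2.0 * (a - b) + lam * b) / (2.0 * g)
        beta = (2.0 * (b - a) + lam * a) / (2.0 * g)
        J[i] = (beta - 1.0) * Jx[i]
        J[i, i] += alpha - 1.0
    return J

# Jacobian smoothing iteration: unsmoothed residual, smoothed Jacobian,
# with the smoothing parameter mu_k driven to zero geometrically.
x, mu = np.array([2.0, 2.0]), 0.1
for k in range(50):
    r = Phi(x)                      # nonsmooth residual Phi_lambda(x)
    if np.linalg.norm(r) < 1e-10:
        break
    x = x - np.linalg.solve(J_Phi_mu(x, mu), r)
    mu *= 0.1                       # force mu_k -> 0
print(x)  # converges to approximately (1, 0)
```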
Lemma 3.2. For an arbitrary x ∈ ℝⁿ, each H ∈ ∂_C Φ_λ(x) has the form

H = D_a + D_b F′(x),

where D_a = diag(a₁(x), …, aₙ(x)) and D_b = diag(b₁(x), …, bₙ(x)) are diagonal matrices whose i-th diagonal elements are given by (aᵢ(x), bᵢ(x)) = ∇ϕ_λ(xᵢ, Fᵢ(x)) if (xᵢ, Fᵢ(x)) ≠ (0, 0), and by an element of the generalized gradient ∂ϕ_λ(0, 0) otherwise.

Proof. It follows directly from the definition of the C-subdifferential and from [7, Proposition 2.5].
By the differentiability of Φ_λµ at x^k, the Jacobian matrix Φ′_λµ(x^k) is given rowwise by

(Φ′_λµ(x^k))ᵢ = (α_λµ(xᵢ^k, Fᵢ(x^k)) − 1) eᵢᵀ + (β_λµ(xᵢ^k, Fᵢ(x^k)) − 1) ∇Fᵢ(x^k)ᵀ,

with e₁, …, eₙ the canonical vectors in ℝⁿ, and where α_λµ and β_λµ are the functions defined by (14). The next lemma guarantees that, as the parameter µ tends to zero, the distance between the matrix Φ′_λµ(x) and the set ∂_C Φ_λ(x) also tends to zero. Therefore, it is reasonable to replace the generalized Newton iteration (5) by the iteration (22).
Lemma 3.3. Let x ∈ ℝⁿ be arbitrary but fixed and µ > 0. Then we have

lim_{µ→0} dist(Φ′_λµ(x), ∂_C Φ_λ(x)) = 0. (23)

Proof. Since Φ′_λµ(x) converges, as µ tends to zero, to a matrix of the set ∂_C Φ_λ(x), the infimum defining the distance tends to zero and (23) is satisfied.
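At a point where ϕ_λ is differentiable, the C-subdifferential reduces componentwise to the single gradient ∇ϕ_λ, so the lemma can be observed numerically. A small sketch (again assuming the smoothing term 2µ under the square root):

```python
import math

lam = 2.5
a, b = 1.0, 2.0   # a point where phi_lambda is differentiable

def grad_G(a, b, mu):
    # Gradient of G_lam_mu; mu = 0 gives the gradient of the unsmoothed G_lam.
    g = math.sqrt((a - b) ** 2 + lam * a * b + 2.0 * mu)
    return ((2.0 * (a - b) + lam * b) / (2.0 * g),
            (2.0 * (b - a) + lam * a) / (2.0 * g))

exact = grad_G(a, b, 0.0)
dists = []
for mu in [1.0, 1e-2, 1e-4, 1e-8]:
    ga, gb = grad_G(a, b, mu)
    dists.append(math.hypot(ga - exact[0], gb - exact[1]))
print(dists)  # strictly decreasing toward 0 as mu -> 0
assert dists == sorted(dists, reverse=True) and dists[-1] < 1e-6
```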
The following hypotheses allow us to prove that Algorithm 3.1 is well defined and converges to a solution of (8).
H1. The system (8) has a solution x* ∈ ℝⁿ.
H2. The Jacobian matrix of F is locally Lipschitz continuous.
H3. All matrices H* ∈ ∂_C Φ_λ(x*) are nonsingular.
Next, we present a technical lemma that will be useful in the proof of Lemma 3.5.
Lemma 3.4. Assume H1 and H3, and let r ∈ (0, 1). There exists a positive constant ε such that, if ‖x − x*‖∞ < ε, then the matrix Φ′_λµ(x) is nonsingular, the function defined by

g(x) = x − Φ′_λµ(x)⁻¹ Φ_λ(x) (25)

is well defined, and

‖g(x) − x*‖∞ ≤ r ‖x − x*‖∞. (26)

Proof. In order to prove that Φ′_λµ(x) is nonsingular, we use Banach's Lemma [14]. To do this, we find a bound for ‖Φ′_λµ(x) − H*‖∞, where H* ∈ ∂_C Φ_λ(x*). Using the definition of the C-subdifferential, we have that the matrix H* has the form H* = lim_{k→∞} Φ′_λ(y^k), where the sequence y^k converges to x* and satisfies that Φ′_λ(y^k) exists.
We bound ‖Φ′_λµ(x) − H*‖∞ using the definition of the matrix infinity norm; thus, the bound is attained at some row j ∈ {1, 2, …, n}. By Lemma 2.7, we obtain a first estimate (28), and by the continuity of F′, for every ε₁ > 0 there exists δ > 0 such that ‖x − x*‖∞ < δ implies ‖F′(x) − F′(x*)‖∞ < ε₁. We consider the two possibilities for the maximum in (28); in both cases, Banach's Lemma applies (see [14]) and the function g is well defined. In addition, ‖Φ′_λµ(x)⁻¹‖∞ is bounded (29).

In the second part of the proof, we prove (26). For this, subtracting x* in (25), using ‖·‖∞, and performing some algebraic manipulations, we obtain an estimate of ‖g(x) − x*‖∞. Moreover, by Theorem 2.3 in [7], for all ρ > 0 there exists ε₂ > 0 such that, if ‖x − x*‖∞ < ε₂, then (32) holds; in particular, this is valid for ρ < r. On the other hand, adding and subtracting H* in ‖Φ′_λµ(x) − H‖∞ and using the triangle inequality, we obtain the bound (33). By (28), the first term of (33) is bounded by 2n η M, and by [23, Lemma 4.2] the second term is bounded in terms of the Lipschitz constant ζ̄ of ∇ϕ_λ. Then (34) follows, and (35) holds with θ = 2n(τ + η)ζ̄. Thus, from (29), (32), and (35), we conclude (26) with c = 2β(θ + γ).

The following lemma guarantees that the proposed algorithm is well defined, that it converges linearly, and it gives a sufficient condition for q-quadratic convergence.
Lemma 3.5. There exists a positive ε₀ such that, if ‖x⁰ − x*‖∞ < ε₀, then Algorithm 3.1 generates a well defined sequence {x^k} which converges to x* and, given r ∈ (0, 1), satisfies

‖x^{k+1} − x*‖∞ ≤ r ‖x^k − x*‖∞.

Moreover, if the Jacobian matrix of F satisfies the assumption H2, then

‖x^{k+1} − x*‖∞ ≤ c ‖x^k − x*‖∞²,

where c is the constant of Lemma 3.4.

Numerical experiments
In this section, we analyze numerically the Jacobian smoothing algorithm introduced in the previous section (Algorithm 1). To do this, we compare our algorithm with the one proposed in [17] (a particular case of our algorithm when λ = 2), which we call Algorithm 2, and with a Newton method that uses matrices in the C-subdifferential of Φ_λ at x^k.
We use five test problems, four of them chosen from [6]; the fifth is [24, Example A]. We describe each of these functions below, as well as the initial point used (x⁰) and the solutions found (x*).

Example A in [24]
Let F : ℝ⁵ → ℝ⁵ be defined as in [24, Example A]. To write the codes of the algorithms and the test functions, we used MATLAB®, and we performed the numerical tests on a computer with an Intel(R) Core(TM) i5-4200M CPU @ 2.50 GHz processor. We used as convergence criterion ‖Φ_λ(x^k)‖ < 10⁻⁶ or ‖Φ_λµ(x^k)‖ < 10⁻⁶, according to the case. We declared divergence if the number of iterations exceeded 500.
For each problem and with each algorithm, we compared the number of iterations and the time (in seconds) used by the algorithm to obtain convergence for each value of λ. The results obtained are presented below in ten graphics. Fig. 1 shows that the Newton method converges only for λ ≥ 3.3, with more iterations than the proposed algorithm, which converges for λ ≥ 2; the number of iterations is the same for the two sequences {µ_k} used, so their graphics match (Fig. 1A). On the other hand, Algorithm 2 diverges. In addition, Algorithm 1 is the most efficient. Fig. 2 shows that convergence was obtained for all the algorithms and all the values of λ. The number of iterations for the Newton method is slightly smaller than the other ones (Fig. 2A); the proposed smoothing has a fairly competitive behavior. In terms of computational time, the proposed algorithm lies between the generalized Newton method and Algorithm 2 (Fig. 2B). In Fig. 3, we observe that the generalized Newton method converges for λ ≥ 1.8, and the proposed algorithm converges for λ ≥ 1.5; in terms of number of iterations and convergence time, these two methods have a similar behavior for 1.5 ≤ λ ≤ 3.1. For λ > 3.1, the proposed algorithm is the best. Finally, in Fig. 5, we observe convergence of all the considered methods for all values of λ. Again, the proposed algorithm, Algorithm 1, is quite competitive with respect to the Newton method. On the other hand, in our proposal, the selection of the sequence {µ_k} has an important role in the convergence rate.

Concluding remarks
In this paper, we propose a new generalized Newton-type algorithm for solving Nonlinear Complementarity Problems based on its reformulation as a nonsmooth system of equations. To do this, we introduce a smoothing of the family of nonlinear complementarity functions presented in [7] and analyze its properties in combination with the smooth Jacobian strategy used in [17]. We prove local and quadratic convergence for the new algorithm.
We present preliminary numerical tests that show a competitive numerical performance of the proposed algorithm compared with the traditional generalized Newton method and with the Jacobian smoothing Newton method that uses the Fischer-Burmeister complementarity function, which is a particular case of the family that we consider in our proposal. These numerical tests allow us to observe that the choice of the sequence {µ_k} plays an important role in the convergence of the algorithm.
We consider that more numerical tests are needed, varying the choice of λ as well as that of {µ_k}. Moreover, a globalization of the proposed algorithm, together with its global convergence analysis, is also needed.