
Commit a8f82b1

fix typos in Readme
1 parent 21e1373 commit a8f82b1

File tree

1 file changed (+64, −62 lines)


NLDF/Readme.md

Lines changed: 64 additions & 62 deletions
@@ -9,11 +9,11 @@

### NAG Library Install

-To run this example, you will need to install the NAG Library (Mark 28.5 or newer) and a license key. You will also need the NAG Library for Java wrappers. You can find the software and wrappers, and request a license key, on our website: [Getting Started with NAG Library](https://www.nag.com/content/getting-started-nag-library)
+To run this example, you will need to install the NAG Library (Mark 28.5 or newer) and a license key. You will also need the NAG Library for Java wrappers. You can find the software and wrappers, and request a license key, on our website: [Getting Started with NAG Library](https://www.nag.com/content/getting-started-nag-library?lang=jv)

### Using These Example Files

-This directory contains a Java source code and a couple of image files to help illustrate this example. With the NAG Library and wrappers properly installed, the Java file can be compiled and run as is to produce a data set and execute all the example scenarios described below.
+This directory contains the [Java source code](./GenDFEx.java) and a couple of image files to help illustrate this example. With the NAG Library and wrappers properly installed, the Java file can be compiled and run as is to produce a data set and execute all the example scenarios described below.


## Introduction
@@ -34,11 +34,11 @@ import com.nag.routines.E04.E04GN; // nagf_opt_handle_solve_nldf
import com.nag.routines.E04.E04GNU; // monit
import com.nag.routines.E04.E04GNX; // confun dummy
import com.nag.routines.E04.E04GNY; // congrd dummy
-import com.nag.routines.E04.E04RA; // Handle init
-import com.nag.routines.E04.E04RH; //box bounds
+import com.nag.routines.E04.E04RA; // handle init
+import com.nag.routines.E04.E04RH; // box bounds
import com.nag.routines.E04.E04RJ; // linear constraints
import com.nag.routines.E04.E04RM; // add model and residual sparsity structure
-import com.nag.routines.E04.E04RZ; //destroy handle
+import com.nag.routines.E04.E04RZ; // destroy handle
import com.nag.routines.E04.E04ZM; // optional parameters

import java.lang.Math;
@@ -48,74 +48,74 @@ import java.util.Arrays;

### Utility Functions and Setup
We need to define a few dummy functions required by the Generalized Data Fitting solver interface.
```java
-private static class CONFUN extends E04GNX implements E04GN.E04GN_CONFUN {
+public static class CONFUN extends E04GN.Abstract_E04GN_CONFUN {
    public void eval(){
-        super.eval();
+        // dummy: this example has no nonlinear constraints
    }
}

-private static class CONGRD extends E04GNY implements E04GN.E04GN_CONGRD {
+public static class CONGRD extends E04GN.Abstract_E04GN_CONGRD {
    public void eval(){
-        super.eval();
+        // dummy: no nonlinear constraint gradients are needed
    }
}

-private static class MONIT extends E04GNU implements E04GN.E04GN_MONIT {
+public static class MONIT extends E04GN.Abstract_E04GN_MONIT {
    public void eval(){
-        super.eval();
+        // dummy: no monitoring between iterations
    }
}
```

Inside our `main`, we will initialize all our variables and create our handle.
```java
E04GN e04gn = new E04GN(); // the solver
E04RA e04ra = new E04RA(); // the handle initializer
E04RM e04rm = new E04RM(); // for setting model and residual sparsity structure
E04ZM e04zm = new E04ZM(); // for setting optional parameters
E04RZ e04rz = new E04RZ(); // handle destroyer

MONIT monit = new MONIT();    // defined below using E04GNU
CONFUN confun = new CONFUN(); // defined below using E04GNX (dummy)
CONGRD congrd = new CONGRD(); // defined below using E04GNY (dummy)

double [] x = new double [2]; // instantiate an array with one entry per variable
long handle = 0;
int nvar = x.length;
int ifail;
int nres = t.length;

// Init for sparsity structure
int isparse = 0;
int nnzrd = 0;
int [] irowrd = new int [nnzrd];
int [] icolrd = new int [nnzrd];

// Get handle
ifail = 0;
e04ra.eval(handle, nvar, ifail);
handle = e04ra.getHANDLE();

// Define the residual functions and sparsity structure
ifail = 0;
e04rm.eval(handle, nres, isparse, nnzrd, irowrd, icolrd, ifail);

// Set options
ifail = 0;
e04zm.eval(handle, "NLDF Loss Function Type = L2", ifail);
e04zm.eval(handle, "Print Level = 1", ifail);
e04zm.eval(handle, "Print Options = No", ifail);
e04zm.eval(handle, "Print Solution = Yes", ifail);

// Initialize all the remaining parameters
LSQFUN lsqfun = new LSQFUN();
LSQGRD lsqgrd = new LSQGRD();
double [] rx = new double[nres];
double [] rinfo = new double[100];
double [] stats = new double [100];
int [] iuser = new int[0];
long cpuser = 0;
```

We also define $$t$$ as an array of 21 points from $$0.5$$ to $$2.5$$.
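A minimal sketch of how such an array might be built (illustrative only; this snippet is not part of the commit, and the actual construction lives in GenDFEx.java):
```java
// Illustrative sketch: 21 evenly spaced points from 0.5 to 2.5 (step 0.1)
double [] t = new double[21];
for (int i = 0; i < t.length; i++) {
    t[i] = 0.5 + 0.1 * i; // t[0] = 0.5, ..., t[20] = 2.5
}
```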
@@ -127,7 +127,7 @@ To investigate the robustness aspect, here’s a toy dataset which is generated
![toy1](images/fig1.png)

```java
-private static double[] toydata1(double [] t) {
+public static double[] toydata1(double [] t) {
    double [] y = new double[t.length * 2];
    for(int i = 0; i < t.length * 2; i++){
        if(i < t.length){
@@ -154,15 +154,15 @@ using a variety of loss functions provided by NAG’s data-fitting solver **hand

To set up this model, we must define it and its gradient inside the functions `LSQFUN` and `LSQGRD`.
```java
-private static class LSQFUN extends E04GN.Abstract_E04GN_LSQFUN {
+public static class LSQFUN extends E04GN.Abstract_E04GN_LSQFUN {
    public void eval() {
        for(int i = 0; i < NRES; i++){
            this.RX[i] = RUSER[NRES + i] - X[0] * Math.sin(X[1] * RUSER[i]);
        }
    }
}

-private static class LSQGRD extends E04GN.Abstract_E04GN_LSQGRD {
+public static class LSQGRD extends E04GN.Abstract_E04GN_LSQGRD {
    public void eval() {
        for(int i = 0; i < NRES; i++){
            this.RDX[i * NVAR] = (-1 * Math.sin(X[1]*RUSER[i]));
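            // For reference (derived from the residual definition above; not part of this diff):
            // the model is phi(t;x) = x1*sin(x2*t), so r_i = y_i - x1*sin(x2*t_i),
            // giving dr_i/dx1 = -sin(x2*t_i), the entry set on the line above, and
            // dr_i/dx2 = -x1*t_i*cos(x2*t_i) for the other column of the gradient.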
@@ -190,7 +190,7 @@ to see what this outlier does to the minimum.
For this Java example, we set up a function to reset the $$x$$ variable to the starting point, since it is passed to the solver and comes back holding the solution.

```java
-private static double[] init_x() {
+public static double[] init_x() {
    double [] x = new double [] {2.1,1.4};
    return x;
}
@@ -269,7 +269,7 @@ To illustrate this, here’s a new dataset which we will try to fit with the sam
![toy2](images/fig4.png)

```java
-private static double[] toydata2(double [] t) {
+public static double[] toydata2(double [] t) {
    double [] y = new double[t.length * 2];
    for(int i = 0; i < t.length * 2; i++){
        if(i < t.length){
@@ -286,7 +286,7 @@ private static double[] toydata2(double [] t) {
}
```

-We will fit this data set using 3 different loss functions with the same model $$\varphi(t;x)$$ each time and discuss the results under the plots all at once below.
+We will fit this data set using three different loss functions with the same model $$\varphi(t;x)$$ each time, and discuss the results together under the plots below. Lastly, we will destroy the handle.

```java
ifail = 0;
@@ -306,6 +306,8 @@ x = init_x();
e04zm.eval(handle, "NLDF Loss Function Type = ATAN", ifail);
e04gn.eval(handle, lsqfun, lsqgrd, confun, congrd, monit, nvar, x, nres, rx, rinfo,
           stats, iuser, ruser2, cpuser, ifail);
+
+e04rz.eval(handle,ifail); // Destroy the handle
```
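As an aside, each fit in this walkthrough repeats one pattern: reset the start point, switch the loss function via an option string, and re-solve. Here is a hedged sketch of that pattern (not part of the commit; the option value `L1` for the $$l_1$$-norm case is an assumption by analogy with the `L2` and `ATAN` strings shown above):

```java
// Hypothetical consolidation of the per-loss pattern used in this example
String[] losses = {"L2", "L1", "ATAN"}; // "L1" assumed; "L2" and "ATAN" appear in the diff
for (String loss : losses) {
    x = init_x(); // restart from x = (2.1, 1.4)
    ifail = 0;
    e04zm.eval(handle, "NLDF Loss Function Type = " + loss, ifail);
    e04gn.eval(handle, lsqfun, lsqgrd, confun, congrd, monit, nvar, x, nres, rx, rinfo,
               stats, iuser, ruser2, cpuser, ifail);
}
e04rz.eval(handle, ifail); // finally, destroy the handle
```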
Here are all the curves plotted together:
@@ -327,7 +329,7 @@ $$

The fitted model and corresponding contour plot for the $$\arctan$$ case are in the middle. Here, there are eight local minima in the contour plot for $$\arctan$$ loss, seven of which are substantially worse solutions than the global minimum, and it is one of these that we’ve converged to. Therefore, in this case the choice of initial parameter estimates is much more important.

The model fitted with $$l_1$$-norm loss and the corresponding contour plot are in the right column. Looking at the contour plot, there are still a few local minima that do not correspond to the optimal solution, but the starting point of $$x = (2.1,1.4)$$ still converges to the global minimum, which lies at
-$$x = (5,1)$$, meaning the part of the dataset generated from $$\sin(t)$$ is effectively being ignoring. From the plots of the loss functions, we can see that $$l_1$$-norm loss is more robust than $$l_2$$-norm loss but less so than $$\arctan$$ loss.
+$$x = (5,1)$$, meaning the part of the dataset generated from $$\sin(t)$$ is effectively being ignored. From the plots of the loss functions, we can see that $$l_1$$-norm loss is more robust than $$l_2$$-norm loss but less so than $$\arctan$$ loss.

So, what has happened in each case is: using $$l_2$$-norm loss, we move to the global minimum, which is affected by the whole dataset. Using $$l_1$$-norm loss, we move to the global minimum, which fits most of the data very well and ignores a small portion, treating it as outliers. Using $$\arctan$$ loss, we move to a local minimum which ignores a large portion of the data (treating it as outliers) and fits a small amount of data very well.
