curvefit.cpp
1 #include "curvefit.h"
2 #include "ekos/ekos.h"
3 #include <ekos_focus_debug.h>
4 
5 // Constants used to identify the number of parameters used for different curve types
6 constexpr int NUM_HYPERBOLA_PARAMS = 4;
7 constexpr int NUM_PARABOLA_PARAMS = 3;
8 // Parameters for the solver
9 // MAX_ITERATIONS is used to limit the number of iterations for the solver.
10 // A value needs to be entered that allows a solution to be found without iterating unnecessarily.
11 // There is a relationship with the tolerance parameters that follow.
12 constexpr int MAX_ITERATIONS = 500;
13 // The next 2 parameters are used as tolerances for convergence.
14 // Convergence is achieved if, for each component i of the parameter vector x:
15 // |dx_i| < INEPSABS + (INEPSREL * |x_i|)
16 constexpr double INEPSABS = 1e-5;
17 const double INEPSREL = pow(GSL_DBL_EPSILON, 1.0 / 3.0);
18 
19 // The functions here fit a number of different curves to the incoming data points using the Levenberg-Marquardt
20 // solver as provided by the GNU Scientific Library (GSL). The following sources of information are useful:
21 // www.wikipedia.org/wiki/Levenberg–Marquardt_algorithm - overview of the mathematics behind the algo
22 // www.gnu.org/software/gsl/doc/html/nls.html - GSL manual describing Nonlinear Least-Squares fitting.
23 //
24 // Levenberg-Marquardt (LM)
25 // ------------------------
26 // The Levenberg-Marquardt algorithm is a non-linear least-squares solver and is thus suitable for many
27 // different equations. The basic idea is to adjust the equation y = f(x,P) so that the computed
28 // y values are as close as possible to the y values of the datapoints provided, so that the resultant
29 // curve fits the data as well as it can. P is a set of parameters that are varied by the solver in order
30 // to find the best fit. The solver measures how far away the curve is at each data point, squares the
31 // result and adds them all up. This is the number to be minimised; let's call it S (sketched below).
32 //
33 // The solver is supplied with an initial guess for the parameters, P. It calculates S, makes an adjustment
34 // to P and calculates a new S1. Provided S1 < S, the solver has moved in the right direction and reduced S.
35 // It iterates through this procedure until one of the following happens:
36 // 1. S1 - S is less than a supplied limit (convergence has been reached), or
37 // 2. the maximum number of iterations has been reached, or
38 // 3. the solver encounters an error.
39 //
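// A short sketch of the quantity S being minimised (illustrative, using the notation above): for
// datapoints (x_i, y_i), each with a measurement deviation sigma_i, the solver minimises
// S(P) = sum over i of [ (f(x_i, P) - y_i) / sigma_i ] ^2
// This is exactly the sum of squares of the per-point residuals built in hypFx() / parFx() below,
// with sigma_i set to 1 for an unweighted fit.
//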
40 // The solver is capable of solving either an unweighted or weighted set of datapoints. In essence, an
41 // unweighted set of data gives equal weight to each datapoint when trying to fit a curve. An alternative is to
42 // weight each datapoint with a measure that corresponds to how accurate the measurement of the datapoint
43 // actually is. Given we are calculating HFRs of stars we can calculate the variance (standard deviation
44 // squared) in the measurement of HFR and use 1 / SD^2 as weighting. What this means is that if we have a
45 // datapoint where the SD in HFR measurements is small we would give this datapoint a relatively high
46 // weighting when fitting the curve, and if there was a datapoint with a higher SD in HFR measurements
47 // it would receive a lower weighting when trying to fit the curve to that point.
48 //
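// An illustrative example with made-up numbers: suppose the HFR at one focuser position was measured
// with SD = 0.1 and at another with SD = 0.5. The weights are 1 / 0.1^2 = 100 and 1 / 0.5^2 = 4
// respectively, so the solver tries 25 times harder to pass the curve close to the first (better
// measured) datapoint than the second. In the code below this is applied by dividing each residual
// by its sigma before it is squared and summed.
//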
49 // There are several optimisations to LM that speed up convergence for certain types of equations. This
50 // code uses the basic LM solver which keeps it generic if more equations are to be added. Of course, if
51 // required this could easily be tweaked. An optimisation that has been implemented is that since, in normal
52 // operation, the solver is run multiple times with the same or similar parameters, the initial guess for
53 // a solver run is set to the solution from the previous run.
54 //
55 // The documents referenced provide the maths of the LM algorithm. What is required is the function to be
56 // used, f(x), and the derivatives of f(x) with respect to the parameters of f(x). These partial derivatives form
57 // a matrix called the Jacobian, J(x). LM then finds a minimum. Mathematically it will find a minimum local
58 // to the initial guess, not necessarily the global minimum, but for the equations used this is not a
59 // problem under normal operation as there will only be 1 minimum. If the data is poor, however, the
60 // solver may find an "n" shaped solution rather than a "u", especially at the start of the process when there
61 // are only a small number of points and the error in measurement is high. We ignore this and continue; it
62 // should correct itself as more datapoints are added.
63 //
64 // The original algorithm used in Ekos fits a quadratic equation, which represents a parabolic curve. In order
65 // not to disturb historic code the original solution will be called Quadratic. It uses a linear least-squares
66 // model, not LM. The LM solver has been applied to a hyperbolic and a parabolic curve.
67 //
68 // Hyperbola
69 // ---------
70 // Equation y = f(x) = b * sqrt(1 + ((x - c) / a) ^2) + d
71 // This can be re-written as:
72 // f(x) = b * phi + d, where phi = sqrt(1 + ((x - c) / a) ^2)
73 // Jacobian J = {df/da, df/db, df/dc, df/dd} (a numerical cross-check of these derivatives is sketched after hypJx() below)
74 // df/da = b * (1 / 2) * (1 / phi) * -2 * ((x - c) ^2) / (a ^3)
75 // = -b * ((x - c) ^2) / ((a ^ 3) * phi)
76 // df/db = phi
77 // df/dc = b * (1 / 2) * (1 / phi) * 2 * (x - c) / (a ^2) * (-1)
78 // = -b * (x - c) / ((a ^2) * phi)
79 // df/dd = 1
80 //
81 // For a valid solution:
82 // 1. c must be > 0 and within the range of travel of the focuser.
83 // 2. b must be > 0 for a V shaped curve (b < 0 for an inverted V and b = 0 for a horizontal line).
84 // 3. b + d > 0. The minimum solution when x = c gives f(x) = b + d which must be > 0.
85 //
86 // Parabola
87 // --------
88 // Equation y = f(x) = a + b((x - c) ^2)
89 // Jacobian J = {df/da, df/db, df/dc}
90 // df/da = 1
91 // df/db = (x - c) ^2
92 // df/dc = -2 * b * (x - c)
93 //
94 // For a valid solution:
95 // 1. c must be > 0 and within the range of travel of the focuser.
96 // 2. b must be > 0 for a V shaped curve (b < 0 for an inverted V and b = 0 for a horizontal line).
97 // 3. a > 0. The minimum solution when x = c gives f(x) = a which must be > 0.
98 //
99 // Convergence
100 // -----------
101 // The following are key parameters that drive the LM solver. Currently these settings are coded into
102 // the program. It would be better to store them in a configuration file somewhere, but they aren't the
103 // sort of parameters users should have ready access to.
104 // MAX_ITERATIONS - This is how many iterations to do before aborting. The game here is to set a
105 // reasonable number to allow for convergence. In my testing with good data and a
106 // good start point <10 iterations are required. With good data and a poor start point
107 // <200 iterations are required. With poor data and a tight "tolerance" convergence
108 // may need >1000 iterations. Setting to 500 seems a good compromise.
109 // tolerance - Conceptually the convergence test could be done in 2 ways:
110 // - check the gradient, which would be 0 at the curve minimum
111 // - check the residuals, which are minimised at the curve minimum
112 // Currently we check on residuals.
113 // This is supported by 2 parameters, INEPSABS and INEPSREL.
114 // Convergence is achieved if, for each component i of the parameter vector x:
115 // |dx_i| < INEPSABS + (INEPSREL * |x_i|)
116 // Setting a slack tolerance will mean a range of x (focus positions) that are deemed
117 // valid solutions. Not good.
118 // Setting a tighter tolerance will require more iterations to solve, but setting too
119 // tight a tolerance is just wasting time if it doesn't improve the focus position, and
120 // if too tight the solver may not be able to find the solution, so a balance needs to
121 // be struck. For now let's just make these values constant and see how this works out.
122 // If this turns out not to work for some equipment, the next step would be to adjust
123 // these parameters based on the equipment profile, or to adapt the parameters starting
124 // with a loose tolerance and tightening as the curve gets nearer to a complete solution.
125 // If we inadvertently overtighten the tolerance and fail to converge, the tolerance
126 // could be slackened or more iterations used.
127 // I have found that the following work well on my equipment and the simulator
128 // for both hyperbola and parabola. The advice in the GSL documentation is to start
129 // with an absolute tolerance of 10^-d where d is the number of digits required in the
130 // solution precision of x (focuser position). The suggested starting point for the gradient tolerance is
131 // (machine precision)^1/3, which we use here as the relative tolerance.
132 // INEPSABS = 1e-5
133 // INEPSREL = GSL_DBL_EPSILON ^ 1/3
134 
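// Illustrative usage sketch only - the real calling code lives in the Focus module and is not part
// of this file. The data values are made up, and it is assumed that the CurveFit enum values
// (FOCUS_QUADRATIC / FOCUS_HYPERBOLA / FOCUS_PARABOLA) are accessible on the class, as they are when
// used by the Focus module.
static void curveFittingUsageExample()
{
    Ekos::CurveFitting curveFitting;

    // A sweep of focuser positions with the measured HFR and its standard deviation at each point
    const QVector<int>    positions { 36000, 36250, 36500, 36750, 37000 };
    const QVector<double> hfrs      { 3.1, 2.2, 1.8, 2.1, 3.0 };
    const QVector<double> sigmas    { 0.15, 0.10, 0.08, 0.11, 0.16 };

    // Fit a hyperbola, weighting each datapoint by 1 / sigma^2
    curveFitting.fitCurve(positions, hfrs, sigmas, Ekos::CurveFitting::FOCUS_HYPERBOLA, true);

    // Find the minimum of the fitted curve within the focuser's range of travel
    double bestPosition = 0.0, bestHFR = 0.0;
    if (curveFitting.findMin(36500, 35000, 38000, &bestPosition, &bestHFR,
                             Ekos::CurveFitting::FOCUS_HYPERBOLA))
        qCDebug(KSTARS_EKOS_FOCUS) << "Best focus at" << bestPosition << "HFR" << bestHFR
                                   << "R2" << curveFitting.calculateR2(Ekos::CurveFitting::FOCUS_HYPERBOLA);
}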
135 
136 namespace Ekos
137 {
138 
139 namespace
140 {
141 // Constants used to index m_coefficient arrays
142 enum { A_IDX = 0, B_IDX, C_IDX, D_IDX };
143 
144 // hypPhi() is a repeating part of the function calculations for Hyperbolas.
145 inline double hypPhi(double x, double a, double c)
146 {
147  return sqrt(1.0 + pow(((x - c) / a), 2.0));
148 }
149 
150 // Function to calculate f(x) for a hyperbola
151 // y = b * hypPhi(x, a, c) + d
152 double hypfx(double x, double a, double b, double c, double d)
153 {
154  return b * hypPhi(x, a, c) + d;
155 }
156 
157 // Calculates F(x) for each data point on the hyperbola
158 int hypFx(const gsl_vector * X, void * inParams, gsl_vector * outResultVec)
159 {
160  CurveFitting::DataPointT * DataPoints = ((CurveFitting::DataPointT *)inParams);
161 
162  double a = gsl_vector_get (X, A_IDX);
163  double b = gsl_vector_get (X, B_IDX);
164  double c = gsl_vector_get (X, C_IDX);
165  double d = gsl_vector_get (X, D_IDX);
166 
167  for(int i = 0; i < DataPoints->dps.size(); ++i)
168  {
169  // Hyperbola equation
170  double yi = hypfx(DataPoints->dps[i].x, a, b, c, d);
171 
172  // Weighted residual: dividing by sigma gives this datapoint a weight of 1 / sigma^2 in the sum of squares
173  gsl_vector_set(outResultVec, i, (yi - DataPoints->dps[i].y) / DataPoints->dps[i].sigma);
174  }
175 
176  return GSL_SUCCESS;
177 }
178 
179 // Calculates the Jacobian (derivative) matrix for the hyperbola
180 int hypJx(const gsl_vector * X, void * inParams, gsl_matrix * J)
181 {
182  CurveFitting::DataPointT * DataPoints = ((struct CurveFitting::DataPointT *)inParams);
183 
184  // Store current coefficients
185  double a = gsl_vector_get(X, A_IDX);
186  double b = gsl_vector_get(X, B_IDX);
187  double c = gsl_vector_get(X, C_IDX);
188 
189  // Store non-changing calculations
190  const double a2 = a * a;
191  const double a3 = a * a2;
192 
193  for(int i = 0; i < DataPoints->dps.size(); ++i)
194  {
195  // Calculate the Jacobian Matrix
196  const double oneBySigma = 1.0f / DataPoints->dps[i].sigma;
197  const double x = DataPoints->dps[i].x;
198  const double x_minus_c = x - c;
199 
200  gsl_matrix_set(J, i, A_IDX, -oneBySigma * b * (x_minus_c * x_minus_c) / (a3 * hypPhi(x, a, c)));
201  gsl_matrix_set(J, i, B_IDX, oneBySigma * hypPhi(x, a, c));
202  gsl_matrix_set(J, i, C_IDX, -oneBySigma * b * x_minus_c / (a2 * hypPhi(x, a, c)));
203  gsl_matrix_set(J, i, D_IDX, oneBySigma);
204  }
205 
206  return GSL_SUCCESS;
207 }
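
// Illustrative sketch, not used by the production code: cross-check the analytic hyperbola derivatives
// coded in hypJx() against central finite differences of hypfx(). The helper name and the step size h
// are arbitrary choices; the function returns the largest absolute discrepancy over the 4 partials.
double hypJacobianMaxError(double x, double a, double b, double c, double d, double h = 1e-6)
{
    // Analytic partial derivatives (the same formulas as hypJx(), without the 1/sigma scaling)
    const double phi  = hypPhi(x, a, c);
    const double dfda = -b * (x - c) * (x - c) / (a * a * a * phi);
    const double dfdb = phi;
    const double dfdc = -b * (x - c) / (a * a * phi);
    const double dfdd = 1.0;

    // Central finite differences of hypfx() with respect to each parameter in turn
    const double numA = (hypfx(x, a + h, b, c, d) - hypfx(x, a - h, b, c, d)) / (2.0 * h);
    const double numB = (hypfx(x, a, b + h, c, d) - hypfx(x, a, b - h, c, d)) / (2.0 * h);
    const double numC = (hypfx(x, a, b, c + h, d) - hypfx(x, a, b, c - h, d)) / (2.0 * h);
    const double numD = (hypfx(x, a, b, c, d + h) - hypfx(x, a, b, c, d - h)) / (2.0 * h);

    // Return the worst disagreement; with correct derivatives this should be close to zero
    double err = fabs(dfda - numA);
    if (fabs(dfdb - numB) > err) err = fabs(dfdb - numB);
    if (fabs(dfdc - numC) > err) err = fabs(dfdc - numC);
    if (fabs(dfdd - numD) > err) err = fabs(dfdd - numD);
    return err;
}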
208 
209 // Invokes F(x) and J(x). Seems to be an optimisation employed by the GSL library to calc these together
210 int hypFJx(const gsl_vector * x, void * inParams, gsl_vector * f, gsl_matrix * J)
211 {
212  hypFx(x, inParams, f);
213  hypJx(x, inParams, J);
214 
215  return GSL_SUCCESS;
216 }
217 
218 
219 // Function to calculate f(x) for a parabola.
220 double parfx(double x, double a, double b, double c)
221 {
222  return a + b * pow((x - c), 2.0);
223 }
224 
225 // Calculates f(x) for each data point in the parabola.
226 int parFx(const gsl_vector * X, void * inParams, gsl_vector * outResultVec)
227 {
228  CurveFitting::DataPointT * DataPoint = ((struct CurveFitting::DataPointT *)inParams);
229 
230  double a = gsl_vector_get (X, A_IDX);
231  double b = gsl_vector_get (X, B_IDX);
232  double c = gsl_vector_get (X, C_IDX);
233 
234  for(int i = 0; i < DataPoint->dps.size(); ++i)
235  {
236  // Parabola equation
237  double yi = parfx(DataPoint->dps[i].x, a, b, c);
238 
239  // Weighted residual: dividing by sigma gives this datapoint a weight of 1 / sigma^2 in the sum of squares
240  gsl_vector_set(outResultVec, i, (yi - DataPoint->dps[i].y) / DataPoint->dps[i].sigma);
241  }
242 
243  return GSL_SUCCESS;
244 }
245 
246 // Calculates the Jacobian (derivative) matrix for the parabola equation f(x) = a + b*(x-c)^2
247 // dy/da = 1
248 // dy/db = (x - c)^2
249 // dy/dc = -2b * (x - c)
250 int parJx(const gsl_vector * X, void * inParams, gsl_matrix * J)
251 {
252  CurveFitting::DataPointT * DataPoint = ((struct CurveFitting::DataPointT *)inParams);
253 
254  // Store current coefficients
255  double b = gsl_vector_get(X, B_IDX);
256  double c = gsl_vector_get(X, C_IDX);
257 
258  for(int i = 0; i < DataPoint->dps.size(); ++i)
259  {
260  // Calculate the Jacobian Matrix
261  const double oneBySigma = 1.0f / DataPoint->dps[i].sigma;
262  const double x = DataPoint->dps[i].x;
263 
264  gsl_matrix_set(J, i, 0, oneBySigma);
265  gsl_matrix_set(J, i, 1, oneBySigma * (x - c) * (x - c));
266  gsl_matrix_set(J, i, 2, -oneBySigma * 2.0 * b * (x - c));
267  }
268 
269  return GSL_SUCCESS;
270 }
271 
272 // Invokes F(x) and J(x)
273 int parFJx(const gsl_vector * x, void * inParams, gsl_vector * f, gsl_matrix * J)
274 {
275  parFx(x, inParams, f);
276  parJx(x, inParams, J);
277 
278  return GSL_SUCCESS;
279 }
280 
281 } // namespace
282 
283 CurveFitting::CurveFitting()
284 {
285  // Constructor just initialises variables
286  firstSolverRun = true;
287 }
288 
289 void CurveFitting::fitCurve(const QVector<int> &x_, const QVector<double> &y_, const QVector<double> &sigma_,
290  const CurveFit curveFit, const bool useWeights)
291 {
292  if ((x_.size() != y_.size()) || (x_.size() != sigma_.size()))
293  qCDebug(KSTARS_EKOS_FOCUS) << QString("CurveFitting::fitCurve inconsistent parameters. x=%1, y=%2, sigma=%3")
294  .arg(x_.size()).arg(y_.size()).arg(sigma_.size());
295 
296  m_x.clear();
297  for (int i = 0; i < x_.size(); ++i)
298  m_x.push_back(static_cast<double>(x_[i]));
299  m_y = y_;
300  m_sigma = sigma_;
301 
302  m_CurveType = curveFit;
303  switch (m_CurveType)
304  {
305  case FOCUS_QUADRATIC :
306  m_coefficients = polynomial_fit(m_x.data(), m_y.data(), m_x.count(), 2);
307  break;
308  case FOCUS_HYPERBOLA :
309  m_coefficients = hyperbola_fit(m_x, m_y, m_sigma, useWeights);
310  break;
311  case FOCUS_PARABOLA :
312  m_coefficients = parabola_fit(m_x, m_y, m_sigma, useWeights);
313  break;
314  default :
315  // Something went wrong, log an error and reset state so solver starts from scratch if called again
316  qCDebug(KSTARS_EKOS_FOCUS) << QString("CurveFitting::fitCurve called with unknown curveFit=%1").arg(curveFit);
317  firstSolverRun = true;
318  return;
319  }
320  lastCoefficients = m_coefficients;
321  lastCurveType = m_CurveType;
322  firstSolverRun = false;
323 }
324 
325 double CurveFitting::curveFunction(double x, void *params)
326 {
327  CurveFitting *instance = static_cast<CurveFitting *>(params);
328 
329  if (instance && !instance->m_coefficients.empty())
330  return instance->f(x);
331  else
332  return -1;
333 }
334 
335 double CurveFitting::f(double x)
336 {
337  const int order = m_coefficients.size() - 1;
338  double y = 0;
339  if (m_CurveType == FOCUS_QUADRATIC)
340  {
341  for (int i = 0; i <= order; ++i)
342  y += m_coefficients[i] * pow(x, i);
343  }
344  else if (m_CurveType == FOCUS_HYPERBOLA && m_coefficients.size() == NUM_HYPERBOLA_PARAMS)
345  y = hypfx(x, m_coefficients[A_IDX], m_coefficients[B_IDX], m_coefficients[C_IDX], m_coefficients[D_IDX]);
346  else if (m_CurveType == FOCUS_PARABOLA && m_coefficients.size() == NUM_PARABOLA_PARAMS)
347  y = parfx(x, m_coefficients[A_IDX], m_coefficients[B_IDX], m_coefficients[C_IDX]);
348  return y;
349 }
350 
351 QVector<double> CurveFitting::polynomial_fit(const double *const data_x, const double *const data_y, const int n,
352  const int order)
353 {
354  int status = 0;
355  double chisq = 0;
356  QVector<double> vc;
357  gsl_vector *y, *c;
358  gsl_matrix *X, *cov;
359  y = gsl_vector_alloc(n);
360  c = gsl_vector_alloc(order + 1);
361  X = gsl_matrix_alloc(n, order + 1);
362  cov = gsl_matrix_alloc(order + 1, order + 1);
363 
364  for (int i = 0; i < n; i++)
365  {
366  for (int j = 0; j < order + 1; j++)
367  {
368  gsl_matrix_set(X, i, j, pow(data_x[i], j));
369  }
370  gsl_vector_set(y, i, data_y[i]);
371  }
372 
373  // Must turn off error handler or it aborts on error
374  gsl_set_error_handler_off();
375 
376  gsl_multifit_linear_workspace *work = gsl_multifit_linear_alloc(n, order + 1);
377  status = gsl_multifit_linear(X, y, c, cov, &chisq, work);
378 
379  // Free the workspace (results are returned in c and cov, so this is safe to do now)
380  gsl_multifit_linear_free(work);
381  if (status != GSL_SUCCESS)
382  qDebug() << Q_FUNC_INFO << "GSL multifit error:" << gsl_strerror(status);
383  else
384  {
385  for (int i = 0; i < order + 1; i++)
386  {
387  vc.push_back(gsl_vector_get(c, i));
388  }
389  }
390 
391  gsl_vector_free(y);
392  gsl_vector_free(c);
393  gsl_matrix_free(X);
394  gsl_matrix_free(cov);
395 
396  return vc;
397 }
398 
399 QVector<double> CurveFitting::hyperbola_fit(const QVector<double> data_x, const QVector<double> data_y,
400  const QVector<double> data_sigma,
401  const bool useWeights)
402 {
403  QVector<double> vc;
404 
405  DataPointT dataPoints;
406 
407  qCDebug(KSTARS_EKOS_FOCUS) <<
408  QString("Starting Levenberg-Marquardt solver, fit=hyperbola, Iterations= %1, Precision abs/rel=%2/%3...")
409  .arg(MAX_ITERATIONS).arg(INEPSABS).arg(INEPSREL);
410 
411  // Fill in the data to which the curve will be fitted
412  dataPoints.useWeights = useWeights;
413  for (int i = 0; i < data_x.size(); i++)
414  dataPoints.push_back(data_x[i], data_y[i], (useWeights && data_sigma[i] > 1e-8) ? data_sigma[i] : 1.0);
415 
416  // Set the gsl error handler off as it aborts the program on error.
417  gsl_set_error_handler_off();
418 
419  // Fill in function info
420  gsl_multifit_function_fdf f;
421  f.f = hypFx;
422  f.df = hypJx;
423  f.fdf = hypFJx;
424  f.n = data_x.size();
425  f.p = NUM_HYPERBOLA_PARAMS;
426  f.params = &dataPoints;
427 
428  // Allocate the guess vector
429  gsl_vector * guess = gsl_vector_alloc(NUM_HYPERBOLA_PARAMS);
430  // Make initial guesses - either reuse the previous solution or use a simple heuristic (see hypMakeGuess)
431  hypMakeGuess(data_x, data_y, guess);
432  // Create a Levenberg-Marquardt solver with n data points and 4 parameters
433  gsl_multifit_fdfsolver * solver = gsl_multifit_fdfsolver_alloc(gsl_multifit_fdfsolver_lmsder, data_x.size(),
434  NUM_HYPERBOLA_PARAMS);
435  gsl_multifit_fdfsolver_set(solver, & f, guess); // Initialize the solver
436 
437  int status, i = 0;
438 
439  // Iterate to find a result
440  do
441  {
442  i++;
443  // Iterate through the solver. Function returns GSL_SUCCESS = 0 on success
444  status = gsl_multifit_fdfsolver_iterate(solver);
445 
446  //qCDebug(KSTARS_EKOS_FOCUS) << QString("LM solver (Hyperbola): Iteration %1 status=%2 [%3] f=%4")
447  // .arg(i).arg(status).arg(gsl_strerror(status)).arg(gsl_blas_dnrm2(solver->f));
448  if (status)
449  break;
450 
451  status = gsl_multifit_test_delta(solver->dx, solver->x, INEPSABS, INEPSREL);
452  }
453  while (status == GSL_CONTINUE && i < MAX_ITERATIONS);
454 
455  if (status != 0)
456  qCDebug(KSTARS_EKOS_FOCUS) << QString("LM solver (Hyperbola): Failed with status=%1 [%2] after %3/%4 iterations")
457  .arg(status).arg(gsl_strerror(status)).arg(i).arg(MAX_ITERATIONS);
458  else if (i == MAX_ITERATIONS)
459  qCDebug(KSTARS_EKOS_FOCUS) << QString("LM solver (Hyperbola): Failed to converge after %1 iterations").arg(i);
460  else
461  {
462  // All good so store the results - parameters A, B, C and D
463  for (int j = 0; j < NUM_HYPERBOLA_PARAMS; j++)
464  {
465  vc.push_back(gsl_vector_get(solver->x, j));
466  }
467  qCDebug(KSTARS_EKOS_FOCUS) << QString("LM Solver (Hyperbola): Solution found after %1 iterations. A=%2 B=%3, C=%4, D=%5")
468  .arg(i).arg(vc[A_IDX]).arg(vc[B_IDX]).arg(vc[C_IDX]).arg(vc[D_IDX]);
469  }
470 
471  // Free GSL memory
472  gsl_multifit_fdfsolver_free(solver);
473  gsl_vector_free(guess);
474 
475  return vc;
476 }
477 
478 // Initialise parameters before starting the solver
479 void CurveFitting::hypMakeGuess(const QVector<double> inX, const QVector<double> inY, gsl_vector * guess)
480 {
481  if (inX.size() < 1)
482  return;
483 
484  if (!firstSolverRun && (lastCurveType == FOCUS_HYPERBOLA) && (lastCoefficients.size() == NUM_HYPERBOLA_PARAMS))
485  {
486  // Last run of the solver was a Hyperbola and the solution was good, so use that solution
487  gsl_vector_set(guess, A_IDX, lastCoefficients[A_IDX]);
488  gsl_vector_set(guess, B_IDX, lastCoefficients[B_IDX]);
489  gsl_vector_set(guess, C_IDX, lastCoefficients[C_IDX]);
490  gsl_vector_set(guess, D_IDX, lastCoefficients[D_IDX]);
491  }
492  else
493  {
494  // Find min HFD -> good start value for c. b > 0 and b + d > 0
495  int i;
496  double minX, minY;
497 
498  minX = inX[0];
499  minY = inY[0];
500  for(i = 0; i < inX.size(); i++)
501  {
502  if(inY[i] < minY)
503  {
504  minX = inX[i];
505  minY = inY[i];
506  }
507  };
508  gsl_vector_set(guess, A_IDX, 1.0);
509  gsl_vector_set(guess, B_IDX, 1.0);
510  gsl_vector_set(guess, C_IDX, minX);
511  gsl_vector_set(guess, D_IDX, 1.0);
512  }
513 }
514 
515 QVector<double> CurveFitting::parabola_fit(const QVector<double> data_x, const QVector<double> data_y,
516  const QVector<double> data_sigma,
517  bool useWeights)
518 {
519  QVector<double> vc;
520  DataPointT dataPoints;
521 
522  qCDebug(KSTARS_EKOS_FOCUS) <<
523  QString("Starting Levenberg-Marquardt solver, fit=parabola, Iterations= %1, Precision abs/rel=%2/%3...")
524  .arg(MAX_ITERATIONS).arg(INEPSABS).arg(INEPSREL);
525 
526  // Fill in the data to which the curve will be fitted
527  dataPoints.useWeights = useWeights;
528  for (int i = 0; i < data_x.size(); i++)
529  dataPoints.push_back(data_x[i], data_y[i], (useWeights && data_sigma[i] > 1e-8) ? data_sigma[i] : 1.0);
530 
531  // Set the gsl error handler off as it aborts the program on error.
532  gsl_set_error_handler_off();
533 
534  // Fill in function info
535  gsl_multifit_function_fdf f;
536  f.f = parFx;
537  f.df = parJx;
538  f.fdf = parFJx;
539  f.n = data_x.size();
540  f.p = NUM_PARABOLA_PARAMS;
541  f.params = &dataPoints;
542 
543  // Allocate the guess vector
544  gsl_vector * guess = gsl_vector_alloc(NUM_PARABOLA_PARAMS);
545  // Make initial guesses - either reuse the previous solution or use a simple heuristic (see parMakeGuess)
546  parMakeGuess(data_x, data_y, guess);
547  // Create a Levenberg-Marquardt solver with n data points and 3 parameters
548  gsl_multifit_fdfsolver * solver = gsl_multifit_fdfsolver_alloc(gsl_multifit_fdfsolver_lmsder, data_x.size(),
549  NUM_PARABOLA_PARAMS);
550  gsl_multifit_fdfsolver_set(solver, & f, guess); // Initialize the solver
551 
552  int status, i = 0;
553 
554  // Iterate to find a result
555  do
556  {
557  i++;
558  // Iterate through the solver. Function returns GSL_SUCCESS = 0 on success
559  status = gsl_multifit_fdfsolver_iterate(solver);
560 
561  //qCDebug(KSTARS_EKOS_FOCUS) << QString("LM solver (Parabola): Iteration %1 status=%2 [%3] f=%4")
562  // .arg(i).arg(status).arg(gsl_strerror(status)).arg(gsl_blas_dnrm2(solver->f));
563  if (status)
564  break;
565 
566  status = gsl_multifit_test_delta(solver->dx, solver->x, INEPSABS, INEPSREL);
567  }
568  while (status == GSL_CONTINUE && i < MAX_ITERATIONS);
569 
570  if (status != 0)
571  qCDebug(KSTARS_EKOS_FOCUS) << QString("LM solver (Parabola): Failed with status=%1 [%2] after %3/%4 iterations")
572  .arg(status).arg(gsl_strerror(status)).arg(i).arg(MAX_ITERATIONS);
573  else if (i == MAX_ITERATIONS)
574  qCDebug(KSTARS_EKOS_FOCUS) << QString("LM solver (Parabola): Failed to converge after %1 iterations").arg(i);
575  else
576  {
577  // All good so store the results - parameters A, B and C
578  for (int j = 0; j < NUM_PARABOLA_PARAMS; j++)
579  {
580  vc.push_back(gsl_vector_get(solver->x, j));
581  }
582  qCDebug(KSTARS_EKOS_FOCUS) << QString("LM Solver (Parabola): Solution found after %1 iterations. A=%2 B=%3, C=%4")
583  .arg(i).arg(vc[A_IDX]).arg(vc[B_IDX]).arg(vc[C_IDX]);
584  }
585 
586  // Free GSL memory
587  gsl_multifit_fdfsolver_free(solver);
588  gsl_vector_free(guess);
589 
590  return vc;
591 }
592 
593 // Initialise parameters before starting the solver
594 void CurveFitting::parMakeGuess(const QVector<double> inX, const QVector<double> inY, gsl_vector * guess)
595 {
596  if (inX.size() < 1)
597  return;
598 
599  if (!firstSolverRun && (lastCurveType == FOCUS_PARABOLA) && (lastCoefficients.size() == NUM_PARABOLA_PARAMS))
600  {
601  // Last run of the solver was a Parabola and that solution was good, so use that solution
602  gsl_vector_set(guess, A_IDX, lastCoefficients[A_IDX]);
603  gsl_vector_set(guess, B_IDX, lastCoefficients[B_IDX]);
604  gsl_vector_set(guess, C_IDX, lastCoefficients[C_IDX]);
605  }
606  else
607  {
608  // Find min HFD -> good start value for c. a and b need to be positive.
609  int i;
610  double minX, minY;
611 
612  minX = inX[0];
613  minY = inY[0];
614  for(i = 0; i < inX.size(); i++)
615  {
616  if(inY[i] < minY)
617  {
618  minX = inX[i];
619  minY = inY[i];
620  }
621  };
622  gsl_vector_set(guess, A_IDX, 1.0);
623  gsl_vector_set(guess, B_IDX, 1.0);
624  gsl_vector_set(guess, C_IDX, minX);
625  }
626 }
627 
628 bool CurveFitting::findMin(double expected, double minPosition, double maxPosition, double *position, double *value,
629  CurveFit curveFit)
630 {
631  bool foundFit;
632  switch (curveFit)
633  {
634  case FOCUS_QUADRATIC :
635  foundFit = minimumQuadratic(expected, minPosition, maxPosition, position, value);
636  break;
637  case FOCUS_HYPERBOLA :
638  foundFit = minimumHyperbola(expected, minPosition, maxPosition, position, value);
639  break;
640  case FOCUS_PARABOLA :
641  foundFit = minimumParabola(expected, minPosition, maxPosition, position, value);
642  break;
643  default :
644  foundFit = false;
645  break;
646  }
647  if (!foundFit)
648  // If we couldn't fit a curve then something's wrong so reset coefficients which will force the next run of the LM solver to start from scratch
649  lastCoefficients.clear();
650  return foundFit;
651 }
652 
653 bool CurveFitting::minimumQuadratic(double expected, double minPosition, double maxPosition, double *position,
654  double *value)
655 {
656  int status;
657  int iter = 0, max_iter = 100;
658  const gsl_min_fminimizer_type *T;
659  gsl_min_fminimizer *s;
660  double m = expected;
661 
662  gsl_function F;
663  F.function = &CurveFitting::curveFunction;
664  F.params = this;
665 
666  // Must turn off error handler or it aborts on error
667  gsl_set_error_handler_off();
668 
669  T = gsl_min_fminimizer_brent;
670  s = gsl_min_fminimizer_alloc(T);
671  status = gsl_min_fminimizer_set(s, &F, expected, minPosition, maxPosition);
672 
673  if (status != GSL_SUCCESS)
674  {
675  qCWarning(KSTARS_EKOS_FOCUS) << "Focus GSL error:" << gsl_strerror(status);
676  gsl_min_fminimizer_free(s);
677  return false;
678  }
679  do
680  {
681  iter++;
682  status = gsl_min_fminimizer_iterate(s);
683 
684  m = gsl_min_fminimizer_x_minimum(s);
685  minPosition = gsl_min_fminimizer_x_lower(s);
686  maxPosition = gsl_min_fminimizer_x_upper(s);
687 
688  status = gsl_min_test_interval(minPosition, maxPosition, 0.01, 0.0);
689 
690  if (status == GSL_SUCCESS)
691  {
692  *position = m;
693  *value = curveFunction(m, this);
694  }
695  }
696  while (status == GSL_CONTINUE && iter < max_iter);
697 
698  gsl_min_fminimizer_free(s);
699  return (status == GSL_SUCCESS);
700 }
701 
702 bool CurveFitting::minimumHyperbola(double expected, double minPosition, double maxPosition, double *position,
703  double *value)
704 {
705  Q_UNUSED(expected);
706  if (m_coefficients.size() != NUM_HYPERBOLA_PARAMS)
707  {
708  if (m_coefficients.size() != 0)
709  qCDebug(KSTARS_EKOS_FOCUS) << QString("Error in CurveFitting::minimumHyperbola coefficients.size()=%1").arg(
710  m_coefficients.size());
711  return false;
712  }
713 
714  double b = m_coefficients[B_IDX];
715  double c = m_coefficients[C_IDX];
716  double d = m_coefficients[D_IDX];
717 
718  // We need to check that the solution found is in the correct form.
719  // Check 1: The hyperbola minimum (=c) lies within the bounds of the focuser (and is > 0)
720  // Check 2: At the minimum position (=c), the value of f(x) (which is the HFR) given by b+d is > 0
721  // Check 3: We have a "u" shaped curve, not an "n" shape. b > 0.
722  if ((c >= minPosition) && (c <= maxPosition) && (b + d > 0.0) && (b > 0.0))
723  {
724  *position = c;
725  *value = b + d;
726  return true;
727  }
728  else
729  return false;
730 }
731 
732 bool CurveFitting::minimumParabola(double expected, double minPosition, double maxPosition, double *position, double *value)
733 {
734  Q_UNUSED(expected);
735  if (m_coefficients.size() != NUM_PARABOLA_PARAMS)
736  {
737  if (m_coefficients.size() != 0)
738  qCDebug(KSTARS_EKOS_FOCUS) << QString("Error in CurveFitting::minimumParabola coefficients.size()=%1").arg(
739  m_coefficients.size());
740  return false;
741  }
742 
743  double a = m_coefficients[A_IDX];
744  double b = m_coefficients[B_IDX];
745  double c = m_coefficients[C_IDX];
746 
747  // We need to check that the solution found is in the correct form.
748  // Check 1: The parabola minimum (=c) lies within the bounds of the focuser (and is > 0)
749  // Check 2: At the minimum position (=c), the value of f(x) (which is the HFR) given by a is > 0
750  // Check 3: We have a "u" shaped curve, not an "n" shape. b > 0.
751  if ((c >= minPosition) && (c <= maxPosition) && (a > 0.0) && (b > 0.0))
752  {
753  *position = c;
754  *value = a;
755  return true;
756  }
757  else
758  return false;
759 }
760 
761 // R2 (R squared) is the coefficient of determination; it gives a measure of how well the curve fits the datapoints.
762 // It lies between 0 and 1: 1 means that all datapoints lie on the curve, which therefore exactly describes the
763 // datapoints, and 0 means that there is no correlation between the curve and the datapoints.
764 // R2 = 1 - (sum of squared residuals) / (total sum of squares). See www.wikipedia.org/wiki/Coefficient_of_determination for more details.
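// A worked example with made-up numbers: datapoints y = {4, 2, 1, 2, 4} have mean 2.6, so the total
// sum of squares about the mean is 1.96 + 0.36 + 2.56 + 0.36 + 1.96 = 7.2. If the fitted curve
// predicts {3.8, 2.1, 1.0, 2.1, 3.8} the sum of squared residuals is 0.04 + 0.01 + 0 + 0.01 + 0.04 = 0.1,
// giving R2 = 1 - 0.1 / 7.2 = 0.986, i.e. a very good fit.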
765 double CurveFitting::calculateR2(CurveFit curveFit)
766 {
767  double R2 = 0.0;
768  QVector<double> curvePoints;
769  int i;
770  switch (curveFit)
771  {
772  case FOCUS_QUADRATIC :
773  // Not implemented
774  qCDebug(KSTARS_EKOS_FOCUS) << QString("Error in CurveFitting::calculateR2 called for Quadratic");
775  break;
776 
777  case FOCUS_HYPERBOLA :
778  // Calculate R2 for the hyperbola
779  if (m_coefficients.size() != NUM_HYPERBOLA_PARAMS)
780  {
781  qCDebug(KSTARS_EKOS_FOCUS) << QString("Error in CurveFitting::calculateR2 hyperbola coefficients.size()=%1").arg(
782  m_coefficients.size());
783  break;
784  }
785 
786  for (i = 0; i < m_y.size(); i++)
787  // Load up the curvePoints vector
788  curvePoints.push_back(hypfx(m_x[i], m_coefficients[A_IDX], m_coefficients[B_IDX], m_coefficients[C_IDX],
789  m_coefficients[D_IDX]));
790  // Do the actual R2 calculation
791  R2 = calcR2(m_y, curvePoints);
792  break;
793 
794  case FOCUS_PARABOLA :
795  // Calculate R2 for the parabola
796  if (m_coefficients.size() != NUM_PARABOLA_PARAMS)
797  {
798  qCDebug(KSTARS_EKOS_FOCUS) << QString("Error in CurveFitting::calculateR2 parabola coefficients.size()=%1").arg(
799  m_coefficients.size());
800  break;
801  }
802 
803  for (i = 0; i < m_y.size(); i++)
804  // Load up the curvePoints vector
805  curvePoints.push_back(parfx(m_x[i], m_coefficients[A_IDX], m_coefficients[B_IDX], m_coefficients[C_IDX]));
806 
807  // Do the actual R2 calculation
808  R2 = calcR2(m_y, curvePoints);
809  break;
810 
811  default :
812  qCDebug(KSTARS_EKOS_FOCUS) << QString("Error in CurveFitting::calculateR2 curveFit=%1").arg(curveFit);
813  break;
814  }
815  // R2 for linear function range 0<=R2<=1. For non-linear functions it is possible
816  // for R2 to be negative. This doesn't really mean anything so force 0 in these situations.
817  return std::max(R2, 0.0);
818 }
819 
820 // Do the maths to calculate R2 - how well the curve fits the datapoints
821 double CurveFitting::calcR2(QVector<double> dataPoints, QVector<double> curvePoints)
822 {
823  double R2 = 0.0, chisq = 0.0, sum = 0.0, totalSumSquares = 0.0, average;
824  int i;
825 
826  // Sum the datapoints and accumulate the sum of squared residuals (chisq)
827  for (i = 0; i < dataPoints.size(); i++)
828  {
829  sum += dataPoints[i];
830  chisq += pow((dataPoints[i] - curvePoints[i]), 2.0);
831  }
832  average = sum / dataPoints.size();
833 
834  for (i = 0; i < dataPoints.size(); i++)
835  {
836  totalSumSquares += pow((dataPoints[i] - average), 2.0);
837  }
838 
839  if (totalSumSquares > 0.0)
840  R2 = 1 - (chisq / totalSumSquares);
841  else
842  {
843  R2 = 0.0;
844  qCDebug(KSTARS_EKOS_FOCUS) << QString("Error in CurveFitting::calcR2 tss=%1").arg(totalSumSquares);
845  }
846  // R2 for linear function range 0<=R2<=1. For non-linear functions it is possible
847  // for R2 to be negative. This doesn't really mean anything so force 0 in these situations.
848  return std::max(R2, 0.0);
849 }
850 } // namespace
851 