ALGLIB is free software distributed under the GPL license, version 2 or (at your option) any later version. A copy of the GNU General Public License is available at http://www.fsf.org/licensing/licenses
This reference manual is licensed under a BSD-like documentation license:
Copyright 1994-2009 Sergey Bochkanov, ALGLIB Project. All rights reserved.
Redistribution and use of this document (ALGLIB Reference Manual) with or without modification, are permitted provided that such redistributions will retain the above copyright notice, this condition and the following disclaimer as the first (or last) lines of this file.
THIS DOCUMENTATION IS PROVIDED BY THE ALGLIB PROJECT "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE ALGLIB PROJECT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
ALGLIB Project provides two sources of information: ALGLIB Reference Manual (this document) and ALGLIB User Guide.
The ALGLIB Reference Manual contains a full description of all publicly accessible ALGLIB units, accompanied by examples. The Reference Manual is focused on the source code: it documents units, functions, structures and so on. If you want to know what unit YYY can do or what subroutines unit ZZZ contains, the Reference Manual is the place to go. Free software needs free documentation - that's why the ALGLIB Reference Manual is licensed under a BSD-like documentation license.
In addition to the Reference Manual we provide the User Guide. The User Guide focuses on more general questions: how fast is ALGLIB? how reliable is it? what are the strong and weak sides of the algorithms used? We aim to make the ALGLIB User Guide an important source of information both about ALGLIB and about numerical analysis algorithms in general. We want it to be a book about algorithms, not just software documentation. And we want it to be unique - that's why the ALGLIB User Guide is distributed under a less permissive personal-use-only license.
ALGLIB would not have been possible without the contributions of the following open source projects:
ALGLIB has a script-based compilation system. 'Script-based' means that you don't have to use configure or make to compile ALGLIB. The ALGLIB distribution contains Bash scripts and BAT-files (for Windows users) which will compile and test ALGLIB for you. Each script (build/check/...) is provided in two identical versions: as a Bash script and as a Windows batch file. The only difference is the set of compilers each script supports.
If you are a *nix user, your environment is ready for ALGLIB compilation. If you are a Windows user, make sure that your compiler is in your PATH. MSVC users have to execute vcvars32.bat, vcvarsx86_amd64.bat or vcvarsx86_ia64.bat (depending on the hardware they use) in a shell window. These scripts are located in the MS SDK or MS Visual Studio directories.
To compile ALGLIB you just need to cd into the ALGLIB directory and execute the compilation script ./build (or build.bat for Windows users). You must specify the compiler name, and you can specify additional compiler parameters. Multiple parameters must be enclosed in double quotes.
./build | returns full list of compilers supported |
./build gcc | compilation using GCC |
build.bat msvc | compilation using MSVC |
./build gcc -m32 | one additional parameter, no quotes |
./build gcc "-m32 -march=pentium4 -mfpmath=sse -O3" | multiple parameters in double quotes |
build.bat msvc "/O2 /fp:strict" | multiple parameters in double quotes |
./build gcc -m32 -march=pentium4 -mfpmath=sse -O3 | error: no double quotes |
build.bat msvc /O2 /fp:strict | error: no double quotes |
Successful compilation will be completely silent. Compiler messages are redirected to log.txt in the ALGLIB root directory. In case of an error you will get a short message, but the most detailed information will be in the log file. After compilation is done, the header files and the libalglib static library are copied to the out folder of the ALGLIB root directory. ALGLIB is ready to use!
NOTE 1: for *nix users. The ALGLIB compilation system does not contain anything similar to make install. You can set up additional search paths so your compiler will know where it can find libalglib.a and the headers.
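For example, with GCC the search paths can be supplied directly on the command line (the directory names below are purely illustrative - substitute the location of your ALGLIB tree):
g++ myprogram.cpp -I/path/to/alglib/out -L/path/to/alglib/out -lalglib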
NOTE 2: You can always compile ALGLIB yourself, without the script-based compilation system. Just pass all cpp files from the src directory to the compiler. ALGLIB is fairly portable and does not require sophisticated compiler settings.
After compilation, ALGLIB can be tested:
check.bat | short help |
check.bat msvc all | the whole library is tested |
check.bat msvc all_silent | silent mode, only errors are echo'ed |
./check gcc fft | fft.cpp is tested |
./check gcc fft "-m32 -O3" | custom parameters are passed |
Some units are accompanied by examples which you can execute. Example sources are stored in the ./examples directory. They can be modified - feel free to experiment with them.
example.bat list autogk | list examples for autogk.cpp unit |
example.bat view autogk_smooth | view example source |
./example gcc autogk_smooth | execute example |
./example gcc autogk_smooth "-m32 -O3" | custom parameters are passed |
What is stdafx.h here for?
MSVC and some other compilers require the #include <stdafx.h> directive in the program code to manage precompiled headers, and create the stdafx.h file when generating a new project. However, some compilers (e.g. BCB) use other tools to manage precompiled headers. In this case the #include <stdafx.h> directive as such doesn't hinder their operation, but if a file with this name is absent, a compilation error occurs. The blank file called stdafx.h is created to avoid this error. If your development environment has already created the file, leave it unchanged.
Imagine you are addressing a matrix element in the common notation a[x][y] instead of a(x,y). In that case two index operators are called instead of one. The first indexes the matrix by x and returns a reference to a temporary structure describing the matrix row. The second indexes that temporary structure by y and returns a reference to the needed element. Addressing through the overloaded round brackets is much more efficient, as no temporary structures are required.
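As a minimal illustration (using the real_2d_array class and the setlength method documented in the AP library section below), round-bracket access looks like this:
ap::real_2d_array a;
a.setlength(2, 2);              // zero-based 2x2 matrix
a(0,0) = 1.0;  a(0,1) = 2.0;    // each access is a single operator() call,
a(1,0) = 3.0;  a(1,1) = 4.0;    // no temporary row objects are created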
AP library is a generic name for a set of libraries in several programming languages that perform low-level, language-specific tasks. The AP library carries out tasks such as working with dynamic one- and multidimensional arrays in languages which do not support this data type, contains implementations of basic linear algebra algorithms, and so on. The library is distributed as source code under the GPL 2+ license (GPL 2 or later). The library is bundled with the ALGLIB package.
Optimization, integration and other similar methods share one common trait: they need a way of calculating the value of a user-defined function at a point chosen by the method.
The most convenient way of solving this problem is passing a function pointer into the module. However, bear in mind that the ALGLIB package is written in pseudocode that is automatically translated into different programming languages, and each language has its own analog of function pointers, often quite different from the others. When the ALGLIB pseudocode was developed, at some point it became clear that adding function pointers to it would be very complex, as this feature is implemented differently in every language. This is why a different solution - reverse communication - was chosen.
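The resulting calling pattern is illustrated by the integration examples later in this manual: the algorithm returns control to the caller whenever it needs a function value, the caller computes it and resumes the iteration. A minimal sketch using the autogk unit documented below:
autogkstate state;
autogkreport rep;
double v;

autogksmooth(0.0, 1.0, state);          // initialize the integrator on [0,1]
while(autogkiteration(state))           // the algorithm asks for F at state.x
{
    state.f = exp(-state.x*state.x);    // the caller supplies the value and resumes
}
autogkresults(state, v, rep);           // retrieve the result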
The ALGLIB Project is aimed at creating a convenient and efficient multilingual scientific software library.
The ALGLIB package:
AlgoPascal is a programming language designed specifically for this project. Programs written in this language are processed by an automatic translator and converted into other programming languages. Almost all ALGLIB source code is produced by the AlgoPascal translator.
ap::ap_error class
ap::template_1d_array class
ap::template_2d_array class
ap::complex class
This document describes the C++ version of the AP library. The AP library for C++ contains a basic set of mathematical functions and classes needed to compile the ALGLIB package. The library consists of a single module, ap.cpp.
AP_ASSERT
This symbol enables checking of array boundaries. If it is set by the "define" directive, then every access to a dynamic array element verifies that the index passed is correct. In case of error an ap::ap_error exception is thrown. Checking the array boundaries makes the program more reliable, but slows it down.
NO_AP_ASSERT
This symbol disables checking of array boundaries. If it is set by the "define" directive, then indices are not checked against the array boundaries when dynamic array elements are accessed.
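For example, bounds checking might be enabled like this (a sketch; it assumes the AP library header is named ap.h and that the symbol is defined before the header is included - it can equally be defined in the project settings or on the compiler command line):
#define AP_ASSERT          // enable checking of array boundaries
#include "ap.h"

int main()
{
    ap::real_1d_array a;
    a.setlength(3);        // valid indexes are 0..2
    try
    {
        a(5) = 1.0;        // out-of-bounds access: ap::ap_error is thrown
    }
    catch(const ap::ap_error &e)
    {
        // e.msg may contain additional information in textual form
    }
    return 0;
}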
ap::machineepsilon
The constant represents the accuracy of machine operations; it equals the machine epsilon multiplied by some small number r>1, i.e. it is an upper ("oversized") estimate.
ap::maxrealnumber
The constant represents the largest value of a positive real number which can be represented on the machine. The constant may be taken "oversized", that is, the real boundary can be even higher.
ap::minrealnumber
The constant represents the smallest value of a positive real number which can be represented on the machine. The constant may be taken "oversized", that is, the real boundary can be even lower.
int ap::sign(double x)
Returns:
+1, if X>0
-1, if X<0
0, if X=0
double ap::randomreal()
Returns a random real number from half-interval [0,1).
int ap::randominteger(int maxv)
Returns a random integer between 0 and maxv-1.
double ap::round(double x)
Returns the nearest integer to x. If x is right in the middle between two integers, then the function result depends on the implementation.
double ap::trunc(double x)
Truncates the fractional part of x.
trunc(1.3) = 1
trunc(-1.3)= -1
double ap::pi()
Returns the constant π
double ap::sqr(double x)
Returns x².
double ap::maxreal(double m1, double m2)
Returns the maximum of two real numbers.
double ap::minreal(double m1, double m2)
Returns the minimum of two real numbers.
int ap::maxint(int m1, int m2)
Returns the maximum of two integers.
int ap::minint(int m1, int m2)
Returns the minimum of two integers.
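A few of these functions and constants in use (an illustrative snippet only):
double eps = ap::machineepsilon;        // building block for stopping criteria
double big = ap::maxrealnumber;         // largest representable positive real
double r   = ap::randomreal();          // uniform random number from [0,1)
int    k   = ap::randominteger(10);     // random integer from 0 to 9
double s   = ap::sqr(3.0);              // 9.0
double m   = ap::maxreal(s, ap::pi());  // 9.0
int    sg  = ap::sign(-2.5);            // -1
double t   = ap::trunc(-1.3);           // -1.0
double rd  = ap::round(0.5);            // nearest integer; tie handling is implementation-dependent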
ap::ap_error class
This is the exception class which is thrown when various errors occur in the AP library, for example when an array index is found to be incorrect while array boundary checking is enabled. The class contains one member - the msg field, which may contain additional information in textual form.
First we will discuss general principles of working with array classes, then describe the classes and their methods.
Classes of the standard library allow operations with matrices and vectors (one-dimensional and two-dimensional arrays) of variable size and with variable numeration of elements, that is, the array numeration can start at any number, end at any number and change dynamically. Because the array classes are templates, arrays of the same dimension have the same set of member functions. And as the member functions of arrays with different dimensions differ only in the number of arguments, there is little difference between two-dimensional and one-dimensional arrays.
Working with an array starts with the array creation. You should distinguish between creating an array class instance and allocating memory for the array. When creating the class instance, you can use the constructor without any parameters, which creates an empty array without any elements, or you can use the copy constructor and assignment operator that copy one array into another. If the array is created by the default constructor, it contains no elements, and an attempt to access them may cause a program failure. If, during the copy operation, the source array has no memory allocated for its elements, the destination array will contain no elements either. If the source array has memory allocated for its elements, the destination array will allocate the same amount of memory and copy the elements there. That is, the copy operation yields two independent arrays with identical contents.
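A short sketch of these semantics (setlength is described just below):
ap::real_1d_array a;          // default constructor: empty array, no elements
a.setlength(3);               // now a(0)..a(2) exist (contents undefined until assigned)
a(0) = 1;  a(1) = 2;  a(2) = 3;

ap::real_1d_array b(a);       // copy: b allocates its own storage with identical contents
b(0) = 100;                   // a(0) is still 1 - the two arrays are independent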
After an empty array has been created, you should allocate memory for its elements using the setlength method. The content of the newly created elements is not defined. If the setlength method is called for an array with already allocated memory, then, after its size changes, the newly allocated elements are also undefined and the old content is destroyed.
To access the array elements, an overloaded operator() is used. That is, the code accessing the element of array a with indexes i, j, k will look like a(i,j,k). Below is an example of factorial array calculation, illustrating work with arrays.
integer_1d_array factarr(int n)
{
    integer_1d_array result;
    result.setbounds(1, n);
    result(1) = 1;
    for(int i = 2; i <= n; i++)
        result(i) = result(i-1)*i;
    return result;
}
ap::template_1d_array class
This class is a template of a dynamic one-dimensional array with variable upper and lower boundaries. Based on this class, the following classes are constructed:
typedef template_1d_array<int>     integer_1d_array;
typedef template_1d_array<double>  real_1d_array;
typedef template_1d_array<bool>    boolean_1d_array;
typedef template_1d_array<complex> complex_1d_array;
template_1d_array()
Constructor. Creates an empty array.
~template_1d_array()
Destructor. Frees memory, which had been allocated for the array.
template_1d_array(const template_1d_array &rhs)
Copy constructor. Allocates separate storage and copies the source array content there.
const template_1d_array& operator=(const template_1d_array &rhs)
Assignment operator. Deletes the destination array content, frees the allocated memory, then allocates separate storage and copies the source array content there.
T& operator()(int i)
Addressing the i-th array element.
void setbounds(int iLow, int iHigh)
Memory allocation for the array. Deletes the array content, frees the allocated memory, then allocates separate storage for iHigh-iLow+1 elements. The element numeration in the new array starts from iLow and ends at iHigh. The content of the new array is not defined.
void setlength(int iLen)
Memory allocation for the array. Deletes the array content, frees the allocated memory, then allocates separate storage for iLen elements. The element numeration in the new array starts from zero. The content of the new array is not defined.
void setcontent(int iLow, int iHigh, const T *pContent)
The method is similar to the setbounds() method, but after allocating memory for the destination array it copies the content of pContent[] there.
T* getcontent()
const T* getcontent() const
Returns a pointer to the array storage. The data pointed to by the returned pointer can be changed, and the array content will change as well.
int getlowbound()
int gethighbound()
Get lower and upper boundaries.
raw_vector<T> getvector(int iStart, int iEnd)
The method is used by the basic linear algebra subroutines to get access to the internal memory of the array. The method returns an object holding a pointer to a part of the vector (starting from the element with index iStart and finishing with the element with index iEnd). If iEnd<iStart, an empty vector is considered to be set.
const_raw_vector<T> getvector(int iStart, int iEnd) const
The method is used by the basic linear algebra subroutines to get read-only access to the internal memory of the array. The method returns an object holding a pointer to a part of the vector (starting from the element with index iStart and finishing with the element with index iEnd). If iEnd<iStart, an empty vector is considered to be set.
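A brief sketch of typical calls to these methods (illustrative only):
double src[] = { 1.5, 2.5, 3.5 };
ap::real_1d_array v;

v.setcontent(0, 2, src);      // allocate elements 0..2 and copy src[] there
int lo = v.getlowbound();     // 0
int hi = v.gethighbound();    // 2
double *p = v.getcontent();   // pointer to internal storage
p[1] = 7.0;                   // v(1) is changed as well

v.setlength(5);               // reallocation: old content is destroyed,
                              // new elements are undefined until assigned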
ap::template_2d_array class
This class is a template of a dynamic two-dimensional array with variable upper and lower boundaries. Based on this class, the following classes are constructed:
typedef template_2d_array<int>     integer_2d_array;
typedef template_2d_array<double>  real_2d_array;
typedef template_2d_array<bool>    boolean_2d_array;
typedef template_2d_array<complex> complex_2d_array;
template_2d_array()
Constructor. Creates an empty array.
~template_2d_array()
Destructor. Frees memory, which had been allocated for the array.
template_2d_array(const template_2d_array &rhs)
Copy constructor. Allocates separate storage and copies the source array content there.
const template_2d_array& operator=(const template_2d_array &rhs)
Assignment operator. Deletes the destination array content, frees the allocated memory, then allocates separate storage and copies the source array content there.
T& operator()(int i1, int i2)
const T& operator()(int i1, int i2) const
Array element access.
void setbounds(int iLow1, int iHigh1, int iLow2, int iHigh2)
Memory allocation for the array. Deletes the array content, frees the allocated memory, then allocates separate storage for (iHigh1-iLow1+1)*(iHigh2-iLow2+1) elements. The element numeration in the new array starts from iLow1 and finishes at iHigh1 for the first dimension, and similarly for the second dimension. The content of the new array is not defined.
void setlength(int iLen1, int iLen2)
Same as setbounds, but makes a zero-based array allocation.
void setcontent(int iLow1, int iHigh1, int iLow2, int iHigh2, const T *pContent)
The method is similar to the setbounds() method, but after allocating memory for the destination array it copies the content of pContent[] there. The pContent array contains the two-dimensional array written row by row, that is, the first element is [iLow1, iLow2], then comes [iLow1, iLow2+1], and so on.
int getlowbound(int iBoundNum)
int gethighbound(int iBoundNum)
Get the lower and upper boundaries of the array along the dimension with number iBoundNum.
raw_vector<T> getcolumn(int iColumn, int iRowStart, int iRowEnd)
const_raw_vector<T> getcolumn(int iColumn, int iRowStart, int iRowEnd) const
Returns an object pointing to a part of a matrix column (from the row with index iRowStart to the row with index iRowEnd). The iColumn parameter must be a valid column number (that is, within the boundaries of the array). If iRowEnd<iRowStart, an empty column is considered to be set.
raw_vector<T> getrow(int iRow, int iColumnStart, int iColumnEnd)
const_raw_vector<T> getrow(int iRow, int iColumnStart, int iColumnEnd) const
Returns an object pointing to a part of a matrix row (from the column with index iColumnStart to the column with index iColumnEnd). The iRow parameter must be a valid row number (that is, within the boundaries of the array). If iColumnEnd<iColumnStart, an empty row is considered to be set.
int getstride() const
Returns stride (in bytes), i.e. span between first elements of adjacent rows.
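A short sketch of two-dimensional array usage (illustrative only):
ap::real_2d_array m;

m.setbounds(1, 3, 1, 2);      // rows numbered 1..3, columns numbered 1..2
for(int i = 1; i <= 3; i++)
    for(int j = 1; j <= 2; j++)
        m(i, j) = 10*i + j;   // element access through operator()

ap::real_2d_array copy(m);    // independent copy with the same bounds and contents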
The basic linear algebra subroutines included in the AP library are close in functionality to Level 1 BLAS, allowing the simplest operations on vectors and on matrix rows and columns to be performed.
The subroutines should be used in the following way. First you need to get an object of the raw_vector type or const_raw_vector type pointing to the part of the matrix or array being processed, using the methods getcolumn/getrow (for a matrix) or getvector (for an array). The object holds a pointer to the start of the row (or column), the number of elements in the processed row (column), and the interval between two adjacent elements. When the standard scheme for storing matrices in memory is used (that is, by rows), the interval between the elements of one row equals 1, and the interval between adjacent elements of one column equals the number of columns. The resulting object is passed as an argument to the corresponding subroutine, which performs operations on the matrix part pointed to by the object's internal pointer.
Below is the list of the basic linear algebra subroutines available in the AP library.
template<class T> T vdotproduct(const_raw_vector<T> v1, const_raw_vector<T> v2)
The subroutine calculates the scalar product of transferred vectors.
template<class T> void vmove(raw_vector<T> vdst, const_raw_vector<T> vsrc)
template<class T> void vmoveneg(raw_vector<T> vdst, const_raw_vector<T> vsrc)
template<class T, class T2> void vmove(raw_vector<T> vdst, const_raw_vector<T> vsrc, T2 alpha)
This subroutine set is used to copy the content of one vector into another using different methods: simple copy, copy multiplied by -1, copy multiplied by a number.
template<class T> void vadd(raw_vector<T> vdst, const_raw_vector<T> vsrc)
template<class T, class T2> void vadd(raw_vector<T> vdst, const_raw_vector<T> vsrc, T2 alpha)
This subroutine set is used to add one vector to another using different methods: simple addition, or addition of the vector multiplied by a number.
template<class T> void vsub(raw_vector<T> vdst, const_raw_vector<T> vsrc)
template<class T, class T2> void vsub(raw_vector<T> vdst, const_raw_vector<T> vsrc, T2 alpha)
This subroutine set is used to subtract one vector from another using different methods: simple subtraction, or subtraction of the vector multiplied by a number.
template<class T, class T2> void vmul(raw_vector<T> vdst, T2 alpha)
Multiplies vector by a number and stores the result in the same place.
If both operands are vectors/rows with the interval between elements equal to 1 and length equal to N, an alternative syntax can be used (an example follows the list of declarations below).
template<class T> T vdotproduct(const T *v1, const T *v2, int N)
template<class T> void vmove(T *vdst, const T *vsrc, int N)
template<class T> void vmoveneg(T *vdst, const T *vsrc, int N)
template<class T, class T2> void vmove(T *vdst, const T *vsrc, int N, T2 alpha)
template<class T> void vadd(T *vdst, const T *vsrc, int N)
template<class T, class T2> void vadd(T *vdst, const T *vsrc, int N, T2 alpha)
template<class T> void vsub(T *vdst, const T *vsrc, int N)
template<class T, class T2> void vsub(T *vdst, const T *vsrc, int N, T2 alpha)
template<class T, class T2> void vmul(T *vdst, int N, T2 alpha)
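For illustration, here the pointer-based forms are applied to whole zero-based arrays, whose elements are stored contiguously with unit stride (a sketch; the ap:: qualification is assumed here, as for the other AP library functions):
int n = 4;
ap::real_1d_array a, b;
a.setlength(n);
b.setlength(n);
for(int i = 0; i < n; i++)
{
    a(i) = i + 1;      // a = (1, 2, 3, 4)
    b(i) = 1.0;        // b = (1, 1, 1, 1)
}

double dot = ap::vdotproduct(&a(0), &b(0), n);   // 1+2+3+4 = 10
ap::vmove(&b(0), &a(0), n, 2.0);                 // b := 2*a
ap::vsub(&b(0), &a(0), n);                       // b := b - a, so b = a again
ap::vmul(&b(0), n, 0.5);                         // b := 0.5*b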
ap::complex class
The AP library includes the ap::complex class that allows operations with complex numbers. Access to the real and imaginary parts of a complex number is provided through the public fields x and y. Arithmetic operations are supported, just as with built-in data types, through operator overloading: addition, subtraction, multiplication and division. Addition, subtraction and multiplication are performed in the usual way (i.e., according to their definition, which can be found in any algebra textbook); division is performed using a so-called "safe" algorithm that can never cause overflow when calculating intermediate results. The library also includes several functions performing elementary operations on complex numbers.
const double abscomplex(const ap::complex &z)
Returns the modulus of complex number z. It should be noted that the modulus calculation is performed using so called "safe" algorithm, that could never cause overflow when calculating intermediate results.
const ap::complex conj(const ap::complex &z)
Returns complex conjugate of z.
const ap::complex csqr(const ap::complex &z)
Returns the square of z.
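A small illustrative snippet:
ap::complex a, b;
a.x = 1.0;  a.y =  2.0;          // a = 1+2i
b.x = 3.0;  b.y = -1.0;          // b = 3-i

ap::complex c = a*b + conj(a);   // overloaded arithmetic; c = (5+5i) + (1-2i) = 6+3i
double      r = abscomplex(c);   // modulus, computed by the "safe" algorithm
ap::complex s = csqr(b);         // square of b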
DataAnalysis package | ||
dforest | Decision forest classifier (regression model) | |
kmeans | K-means++ clustering | |
lda | Linear discriminant analysis | |
linreg | Linear models | |
logit | Logit models | |
mlpbase | Basic neural network operations | |
mlpe | Neural network ensemble models | |
mlptrain | Neural network training | |
pca | Principal component analysis | |
DiffEquations package | ||
odesolver | Ordinary differential equation solver | |
FastTransforms package | ||
conv | Fast real/complex convolution | |
corr | Fast real/complex cross-correlation | |
fft | Real/complex FFT | |
fht | Real Fast Hartley Transform | |
Integration package | ||
autogk | Adaptive 1-dimensional integration | |
gkq | Gauss-Kronrod quadrature generator | |
gq | Gaussian quadrature generator | |
Interpolation package | ||
idwint | Inverse distance weighting: interpolation/fitting | |
lsfit | Linear and nonlinear least-squares solvers | |
polint | Polynomial interpolation/fitting | |
pspline | Parametric spline interpolation | |
ratint | Rational interpolation/fitting | |
spline1d | 1D spline interpolation/fitting | |
spline2d | 2D spline interpolation | |
LinAlg package | ||
ablas | Level 2 and Level 3 BLAS operations | |
bdsvd | Bidiagonal SVD | |
evd | Eigensolvers | |
inverseupdate | Sherman-Morrison update of the inverse matrix | |
ldlt | LDLT decomposition | |
matdet | Determinant calculation | |
matgen | Random matrix generation | |
matinv | Matrix inverse | |
ortfac | Real/complex QR, LQ, bi(tri)diagonal, Hessenberg decompositions | |
rcond | Condition number estimate | |
schur | Schur decomposition | |
sdet | Determinant of a symmetric matrix | |
sinverse | Symmetric inversion | |
spdgevd | Generalized symmetric eigensolver | |
srcond | Condition number estimate for symmetric matrices | |
svd | Singular value decomposition | |
trfac | LU and Cholesky decompositions | |
Optimization package | ||
minasa | ASA bound constrained optimizer | |
mincg | Conjugate gradient optimizer | |
minlbfgs | Limited memory BFGS optimizer | |
minlm | Improved Levenberg-Marquardt optimizer | |
Other package | ||
nearestneighbor | Nearest neighbor search: approximate and exact | |
Solvers package | ||
densesolver | Dense linear system solver | |
ssolve | Symmetric dense linear system solver | |
SpecialFunctions package | ||
airyf | Airy functions | |
bessel | Bessel functions | |
betaf | Beta function | |
chebyshev | Chebyshev polynomials | |
dawson | Dawson integral | |
elliptic | Elliptic integrals | |
expintegrals | Exponential integrals | |
fresnel | Fresnel integrals | |
gammafunc | Gamma function | |
hermite | Hermite polynomials | |
ibetaf | Incomplete beta function | |
igammaf | Incomplete gamma function | |
jacobianelliptic | Jacobian elliptic functions | |
laguerre | Laguerre polynomials | |
legendre | Legendre polynomials | |
psif | Psi function | |
trigintegrals | Trigonometric integrals | |
Statistics package | ||
binomialdistr | Binomial distribution | |
chisquaredistr | Chi-Square distribution | |
correlation | Pearson/Spearman correlation coefficients | |
correlationtests | Hypothesis testing: correlation tests | |
descriptivestatistics | Descriptive statistics: mean, variance, etc. | |
fdistr | F-distribution | |
hqrnd | High quality random numbers generator | |
jarquebera | Hypothesis testing: Jarque-Bera test | |
mannwhitneyu | Hypothesis testing: Mann-Whitney-U test | |
normaldistr | Normal distribution | |
poissondistr | Poisson distribution | |
stest | Hypothesis testing: sign test | |
studenttdistr | Student's t-distribution | |
studentttests | Hypothesis testing: Student's t-test | |
variancetests | Hypothesis testing: F-test and one-sample variance test | |
wsr | Hypothesis testing: Wilcoxon signed rank test | |
ablas unit
ablasblocksize
function/************************************************************************* Returns block size - subdivision size where cache-oblivious soubroutines switch to the optimized kernel. INPUT PARAMETERS A - real matrix, is passed to ensure that we didn't split complex matrix using real splitting subroutine. matrix itself is not changed. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/int ablasblocksize(const ap::real_2d_array& a);
ablascomplexblocksize
function/************************************************************************* Block size for complex subroutines. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/int ablascomplexblocksize(const ap::complex_2d_array& a);
ablascomplexsplitlength
function/************************************************************************* Complex ABLASSplitLength -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void ablascomplexsplitlength(const ap::complex_2d_array& a, int n, int& n1, int& n2);
ablasmicroblocksize
function/************************************************************************* Microblock size -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/int ablasmicroblocksize();
ablassplitlength
function/************************************************************************* Splits matrix length in two parts, left part should match ABLAS block size INPUT PARAMETERS A - real matrix, is passed to ensure that we didn't split complex matrix using real splitting subroutine. matrix itself is not changed. N - length, N>0 OUTPUT PARAMETERS N1 - length N2 - length N1+N2=N, N1>=N2, N2 may be zero -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void ablassplitlength(const ap::real_2d_array& a, int n, int& n1, int& n2);
cmatrixcopy
function/************************************************************************* Copy Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/void cmatrixcopy(int m, int n, const ap::complex_2d_array& a, int ia, int ja, ap::complex_2d_array& b, int ib, int jb);
cmatrixgemm
function/************************************************************************* This subroutine calculates C = alpha*op1(A)*op2(B) +beta*C where: * C is MxN general matrix * op1(A) is MxK matrix * op2(B) is KxN matrix * "op" may be identity transformation, transposition, conjugate transposition Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. INPUT PARAMETERS N - matrix size, N>0 M - matrix size, N>0 K - matrix size, K>0 Alpha - coefficient A - matrix IA - submatrix offset JA - submatrix offset OpTypeA - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition B - matrix IB - submatrix offset JB - submatrix offset OpTypeB - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition Beta - coefficient C - matrix IC - submatrix offset JC - submatrix offset -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixgemm(int m, int n, int k, ap::complex alpha, const ap::complex_2d_array& a, int ia, int ja, int optypea, const ap::complex_2d_array& b, int ib, int jb, int optypeb, ap::complex beta, ap::complex_2d_array& c, int ic, int jc);
cmatrixlefttrsm
function/************************************************************************* This subroutine calculates op(A^-1)*X where: * X is MxN general matrix * A is MxM upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. Cache-oblivious algorithm is used. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, N>=0 A - matrix, actial matrix is stored in A[I1:I1+M-1,J1:J1+M-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition C - matrix, actial matrix is stored in C[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixlefttrsm(int m, int n, const ap::complex_2d_array& a, int i1, int j1, bool isupper, bool isunit, int optype, ap::complex_2d_array& x, int i2, int j2);
cmatrixmv
function/************************************************************************* Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) M>=0 N - number of columns of op(A) N>=0 A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T * OpA=2 => op(A) = A^H X - input vector IX - subvector offset IY - subvector offset OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/void cmatrixmv(int m, int n, ap::complex_2d_array& a, int ia, int ja, int opa, ap::complex_1d_array& x, int ix, ap::complex_1d_array& y, int iy);
cmatrixrank1
function/************************************************************************* Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/void cmatrixrank1(int m, int n, ap::complex_2d_array& a, int ia, int ja, ap::complex_1d_array& u, int iu, ap::complex_1d_array& v, int iv);
cmatrixrighttrsm
function/************************************************************************* This subroutine calculates X*op(A^-1) where: * X is MxN general matrix * A is NxN upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. Cache-oblivious algorithm is used. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, N>=0 A - matrix, actial matrix is stored in A[I1:I1+N-1,J1:J1+N-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition C - matrix, actial matrix is stored in C[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixrighttrsm(int m, int n, const ap::complex_2d_array& a, int i1, int j1, bool isupper, bool isunit, int optype, ap::complex_2d_array& x, int i2, int j2);
cmatrixsyrk
function/************************************************************************* This subroutine calculates C=alpha*A*A^H+beta*C or C=alpha*A^H*A+beta*C where: * C is NxN Hermitian matrix given by its upper/lower triangle * A is NxK matrix when A*A^H is calculated, KxN matrix otherwise Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. INPUT PARAMETERS N - matrix size, N>=0 K - matrix size, K>=0 Alpha - coefficient A - matrix IA - submatrix offset JA - submatrix offset OpTypeA - multiplication type: * 0 - A*A^H is calculated * 2 - A^H*A is calculated Beta - coefficient C - matrix IC - submatrix offset JC - submatrix offset IsUpper - whether C is upper triangular or lower triangular -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixsyrk(int n, int k, double alpha, const ap::complex_2d_array& a, int ia, int ja, int optypea, double beta, ap::complex_2d_array& c, int ic, int jc, bool isupper);
cmatrixtranspose
function/************************************************************************* Cache-oblivous complex "copy-and-transpose" Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) A - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/void cmatrixtranspose(int m, int n, const ap::complex_2d_array& a, int ia, int ja, ap::complex_2d_array& b, int ib, int jb);
rmatrixcopy
function/************************************************************************* Copy Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/void rmatrixcopy(int m, int n, const ap::real_2d_array& a, int ia, int ja, ap::real_2d_array& b, int ib, int jb);
rmatrixgemm
function/************************************************************************* Same as CMatrixGEMM, but for real numbers. OpType may be only 0 or 1. -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixgemm(int m, int n, int k, double alpha, const ap::real_2d_array& a, int ia, int ja, int optypea, const ap::real_2d_array& b, int ib, int jb, int optypeb, double beta, ap::real_2d_array& c, int ic, int jc);
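As an illustration, a real matrix-matrix product C := A*B with zero submatrix offsets and no transposition might be called as follows (a sketch; the parameters follow the order of the declaration above):
int m = 2, n = 2, k = 2;
ap::real_2d_array a, b, c;
a.setlength(m, k);
b.setlength(k, n);
c.setlength(m, n);

a(0,0) = 1; a(0,1) = 2;       // A = [1 2; 3 4]
a(1,0) = 3; a(1,1) = 4;
b(0,0) = 5; b(0,1) = 6;       // B = [5 6; 7 8]
b(1,0) = 7; b(1,1) = 8;

// C := 1.0*A*B + 0.0*C; with Beta=0 the initial contents of C are not referenced
rmatrixgemm(m, n, k, 1.0,
            a, 0, 0, 0,       // A, offsets IA=JA=0, OpTypeA=0 (no transformation)
            b, 0, 0, 0,       // B, offsets IB=JB=0, OpTypeB=0
            0.0, c, 0, 0);    // Beta, C, offsets IC=JC=0
// now C = [19 22; 43 50]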
rmatrixlefttrsm
function/************************************************************************* Same as CMatrixLeftTRSM, but for real matrices OpType may be only 0 or 1. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixlefttrsm(int m, int n, const ap::real_2d_array& a, int i1, int j1, bool isupper, bool isunit, int optype, ap::real_2d_array& x, int i2, int j2);
rmatrixmv
function/************************************************************************* Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) N - number of columns of op(A) A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T X - input vector IX - subvector offset IY - subvector offset OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/void rmatrixmv(int m, int n, ap::real_2d_array& a, int ia, int ja, int opa, ap::real_1d_array& x, int ix, ap::real_1d_array& y, int iy);
rmatrixrank1
function/************************************************************************* Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/void rmatrixrank1(int m, int n, ap::real_2d_array& a, int ia, int ja, ap::real_1d_array& u, int iu, ap::real_1d_array& v, int iv);
rmatrixrighttrsm
function/************************************************************************* Same as CMatrixRightTRSM, but for real matrices OpType may be only 0 or 1. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixrighttrsm(int m, int n, const ap::real_2d_array& a, int i1, int j1, bool isupper, bool isunit, int optype, ap::real_2d_array& x, int i2, int j2);
rmatrixsyrk
function/************************************************************************* Same as CMatrixSYRK, but for real matrices OpType may be only 0 or 1. -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixsyrk(int n, int k, double alpha, const ap::real_2d_array& a, int ia, int ja, int optypea, double beta, ap::real_2d_array& c, int ic, int jc, bool isupper);
rmatrixtranspose
function/************************************************************************* Cache-oblivous real "copy-and-transpose" Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) A - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/void rmatrixtranspose(int m, int n, const ap::real_2d_array& a, int ia, int ja, ap::real_2d_array& b, int ib, int jb);
airyf unit
airy
function/************************************************************************* Airy function Solution of the differential equation y"(x) = xy. The function returns the two independent solutions Ai, Bi and their first derivatives Ai'(x), Bi'(x). Evaluation is by power series summation for small x, by rational minimax approximations for large x. ACCURACY: Error criterion is absolute when function <= 1, relative when function > 1, except * denotes relative error criterion. For large negative x, the absolute error increases as x^1.5. For large positive x, the relative error increases as x^1.5. Arithmetic domain function # trials peak rms IEEE -10, 0 Ai 10000 1.6e-15 2.7e-16 IEEE 0, 10 Ai 10000 2.3e-14* 1.8e-15* IEEE -10, 0 Ai' 10000 4.6e-15 7.6e-16 IEEE 0, 10 Ai' 10000 1.8e-14* 1.5e-15* IEEE -10, 10 Bi 30000 4.2e-15 5.3e-16 IEEE -10, 10 Bi' 30000 4.9e-15 7.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/void airy(double x, double& ai, double& aip, double& bi, double& bip);
autogk unit
autogkreport
structure/************************************************************************* Integration report: * TerminationType = completetion code: * -5 non-convergence of Gauss-Kronrod nodes calculation subroutine. * -1 incorrect parameters were specified * 1 OK * Rep.NFEV countains number of function calculations * Rep.NIntervals contains number of intervals [a,b] was partitioned into. *************************************************************************/struct autogkreport { int terminationtype; int nfev; int nintervals; };
autogkstate
structure/************************************************************************* This structure stores internal state of the integration algorithm between subsequent calls of the AutoGKIteration() subroutine. *************************************************************************/struct autogkstate { double a; double b; double alpha; double beta; double xwidth; double x; double xminusa; double bminusx; double f; int wrappermode; autogkinternalstate internalstate; ap::rcommstate rstate; double v; int terminationtype; int nfev; int nintervals; };
autogkiteration
function/************************************************************************* One step of adaptive integration process. Called after initialization with one of AutoGKXXX subroutines. See HTML documentation for examples. Input parameters: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with one of AutoGKXXX subroutines. If suborutine returned False, iterative proces has converged. If subroutine returned True, caller should calculate function value State.F at State.X and call AutoGKIteration again. NOTE: When integrating "difficult" functions with integrable singularities like F(x) = (x-A)^alpha * (B-x)^beta subroutine may require the value of F at points which are too close to A/B. Sometimes to calculate integral with high enough precision we may need to calculate F(A+delta) when delta is less than machine epsilon. In finite precision arithmetics A+delta will be effectively equal to A, so we may find us in situation when we are trying to calculate something like 1/sqrt(1-1). To avoid such situations, AutoGKIteration subroutine fills not only State.X field, but also State.XMinusA (which equals to X-A) and State.BMinusX (which equals to B-X) fields. If X is too close to A or B (X-A<0.001*A, or B-X<0.001*B, for example) use these fields instead of State.X -- ALGLIB -- Copyright 07.05.2009 by Bochkanov Sergey *************************************************************************/bool autogkiteration(autogkstate& state);
Examples: autogk_singular autogk_smooth
autogkresults
function/************************************************************************* Adaptive integration results Called after AutoGKIteration returned False. Input parameters: State - algorithm state (used by AutoGKIteration). Output parameters: V - integral(f(x)dx,a,b) Rep - optimization report (see AutoGKReport description) -- ALGLIB -- Copyright 14.11.2007 by Bochkanov Sergey *************************************************************************/void autogkresults(const autogkstate& state, double& v, autogkreport& rep);
Examples: autogk_singular autogk_smooth
autogksingular
function/************************************************************************* Integration on a finite interval [A,B]. Integrand have integrable singularities at A/B. F(X) must diverge as "(x-A)^alpha" at A, as "(B-x)^beta" at B, with known alpha/beta (alpha>-1, beta>-1). If alpha/beta are not known, estimates from below can be used (but these estimates should be greater than -1 too). One of alpha/beta variables (or even both alpha/beta) may be equal to 0, which means than function F(x) is non-singular at A/B. Anyway (singular at bounds or not), function F(x) is supposed to be continuous on (A,B). Fast-convergent algorithm based on a Gauss-Kronrod formula is used. Result is calculated with accuracy close to the machine precision. INPUT PARAMETERS: A, B - interval boundaries (A<B, A=B or A>B) Alpha - power-law coefficient of the F(x) at A, Alpha>-1 Beta - power-law coefficient of the F(x) at B, Beta>-1 OUTPUT PARAMETERS State - structure which stores algorithm state between subsequent calls of AutoGKIteration. Used for reverse communication. This structure should be passed to the AutoGKIteration subroutine. SEE ALSO AutoGKSmooth, AutoGKSmoothW, AutoGKIteration, AutoGKResults. -- ALGLIB -- Copyright 06.05.2009 by Bochkanov Sergey *************************************************************************/void autogksingular(double a, double b, double alpha, double beta, autogkstate& state);
Examples: autogk_singular
autogksmooth
function/************************************************************************* Integration of a smooth function F(x) on a finite interval [a,b]. Fast-convergent algorithm based on a Gauss-Kronrod formula is used. Result is calculated with accuracy close to the machine precision. Algorithm works well only with smooth integrands. It may be used with continuous non-smooth integrands, but with less performance. It should never be used with integrands which have integrable singularities at lower or upper limits - algorithm may crash. Use AutoGKSingular in such cases. INPUT PARAMETERS: A, B - interval boundaries (A<B, A=B or A>B) OUTPUT PARAMETERS State - structure which stores algorithm state between subsequent calls of AutoGKIteration. Used for reverse communication. This structure should be passed to the AutoGKIteration subroutine. SEE ALSO AutoGKSmoothW, AutoGKSingular, AutoGKIteration, AutoGKResults. -- ALGLIB -- Copyright 06.05.2009 by Bochkanov Sergey *************************************************************************/void autogksmooth(double a, double b, autogkstate& state);
Examples: autogk_smooth
autogksmoothw
function/************************************************************************* Integration of a smooth function F(x) on a finite interval [a,b]. This subroutine is same as AutoGKSmooth(), but it guarantees that interval [a,b] is partitioned into subintervals which have width at most XWidth. Subroutine can be used when integrating nearly-constant function with narrow "bumps" (about XWidth wide). If "bumps" are too narrow, AutoGKSmooth subroutine can overlook them. INPUT PARAMETERS: A, B - interval boundaries (A<B, A=B or A>B) OUTPUT PARAMETERS State - structure which stores algorithm state between subsequent calls of AutoGKIteration. Used for reverse communication. This structure should be passed to the AutoGKIteration subroutine. SEE ALSO AutoGKSmooth, AutoGKSingular, AutoGKIteration, AutoGKResults. -- ALGLIB -- Copyright 06.05.2009 by Bochkanov Sergey *************************************************************************/void autogksmoothw(double a, double b, double xwidth, autogkstate& state);
autogkstate state;
double v;
autogkreport rep;
double a;
double b;
double alpha;

//
// f1(x) = (1+x)*(x-a)^alpha, alpha=-0.9
// Exact answer is (B-A)^(Alpha+2)/(Alpha+2) + (1+A)*(B-A)^(Alpha+1)/(Alpha+1)
//
// This code demonstrates use of the State.XMinusA (State.BMinusX) field.
//
// If we try to use State.X instead of State.XMinusA,
// we will end up dividing by zero! (in 64-bit precision)
//
a = 1.0;
b = 5.0;
alpha = -0.9;
autogksingular(a, b, alpha, 0.0, state);
while(autogkiteration(state))
{
    state.f = pow(state.xminusa, alpha)*(1+state.x);
}
autogkresults(state, v, rep);
printf("integral((1+x)*(x-a)^alpha) on [%0.1lf; %0.1lf] = %0.2lf\nExact answer is %0.2lf\n",
    double(a), double(b), double(v),
    double(pow(b-a, alpha+2)/(alpha+2)+(1+a)*pow(b-a, alpha+1)/(alpha+1)));
autogkstate state;
double v;
autogkreport rep;

//
// f(x) = x*sin(x), integrated at [-pi, pi].
// Exact answer is 2*pi
//
autogksmooth(-ap::pi(), +ap::pi(), state);
while(autogkiteration(state))
{
    state.f = state.x*sin(state.x);
}
autogkresults(state, v, rep);
printf("integral(x*sin(x),-pi,+pi) = %0.2lf\nExact answer is %0.2lf\n",
    double(v), double(2*ap::pi()));
bdsvd unit
rmatrixbdsvd
function/************************************************************************* Singular value decomposition of a bidiagonal matrix (extended algorithm) The algorithm performs the singular value decomposition of a bidiagonal matrix B (upper or lower) representing it as B = Q*S*P^T, where Q and P - orthogonal matrices, S - diagonal matrix with non-negative elements on the main diagonal, in descending order. The algorithm finds singular values. In addition, the algorithm can calculate matrices Q and P (more precisely, not the matrices, but their product with given matrices U and VT - U*Q and (P^T)*VT)). Of course, matrices U and VT can be of any type, including identity. Furthermore, the algorithm can calculate Q'*C (this product is calculated more effectively than U*Q, because this calculation operates with rows instead of matrix columns). The feature of the algorithm is its ability to find all singular values including those which are arbitrarily close to 0 with relative accuracy close to machine precision. If the parameter IsFractionalAccuracyRequired is set to True, all singular values will have high relative accuracy close to machine precision. If the parameter is set to False, only the biggest singular value will have relative accuracy close to machine precision. The absolute error of other singular values is equal to the absolute error of the biggest singular value. Input parameters: D - main diagonal of matrix B. Array whose index ranges within [0..N-1]. E - superdiagonal (or subdiagonal) of matrix B. Array whose index ranges within [0..N-2]. N - size of matrix B. IsUpper - True, if the matrix is upper bidiagonal. IsFractionalAccuracyRequired - accuracy to search singular values with. U - matrix to be multiplied by Q. Array whose indexes range within [0..NRU-1, 0..N-1]. The matrix can be bigger, in that case only the submatrix [0..NRU-1, 0..N-1] will be multiplied by Q. NRU - number of rows in matrix U. C - matrix to be multiplied by Q'. Array whose indexes range within [0..N-1, 0..NCC-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCC-1] will be multiplied by Q'. NCC - number of columns in matrix C. VT - matrix to be multiplied by P^T. Array whose indexes range within [0..N-1, 0..NCVT-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCVT-1] will be multiplied by P^T. NCVT - number of columns in matrix VT. Output parameters: D - singular values of matrix B in descending order. U - if NRU>0, contains matrix U*Q. VT - if NCVT>0, contains matrix (P^T)*VT. C - if NCC>0, contains matrix Q'*C. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). Additional information: The type of convergence is controlled by the internal parameter TOL. If the parameter is greater than 0, the singular values will have relative accuracy TOL. If TOL<0, the singular values will have absolute accuracy ABS(TOL)*norm(B). By default, |TOL| falls within the range of 10*Epsilon and 100*Epsilon, where Epsilon is the machine precision. It is not recommended to use TOL less than 10*Epsilon since this will considerably slow down the algorithm and may not lead to error decreasing. History: * 31 March, 2007. changed MAXITR from 6 to 12. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1999. 
*************************************************************************/bool rmatrixbdsvd(ap::real_1d_array& d, ap::real_1d_array e, int n, bool isupper, bool isfractionalaccuracyrequired, ap::real_2d_array& u, int nru, ap::real_2d_array& c, int ncc, ap::real_2d_array& vt, int ncvt);
bessel unit
besseli0
function/************************************************************************* Modified Bessel function of order zero Returns modified Bessel function of order zero of the argument. The function is defined as i0(x) = j0( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 5.8e-16 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double besseli0(double x);
besseli1
function/************************************************************************* Modified Bessel function of order one Returns modified Bessel function of order one of the argument. The function is defined as i1(x) = -i j1( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.9e-15 2.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/double besseli1(double x);
besselj0
function/************************************************************************* Bessel function of order zero Returns Bessel function of order zero of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval the following rational approximation is used: 2 2 (w - r ) (w - r ) P (w) / Q (w) 1 2 3 8 2 where w = x and the two r's are zeros of the function. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 60000 4.2e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double besselj0(double x);
besselj1
function/************************************************************************* Bessel function of order one Returns Bessel function of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 24 term Chebyshev expansion is used. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 2.6e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double besselj1(double x);
besseljn
function/************************************************************************* Bessel function of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The ratio of jn(x) to j0(x) is computed by backward recurrence. First the ratio jn/jn-1 is found by a continued fraction expansion. Then the recurrence relating successive orders is applied until j0 or j1 is reached. If n = 0 or 1 the routine for j0 or j1 is called directly. ACCURACY: Absolute error: arithmetic range # trials peak rms IEEE 0, 30 5000 4.4e-16 7.9e-17 Not suitable for large n or x. Use jv() (fractional order) instead. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double besseljn(int n, double x);
besselk0
function/************************************************************************* Modified Bessel function, second kind, order zero Returns modified Bessel function of the second kind of order zero of the argument. The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Tested at 2000 random points between 0 and 8. Peak absolute error (relative when K0 > 1) was 1.46e-14; rms, 4.26e-15. Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double besselk0(double x);
besselk1
function/************************************************************************* Modified Bessel function, second kind, order one Computes the modified Bessel function of the second kind of order one of the argument. The range is partitioned into the two intervals [0,2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double besselk1(double x);
besselkn
function/************************************************************************* Modified Bessel function, second kind, integer order Returns modified Bessel function of the second kind of order n of the argument. The range is partitioned into the two intervals [0,9.55] and (9.55, infinity). An ascending power series is used in the low range, and an asymptotic expansion in the high range. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 90000 1.8e-8 3.0e-10 Error is high only near the crossover point x = 9.55 between the two expansions used. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 2000 by Stephen L. Moshier *************************************************************************/double besselkn(int nn, double x);
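Example: a quick consistency check combining the four modified Bessel routines above via the Wronskian identity I0(x)*K1(x) + I1(x)*K0(x) = 1/x. The header name is assumed, as before.

    #include <cstdio>
    #include "bessel.h"   // header name assumed

    int main()
    {
        double x = 1.5;
        // Wronskian of the modified Bessel functions: I0(x)*K1(x) + I1(x)*K0(x) = 1/x
        double w = besseli0(x)*besselk1(x) + besseli1(x)*besselk0(x);
        printf("I0*K1 + I1*K0 = %.15f, 1/x = %.15f\n", w, 1.0/x);
        return 0;
    }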
bessely0
function/************************************************************************* Bessel function of the second kind, order zero Returns Bessel function of the second kind, of order zero, of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval a rational approximation R(x) is employed to compute y0(x) = R(x) + 2 * log(x) * j0(x) / PI. Thus a call to j0() is required. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error, when y0(x) < 1; else relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.3e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double bessely0(double x);
bessely1
function/************************************************************************* Bessel function of second kind of order one Returns Bessel function of the second kind of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 25 term Chebyshev expansion is used, and a call to j1() is required. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.0e-15 1.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double bessely1(double x);
besselyn
function/************************************************************************* Bessel function of second kind of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The function is evaluated by forward recurrence on n, starting with values computed by the routines y0() and y1(). If n = 0 or 1 the routine for y0 or y1 is called directly. ACCURACY: Absolute error, except relative when y > 1: arithmetic domain # trials peak rms IEEE 0, 30 30000 3.4e-15 4.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double besselyn(int n, double x);
betaf
unitbeta
function/************************************************************************* Beta function beta(a, b) = Gamma(a) * Gamma(b) / Gamma(a+b). For large arguments the logarithm of the function is evaluated using lgam(), then exponentiated. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 8.1e-14 1.1e-14 Cephes Math Library Release 2.0: April, 1987 Copyright 1984, 1987 by Stephen L. Moshier *************************************************************************/double beta(double a, double b);
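Example: beta(5,3) = Gamma(5)*Gamma(3)/Gamma(8) = 24*2/5040 = 1/105, which gives an easy value to check against. The "betaf.h" header name is an assumption based on the unit name shown above.

    #include <cstdio>
    #include "betaf.h"   // header name assumed from the unit name

    int main()
    {
        // beta(5,3) = Gamma(5)*Gamma(3)/Gamma(8) = 1/105
        double b = beta(5.0, 3.0);
        printf("beta(5,3) = %.15f, expected %.15f\n", b, 1.0/105.0);
        return 0;
    }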
binomialdistr
unitbinomialcdistribution
function/************************************************************************* Complemented binomial distribution Returns the sum of the terms k+1 through n of the Binomial probability density: sum for j = k+1..n of C(n,j) * p^j * (1-p)^(n-j). The terms are not summed directly; instead the incomplete beta integral is employed, according to the formula y = bdtrc( k, n, p ) = incbet( k+1, n-k, p ). The arguments must be positive, with p ranging from 0 to 1. ACCURACY: Tested at random points (a,b,p); the domain column refers to a,b. Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 6.7e-15 8.2e-16 For p between 0 and .001: IEEE 0,100 100000 1.5e-13 2.7e-15 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double binomialcdistribution(int k, int n, double p);
binomialdistribution
function/************************************************************************* Binomial distribution Returns the sum of the terms 0 through k of the Binomial probability density: sum for j = 0..k of C(n,j) * p^j * (1-p)^(n-j). The terms are not summed directly; instead the incomplete beta integral is employed, according to the formula y = bdtr( k, n, p ) = incbet( n-k, k+1, 1-p ). The arguments must be positive, with p ranging from 0 to 1. ACCURACY: Tested at random points (a,b,p), with p between 0 and 1; the domain column refers to a,b. Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 4.3e-15 2.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double binomialdistribution(int k, int n, double p);
invbinomialdistribution
function/************************************************************************* Inverse binomial distribution Finds the event probability p such that the sum of the terms 0 through k of the Binomial probability density is equal to the given cumulative probability y. This is accomplished using the inverse beta integral function and the relation 1 - p = incbi( n-k, k+1, y ). ACCURACY: Tested at random points (a,b,p); the domain column refers to a,b. Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 2.3e-14 6.4e-16 IEEE 0,10000 100000 6.6e-12 1.2e-13 For p between 10^-6 and 0.001: IEEE 0,100 100000 2.0e-12 1.3e-14 IEEE 0,10000 100000 1.5e-12 3.2e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double invbinomialdistribution(int k, int n, double y);
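Example: the cumulative probability returned by binomialdistribution() and its complement should sum to one, and invbinomialdistribution() should recover p from that cumulative probability. The "binomialdistr.h" header name follows the unit name and is an assumption.

    #include <cstdio>
    #include "binomialdistr.h"   // header name assumed from the unit name

    int main()
    {
        int k = 3, n = 10;
        double p = 0.4;
        double cdf  = binomialdistribution(k, n, p);    // P(X <= 3)
        double ccdf = binomialcdistribution(k, n, p);   // P(X > 3)
        printf("P(X<=k) + P(X>k) = %.15f\n", cdf + ccdf);
        // invert the CDF back to the event probability p
        double p2 = invbinomialdistribution(k, n, cdf);
        printf("recovered p = %.15f\n", p2);
        return 0;
    }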
chebyshev
unitchebyshevcalculate
function/************************************************************************* Calculation of the value of the Chebyshev polynomials of the first and second kinds. Parameters: r - polynomial kind, either 1 or 2. n - degree, n>=0 x - argument, -1 <= x <= 1 Result: the value of the Chebyshev polynomial at x *************************************************************************/double chebyshevcalculate(const int& r, const int& n, const double& x);
chebyshevcoefficients
function/************************************************************************* Representation of Tn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void chebyshevcoefficients(const int& n, ap::real_1d_array& c);
chebyshevsum
function/************************************************************************* Summation of Chebyshev polynomials using Clenshaw’s recurrence formula. This routine calculates c[0]*T0(x) + c[1]*T1(x) + ... + c[N]*TN(x) or c[0]*U0(x) + c[1]*U1(x) + ... + c[N]*UN(x) depending on R. Parameters: r - polynomial kind, either 1 or 2. n - degree, n>=0 x - argument Result: the value of the Chebyshev sum at x *************************************************************************/double chebyshevsum(const ap::real_1d_array& c, const int& r, const int& n, const double& x);
fromchebyshev
function/************************************************************************* Conversion of a series of Chebyshev polynomials to a power series. Represents A[0]*T0(x) + A[1]*T1(x) + ... + A[N]*Tn(x) as B[0] + B[1]*X + ... + B[N]*X^N. Input parameters: A - Chebyshev series coefficients N - degree, N>=0 Output parameters B - power series coefficients *************************************************************************/void fromchebyshev(const ap::real_1d_array& a, const int& n, ap::real_1d_array& b);
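Example: T3(x) = 4x^3 - 3x, so chebyshevcoefficients(3, C) should produce {0, -3, 0, 4} and chebyshevcalculate(1, 3, 0.5) should return -1. This sketch assumes the "chebyshev.h"/"ap.h" header names, that the routine sizes the output array itself, and that ap::real_1d_array elements are read through operator(), as in ALGLIB's ap.h.

    #include <cstdio>
    #include "ap.h"
    #include "chebyshev.h"   // header names assumed

    int main()
    {
        // Power-basis coefficients of T3(x) = 4x^3 - 3x
        ap::real_1d_array c;
        chebyshevcoefficients(3, c);   // expected: 0, -3, 0, 4
        printf("T3 coefficients: %g %g %g %g\n", c(0), c(1), c(2), c(3));

        // Direct evaluation of the first-kind polynomial: T3(0.5) = cos(3*acos(0.5)) = -1
        printf("T3(0.5) = %.15f\n", chebyshevcalculate(1, 3, 0.5));
        return 0;
    }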
chisquaredistr
unitchisquarecdistribution
function/************************************************************************* Complemented Chi-square distribution Returns the area under the right hand tail (from x to infinity) of the Chi square probability density function with v degrees of freedom: P( x | v ) = 1 / (2^(v/2) * Gamma(v/2)) * Integral from x to infinity of t^(v/2-1) * exp(-t/2) dt, where x is the Chi-square variable. The incomplete gamma integral is used, according to the formula y = chdtr( v, x ) = igamc( v/2.0, x/2.0 ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double chisquarecdistribution(double v, double x);
chisquaredistribution
function/************************************************************************* Chi-square distribution Returns the area under the left hand tail (from 0 to x) of the Chi square probability density function with v degrees of freedom: P( x | v ) = 1 / (2^(v/2) * Gamma(v/2)) * Integral from 0 to x of t^(v/2-1) * exp(-t/2) dt, where x is the Chi-square variable. The incomplete gamma integral is used, according to the formula y = chdtr( v, x ) = igam( v/2.0, x/2.0 ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double chisquaredistribution(double v, double x);
invchisquaredistribution
function/************************************************************************* Inverse of complemented Chi-square distribution Finds the Chi-square argument x such that the integral from x to infinity of the Chi-square density is equal to the given cumulative probability y. This is accomplished using the inverse gamma integral function and the relation x/2 = igami( df/2, y ); ACCURACY: See inverse incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double invchisquaredistribution(double v, double y);
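Example: chisquarecdistribution() and invchisquaredistribution() are inverse operations, and the left and right tails must add up to one. The "chisquaredistr.h" header name is assumed from the unit name.

    #include <cstdio>
    #include "chisquaredistr.h"   // header name assumed from the unit name

    int main()
    {
        double v = 5.0, x = 7.2;
        double y  = chisquarecdistribution(v, x);      // right-tail probability
        double x2 = invchisquaredistribution(v, y);    // should recover x
        printf("tail probability = %.15f, recovered x = %.15f\n", y, x2);
        printf("left tail + right tail = %.15f\n", chisquaredistribution(v, x) + y);
        return 0;
    }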
conv
unitconvc1d
function/************************************************************************* 1-dimensional complex convolution. For given A/B returns conv(A,B) (non-circular). Subroutine can automatically choose between three implementations: straightforward O(M*N) formula for very small N (or M), overlap-add algorithm for cases where max(M,N) is significantly larger than min(M,N), but O(M*N) algorithm is too slow, and general FFT-based formula for cases where two previous algorithms are too slow. Algorithm has max(M,N)*log(max(M,N)) complexity for any M/N. INPUT PARAMETERS A - array[0..M-1] - complex function to be transformed M - problem size B - array[0..N-1] - complex function to be transformed N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-2]. NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convc1d(const ap::complex_1d_array& a, int m, const ap::complex_1d_array& b, int n, ap::complex_1d_array& r);
convc1dcircular
function/************************************************************************* 1-dimensional circular complex convolution. For given S/R returns conv(S,R) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: normal convolution is commutative, i.e. it is symmetric - conv(A,B)=conv(B,A). Cyclic convolution IS NOT. One function - S - is a signal, periodic function, and another - R - is a response, non-periodic function with limited length. INPUT PARAMETERS S - array[0..M-1] - complex periodic signal M - problem size B - array[0..N-1] - complex non-periodic response N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..M-1]. NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convc1dcircular(const ap::complex_1d_array& s, int m, const ap::complex_1d_array& r, int n, ap::complex_1d_array& c);
convc1dcircularinv
function/************************************************************************* 1-dimensional circular complex deconvolution (inverse of ConvC1DCircular()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved periodic signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - non-periodic response N - response length OUTPUT PARAMETERS R - deconvolved signal. array[0..M-1]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convc1dcircularinv(const ap::complex_1d_array& a, int m, const ap::complex_1d_array& b, int n, ap::complex_1d_array& r);
convc1dinv
function/************************************************************************* 1-dimensional complex non-circular deconvolution (inverse of ConvC1D()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length, N<=M OUTPUT PARAMETERS R - deconvolved signal. array[0..M-N]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convc1dinv(const ap::complex_1d_array& a, int m, const ap::complex_1d_array& b, int n, ap::complex_1d_array& r);
convc1dx
function/************************************************************************* 1-dimensional complex convolution. Extended subroutine which allows one to choose the convolution algorithm. Intended for internal use, ALGLIB users should call ConvC1D()/ConvC1DCircular(). INPUT PARAMETERS A - array[0..M-1] - complex function to be transformed M - problem size B - array[0..N-1] - complex function to be transformed N - problem size, N<=M Alg - algorithm type: *-2 auto-select Q for overlap-add *-1 auto-select algorithm and parameters * 0 straightforward formula for small N's * 1 general FFT-based code * 2 overlap-add with length Q Q - length for overlap-add OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convc1dx(const ap::complex_1d_array& a, int m, const ap::complex_1d_array& b, int n, bool circular, int alg, int q, ap::complex_1d_array& r);
convr1d
function/************************************************************************* 1-dimensional real convolution. Analogous to ConvC1D(), see ConvC1D() comments for more details. INPUT PARAMETERS A - array[0..M-1] - real function to be transformed M - problem size B - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-2]. NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convr1d(const ap::real_1d_array& a, int m, const ap::real_1d_array& b, int n, ap::real_1d_array& r);
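Example: convolving {1,2,3} with {1,1} should give {1,3,5,3}. This sketch assumes the "conv.h"/"ap.h" header names and that ap::real_1d_array exposes setbounds() and element access through operator(), as in ALGLIB's ap.h.

    #include <cstdio>
    #include "ap.h"
    #include "conv.h"   // header names assumed

    int main()
    {
        ap::real_1d_array a, b, r;
        a.setbounds(0, 2);               // A = {1, 2, 3}
        a(0) = 1; a(1) = 2; a(2) = 3;
        b.setbounds(0, 1);               // B = {1, 1}
        b(0) = 1; b(1) = 1;

        convr1d(a, 3, b, 2, r);          // expected R = {1, 3, 5, 3}
        for(int i = 0; i <= 3; i++)
            printf("R[%d] = %g\n", i, r(i));
        return 0;
    }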
convr1dcircular
function/************************************************************************* 1-dimensional circular real convolution. Analogous to ConvC1DCircular(), see ConvC1DCircular() comments for more details. INPUT PARAMETERS S - array[0..M-1] - real signal M - problem size B - array[0..N-1] - real response N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..M-1]. NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convr1dcircular(const ap::real_1d_array& s, int m, const ap::real_1d_array& r, int n, ap::real_1d_array& c);
convr1dcircularinv
function/************************************************************************* 1-dimensional circular real deconvolution (inverse of ConvR1DCircular()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length OUTPUT PARAMETERS R - deconvolved signal. array[0..M-1]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convr1dcircularinv(const ap::real_1d_array& a, int m, const ap::real_1d_array& b, int n, ap::real_1d_array& r);
convr1dinv
function/************************************************************************* 1-dimensional real deconvolution (inverse of ConvR1D()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length, N<=M OUTPUT PARAMETERS R - deconvolved signal. array[0..M-N]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convr1dinv(const ap::real_1d_array& a, int m, const ap::real_1d_array& b, int n, ap::real_1d_array& r);
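Example: a forward/inverse round trip - convolve a signal with a response, then deconvolve by the same response to recover the signal. The response {2,1} is chosen so that its Fourier coefficients are never zero, avoiding the degenerate case mentioned in the note. Header names and the ap:: array interface are assumed as in the previous example.

    #include <cstdio>
    #include "ap.h"
    #include "conv.h"   // header names assumed

    int main()
    {
        ap::real_1d_array sig, resp, ab, rec;
        sig.setbounds(0, 2);  sig(0) = 1;  sig(1) = 2;  sig(2) = 3;
        resp.setbounds(0, 1); resp(0) = 2; resp(1) = 1;

        convr1d(sig, 3, resp, 2, ab);        // AB = conv(sig, resp) = {2, 5, 8, 3}
        convr1dinv(ab, 4, resp, 2, rec);     // deconvolution: expected {1, 2, 3}
        for(int i = 0; i <= 2; i++)
            printf("recovered[%d] = %g\n", i, rec(i));
        return 0;
    }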
convr1dx
function/************************************************************************* 1-dimensional real convolution. Extended subroutine which allows one to choose the convolution algorithm. Intended for internal use, ALGLIB users should call ConvR1D(). INPUT PARAMETERS A - array[0..M-1] - real function to be transformed M - problem size B - array[0..N-1] - real function to be transformed N - problem size, N<=M Alg - algorithm type: *-2 auto-select Q for overlap-add *-1 auto-select algorithm and parameters * 0 straightforward formula for small N's * 1 general FFT-based code * 2 overlap-add with length Q Q - length for overlap-add OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void convr1dx(const ap::real_1d_array& a, int m, const ap::real_1d_array& b, int n, bool circular, int alg, int q, ap::real_1d_array& r);
corr
unitcorrc1d
function/************************************************************************* 1-dimensional complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrC1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - complex function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - complex function to be transformed, pattern to search within signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(conj(pattern[j])*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(conj(pattern[j])*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void corrc1d(const ap::complex_1d_array& signal, int n, const ap::complex_1d_array& pattern, int m, ap::complex_1d_array& r);
corrc1dcircular
function/************************************************************************* 1-dimensional circular complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrC1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - complex function to be transformed, periodic signal containing pattern N - problem size Pattern - array[0..M-1] - complex function to be transformed, non-periodic pattern to search within signal M - problem size OUTPUT PARAMETERS R - circular cross-correlation, array[0..M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void corrc1dcircular(const ap::complex_1d_array& signal, int m, const ap::complex_1d_array& pattern, int n, ap::complex_1d_array& c);
corrr1d
function/************************************************************************* 1-dimensional real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrR1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - real function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - real function to be transformed, pattern to search within signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(pattern[j]*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(pattern[j]*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void corrr1d(const ap::real_1d_array& signal, int n, const ap::real_1d_array& pattern, int m, ap::real_1d_array& r);
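Example: locating a short pattern inside a signal. With signal {0,1,2,0} and pattern {1,2}, the positive-lag part of R peaks at R[1], i.e. the pattern best matches the signal starting at index 1. Note the reversed argument order (signal first). Header names and the ap:: array interface are assumed as above.

    #include <cstdio>
    #include "ap.h"
    #include "corr.h"   // header names assumed

    int main()
    {
        ap::real_1d_array signal, pattern, r;
        signal.setbounds(0, 3);    // {0, 1, 2, 0}
        signal(0) = 0; signal(1) = 1; signal(2) = 2; signal(3) = 0;
        pattern.setbounds(0, 1);   // {1, 2}
        pattern(0) = 1; pattern(1) = 2;

        corrr1d(signal, 4, pattern, 2, r);   // expected R = {2, 5, 2, 0, 0}
        for(int i = 0; i <= 4; i++)
            printf("R[%d] = %g\n", i, r(i));
        return 0;
    }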
corrr1dcircular
function/************************************************************************* 1-dimensional circular real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrR1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - real function to be transformed, periodic signal containing pattern N - problem size Pattern - array[0..M-1] - real function to be transformed, non-periodic pattern to search within signal M - problem size OUTPUT PARAMETERS R - circular cross-correlation, array[0..M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void corrr1dcircular(const ap::real_1d_array& signal, int m, const ap::real_1d_array& pattern, int n, ap::real_1d_array& c);
correlation
unitpearsoncorrelation
function/************************************************************************* Pearson product-moment correlation coefficient Input parameters: X - sample 1 (array indexes: [0..N-1]) Y - sample 2 (array indexes: [0..N-1]) N - sample size. Result: Pearson product-moment correlation coefficient -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/double pearsoncorrelation(const ap::real_1d_array& x, const ap::real_1d_array& y, int n);
spearmanrankcorrelation
function/************************************************************************* Spearman's rank correlation coefficient Input parameters: X - sample 1 (array indexes: [0..N-1]) Y - sample 2 (array indexes: [0..N-1]) N - sample size. Result: Spearman's rank correlation coefficient -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/double spearmanrankcorrelation(ap::real_1d_array x, ap::real_1d_array y, int n);
correlationtests
unitpearsoncorrelationsignificance
function/************************************************************************* Pearson's correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5 * normality of distributions of X and Y. Input parameters: R - Pearson's correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void pearsoncorrelationsignificance(double r, int n, double& bothtails, double& lefttail, double& righttail);
spearmanrankcorrelationsignificance
function/************************************************************************* Spearman's rank correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5. The test is non-parametric and doesn't require distributions X and Y to be normal. Input parameters: R - Spearman's rank correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void spearmanrankcorrelationsignificance(double r, int n, double& bothtails, double& lefttail, double& righttail);
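Example: computing Pearson's r for two small samples and feeding it into the significance test. The two samples are nearly proportional, so the two-tailed p-value should be small. The "correlation.h"/"correlationtests.h"/"ap.h" header names and the ap:: array interface are assumptions.

    #include <cstdio>
    #include "ap.h"
    #include "correlation.h"        // header names assumed
    #include "correlationtests.h"

    int main()
    {
        const int n = 6;
        ap::real_1d_array x, y;
        x.setbounds(0, n-1);
        y.setbounds(0, n-1);
        for(int i = 0; i < n; i++)
        {
            x(i) = i;
            y(i) = 2.0*i + ((i%2) ? 0.3 : -0.3);   // almost linear in x
        }

        double r = pearsoncorrelation(x, y, n);
        double bothtails, lefttail, righttail;
        pearsoncorrelationsignificance(r, n, bothtails, lefttail, righttail);
        printf("r = %f, two-tailed p-value = %f\n", r, bothtails);
        return 0;
    }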
dawson
unitdawsonintegral
function/************************************************************************* Dawson's Integral Approximates the integral dawsn(x) = exp(-x^2) * Integral from 0 to x of exp(t^2) dt. Three different rational approximations are employed, for the intervals 0 to 3.25; 3.25 to 6.25; and 6.25 up. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,10 10000 6.9e-16 1.0e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double dawsonintegral(double x);
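Example: for small x the integral behaves like its Taylor series, dawsn(x) ~ x - (2/3)x^3, which gives a simple sanity check. The "dawson.h" header name is an assumption based on the unit name.

    #include <cstdio>
    #include "dawson.h"   // header name assumed from the unit name

    int main()
    {
        double x = 0.1;
        double series = x - 2.0*x*x*x/3.0;   // leading terms of the Taylor expansion
        printf("dawsn(%.2f) = %.12f, series approximation = %.12f\n",
               x, dawsonintegral(x), series);
        return 0;
    }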
densesolver
unitcmatrixlusolve
function/************************************************************************* Dense solver. Same as RMatrixLUSolve(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use CMatrixSolve or CMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void cmatrixlusolve(const ap::complex_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::complex_1d_array& b, int& info, densesolverreport& rep, ap::complex_1d_array& x);
cmatrixlusolvem
function/************************************************************************* Dense solver. Same as RMatrixLUSolveM(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use CMatrixSolve or CMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void cmatrixlusolvem(const ap::complex_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::complex_2d_array& b, int m, int& info, densesolverreport& rep, ap::complex_2d_array& x);
cmatrixmixedsolve
function/************************************************************************* Dense solver. Same as RMatrixMixedSolve(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void cmatrixmixedsolve(const ap::complex_2d_array& a, const ap::complex_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::complex_1d_array& b, int& info, densesolverreport& rep, ap::complex_1d_array& x);
cmatrixmixedsolvem
function/************************************************************************* Dense solver. Same as RMatrixMixedSolveM(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void cmatrixmixedsolvem(const ap::complex_2d_array& a, const ap::complex_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::complex_2d_array& b, int m, int& info, densesolverreport& rep, ap::complex_2d_array& x);
cmatrixsolve
function/************************************************************************* Dense solver. Same as RMatrixSolve(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void cmatrixsolve(const ap::complex_2d_array& a, int n, const ap::complex_1d_array& b, int& info, densesolverreport& rep, ap::complex_1d_array& x);
cmatrixsolvem
function/************************************************************************* Dense solver. Same as RMatrixSolveM(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3+M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1,0..M-1], right part M - right part size RFS - iterative refinement switch: * True - refinement is used. Less performance, more precision. * False - refinement is not used. More performance, less precision. OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void cmatrixsolvem(const ap::complex_2d_array& a, int n, const ap::complex_2d_array& b, int m, bool rfs, int& info, densesolverreport& rep, ap::complex_2d_array& x);
hpdmatrixcholeskysolve
function/************************************************************************* Dense solver. Same as RMatrixLUSolve(), but for HPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, HPDMatrixCholesky result N - size of A IsUpper - what half of CHA is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void hpdmatrixcholeskysolve(const ap::complex_2d_array& cha, int n, bool isupper, const ap::complex_1d_array& b, int& info, densesolverreport& rep, ap::complex_1d_array& x);
hpdmatrixcholeskysolvem
function/************************************************************************* Dense solver. Same as RMatrixLUSolveM(), but for HPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, HPDMatrixCholesky result N - size of CHA IsUpper - what half of CHA is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void hpdmatrixcholeskysolvem(const ap::complex_2d_array& cha, int n, bool isupper, const ap::complex_2d_array& b, int m, int& info, densesolverreport& rep, ap::complex_2d_array& x);
hpdmatrixsolve
function/************************************************************************* Dense solver. Same as RMatrixSolve(), but for Hermitian positive definite matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Returns -3 for non-HPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void hpdmatrixsolve(const ap::complex_2d_array& a, int n, bool isupper, const ap::complex_1d_array& b, int& info, densesolverreport& rep, ap::complex_1d_array& x);
hpdmatrixsolvem
function/************************************************************************* Dense solver. Same as RMatrixSolveM(), but for Hermitian positive definite matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3+M*N^2) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve. Returns -3 for non-HPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void hpdmatrixsolvem(const ap::complex_2d_array& a, int n, bool isupper, const ap::complex_2d_array& b, int m, int& info, densesolverreport& rep, ap::complex_2d_array& x);
rmatrixlusolve
function/************************************************************************* Dense solver. This subroutine solves a system A*X=B, where A is NxN non-degenerate real matrix given by its LU decomposition, X and B are NxM real matrices. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void rmatrixlusolve(const ap::real_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::real_1d_array& b, int& info, densesolverreport& rep, ap::real_1d_array& x);
rmatrixlusolvem
function/************************************************************************* Dense solver. Similar to RMatrixLUSolve() but solves task with multiple right parts (where b and x are NxM matrices). Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void rmatrixlusolvem(const ap::real_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::real_2d_array& b, int m, int& info, densesolverreport& rep, ap::real_2d_array& x);
rmatrixmixedsolve
function/************************************************************************* Dense solver. This subroutine solves a system A*x=b, where BOTH ORIGINAL A AND ITS LU DECOMPOSITION ARE KNOWN. You can use it if for some reasons you have both A and its LU decomposition. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void rmatrixmixedsolve(const ap::real_2d_array& a, const ap::real_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::real_1d_array& b, int& info, densesolverreport& rep, ap::real_1d_array& x);
rmatrixmixedsolvem
function/************************************************************************* Dense solver. Similar to RMatrixMixedSolve() but solves task with multiple right parts (where b and x are NxM matrices). Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void rmatrixmixedsolvem(const ap::real_2d_array& a, const ap::real_2d_array& lua, const ap::integer_1d_array& p, int n, const ap::real_2d_array& b, int m, int& info, densesolverreport& rep, ap::real_2d_array& x);
rmatrixsolve
function/************************************************************************* Dense solver. This subroutine solves a system A*x=b, where A is NxN non-degenerate real matrix, x and b are vectors. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 A is singular, or VERY close to singular. X is filled by zeros in such cases. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - solver report, see below for more info X - array[0..N-1], it contains: * solution of A*x=b if A is non-singular (well-conditioned or ill-conditioned, but not very close to singular) * zeros, if A is singular or VERY close to singular (in this case Info=-3). SOLVER REPORT Subroutine sets following fields of the Rep structure: * R1 reciprocal of condition number: 1/cond(A), 1-norm. * RInf reciprocal of condition number: 1/cond(A), inf-norm. -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void rmatrixsolve(const ap::real_2d_array& a, int n, const ap::real_1d_array& b, int& info, densesolverreport& rep, ap::real_1d_array& x);
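Example: solving the 2x2 system 2x+y=3, x+3y=5, whose exact solution is (0.8, 1.4). This sketch assumes the "densesolver.h"/"ap.h" headers, the ap:: setbounds()/operator() interface, and that the R1 field of densesolverreport is spelled r1 in lower case.

    #include <cstdio>
    #include "ap.h"
    #include "densesolver.h"   // header names assumed

    int main()
    {
        const int n = 2;
        ap::real_2d_array a;
        ap::real_1d_array b, x;
        a.setbounds(0, n-1, 0, n-1);
        b.setbounds(0, n-1);

        // 2x + y = 3,  x + 3y = 5   =>   x = 0.8, y = 1.4
        a(0,0) = 2; a(0,1) = 1;
        a(1,0) = 1; a(1,1) = 3;
        b(0) = 3;   b(1) = 5;

        int info;
        densesolverreport rep;
        rmatrixsolve(a, n, b, info, rep, x);
        if( info == 1 )
            printf("x = (%f, %f), 1/cond(A) = %g\n", x(0), x(1), rep.r1);
        else
            printf("solver failed, info = %d\n", info);
        return 0;
    }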
rmatrixsolvels
function/************************************************************************* Dense solver. This subroutine finds solution of the linear system A*X=B with non-square, possibly degenerate A. System is solved in the least squares sense, and general least squares solution X = X0 + CX*y which minimizes |A*X-B| is returned. If A is non-degenerate, solution in the usual sense is returned. Algorithm features: * automatic detection of degenerate cases * iterative refinement * O(N^3) complexity INPUT PARAMETERS A - array[0..NRows-1,0..NCols-1], system matrix NRows - vertical size of A NCols - horizontal size of A B - array[0..NRows-1], right part Threshold- a number in [0,1]. Singular values beyond Threshold are considered zero. Set it to 0.0, if you don't understand what it means, so the solver will choose good value on its own. OUTPUT PARAMETERS Info - return code: * -4 SVD subroutine failed * -1 if NRows<=0 or NCols<=0 or Threshold<0 was passed * 1 if task is solved Rep - solver report, see below for more info X - array[0..N-1,0..M-1], it contains: * solution of A*X=B if A is non-singular (well-conditioned or ill-conditioned, but not very close to singular) * zeros, if A is singular or VERY close to singular (in this case Info=-3). SOLVER REPORT Subroutine sets following fields of the Rep structure: * R2 reciprocal of condition number: 1/cond(A), 2-norm. * N = NCols * K dim(Null(A)) * CX array[0..N-1,0..K-1], kernel of A. Columns of CX store such vectors that A*CX[i]=0. -- ALGLIB -- Copyright 24.08.2009 by Bochkanov Sergey *************************************************************************/void rmatrixsolvels(const ap::real_2d_array& a, int nrows, int ncols, const ap::real_1d_array& b, double threshold, int& info, densesolverlsreport& rep, ap::real_1d_array& x);
rmatrixsolvem
function/************************************************************************* Dense solver. Similar to RMatrixSolve() but solves task with multiple right parts (where b and x are NxM matrices). Algorithm features: * automatic detection of degenerate cases * condition number estimation * optional iterative refinement * O(N^3+M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1,0..M-1], right part M - right part size RFS - iterative refinement switch: * True - refinement is used. Less performance, more precision. * False - refinement is not used. More performance, less precision. OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void rmatrixsolvem(const ap::real_2d_array& a, int n, const ap::real_2d_array& b, int m, bool rfs, int& info, densesolverreport& rep, ap::real_2d_array& x);
spdmatrixcholeskysolve
function/************************************************************************* Dense solver. Same as RMatrixLUSolve(), but for SPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, SPDMatrixCholesky result N - size of A IsUpper - what half of CHA is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void spdmatrixcholeskysolve(const ap::real_2d_array& cha, int n, bool isupper, const ap::real_1d_array& b, int& info, densesolverreport& rep, ap::real_1d_array& x);
spdmatrixcholeskysolvem
function/************************************************************************* Dense solver. Same as RMatrixLUSolveM(), but for SPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, SPDMatrixCholesky result N - size of CHA IsUpper - what half of CHA is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void spdmatrixcholeskysolvem(const ap::real_2d_array& cha, int n, bool isupper, const ap::real_2d_array& b, int m, int& info, densesolverreport& rep, ap::real_2d_array& x);
spdmatrixsolve
function/************************************************************************* Dense solver. Same as RMatrixSolve(), but for SPD matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Returns -3 for non-SPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void spdmatrixsolve(const ap::real_2d_array& a, int n, bool isupper, const ap::real_1d_array& b, int& info, densesolverreport& rep, ap::real_1d_array& x);
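Example: solving an SPD system while supplying the upper-triangle flag. For A = {{4,1},{1,3}} and b = {1,2} the exact solution is (1/11, 7/11). Header names and the ap:: array interface are assumed as in the previous solver example.

    #include <cstdio>
    #include "ap.h"
    #include "densesolver.h"   // header names assumed

    int main()
    {
        const int n = 2;
        ap::real_2d_array a;
        ap::real_1d_array b, x;
        a.setbounds(0, n-1, 0, n-1);
        b.setbounds(0, n-1);

        // SPD system: {{4,1},{1,3}} * x = {1,2}   =>   x = (1/11, 7/11)
        a(0,0) = 4; a(0,1) = 1;
        a(1,0) = 1; a(1,1) = 3;
        b(0) = 1;   b(1) = 2;

        int info;
        densesolverreport rep;
        spdmatrixsolve(a, n, true, b, info, rep, x);   // true: the upper triangle is used
        printf("info = %d, x = (%f, %f)\n", info, x(0), x(1));
        return 0;
    }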
spdmatrixsolvem
function/************************************************************************* Dense solver. Same as RMatrixSolveM(), but for symmetric positive definite matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3+M*N^2) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve. Returns -3 for non-SPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void spdmatrixsolvem(const ap::real_2d_array& a, int n, bool isupper, const ap::real_2d_array& b, int m, int& info, densesolverreport& rep, ap::real_2d_array& x);
descriptivestatistics
unitcalculateadev
function/************************************************************************* ADev Computes the average absolute deviation of the sample from its mean. Input parameters: X - sample (array indexes: [0..N-1]) N - sample size Output parameters: ADev - average absolute deviation -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/void calculateadev(const ap::real_1d_array& x, int n, double& adev);
calculatemedian
function/************************************************************************* Median calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - sample size Output parameters: Median - median of the sample -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/void calculatemedian(ap::real_1d_array x, int n, double& median);
calculatemoments
function/************************************************************************* Calculation of the distribution moments: mean, variance, skewness, kurtosis. Input parameters: X - sample. Array whose indexes range within [0..N-1] N - sample size. Output parameters: Mean - mean. Variance- variance. Skewness- skewness (if variance<>0; zero otherwise). Kurtosis- kurtosis (if variance<>0; zero otherwise). -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/void calculatemoments(const ap::real_1d_array& x, int n, double& mean, double& variance, double& skewness, double& kurtosis);
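A short sketch, not from the manual, of computing the four moments of a small sample; the header name "descriptivestatistics.h" and the ap:: array accessors are assumed from the ALGLIB 2.x C++ interface.
#include "descriptivestatistics.h"
void calculatemoments_example()
{
    ap::real_1d_array x;
    double mean, variance, skewness, kurtosis;
    x.setbounds(0, 4);
    x(0) = 1; x(1) = 2; x(2) = 3; x(3) = 4; x(4) = 5;
    calculatemoments(x, 5, mean, variance, skewness, kurtosis);
    // mean=3 for this sample; skewness=0 because the sample is symmetric
}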
calculatepercentile
function/************************************************************************* Percentile calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - sample size, N>1 P - percentile (0<=P<=1) Output parameters: V - percentile -- ALGLIB -- Copyright 01.03.2008 by Bochkanov Sergey *************************************************************************/void calculatepercentile(ap::real_1d_array x, int n, double p, double& v);
dforest
unitdfavgce
function/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if model solves regression task. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double dfavgce(const decisionforest& df, const ap::real_2d_array& xy, int npoints);
dfavgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double dfavgerror(const decisionforest& df, const ap::real_2d_array& xy, int npoints);
dfavgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double dfavgrelerror(const decisionforest& df, const ap::real_2d_array& xy, int npoints);
dfbuildrandomdecisionforest
function/************************************************************************* This subroutine builds random decision forest. INPUT PARAMETERS: XY - training set NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - task type: * NClasses=1 - regression task with one dependent variable * NClasses>1 - classification task with NClasses classes. NTrees - number of trees in a forest, NTrees>=1. recommended values: 50-100. R - fraction of the training set used to build individual trees. 0<R<=1. recommended values: 0.1 <= R <= 0.66. OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<1, NVars<1, NClasses<1, NTrees<1, R<=0 or R>1). * 1, if task has been solved DF - model built Rep - training report, contains error on a training set and out-of-bag estimates of generalization error. -- ALGLIB -- Copyright 19.02.2009 by Bochkanov Sergey *************************************************************************/void dfbuildrandomdecisionforest(const ap::real_2d_array& xy, int npoints, int nvars, int nclasses, int ntrees, double r, int& info, decisionforest& df, dfreport& rep);
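Below is a hedged sketch of the typical workflow: train a small regression forest and query it with DFProcess(). The header name "dforest.h" is an assumption; the training set layout (NVars independent variables followed by the dependent variable in each row) follows the usual ALGLIB convention for XY matrices.
#include "dforest.h"
void dforest_example()
{
    ap::real_2d_array xy;
    ap::real_1d_array x, y;
    decisionforest df;
    dfreport rep;
    int info;
    // 4 points, 1 independent variable, regression task (NClasses=1): y = 2*x
    xy.setbounds(0, 3, 0, 1);
    for(int i = 0; i < 4; i++)
    {
        xy(i,0) = i;
        xy(i,1) = 2*i;
    }
    dfbuildrandomdecisionforest(xy, 4, 1, 1, 50, 0.66, info, df, rep);
    // caller must allocate Y (at least NClasses elements, here 1) before DFProcess()
    x.setbounds(0, 0);
    y.setbounds(0, 0);
    x(0) = 2.5;
    dfprocess(df, x, y);   // y(0) now holds the regression estimate
}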
dfcopy
function/************************************************************************* Copying of DecisionForest structure INPUT PARAMETERS: DF1 - original OUTPUT PARAMETERS: DF2 - copy -- ALGLIB -- Copyright 13.02.2009 by Bochkanov Sergey *************************************************************************/void dfcopy(const decisionforest& df1, decisionforest& df2);
dfprocess
function/************************************************************************* Processing INPUT PARAMETERS: DF - decision forest model X - input vector, array[0..NVars-1]. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. Subroutine does not allocate memory for this vector, it is the caller's responsibility to allocate it. Array must be at least [0..NClasses-1]. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/void dfprocess(const decisionforest& df, const ap::real_1d_array& x, ap::real_1d_array& y);
dfrelclserror
function/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Zero if model solves regression task. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double dfrelclserror(const decisionforest& df, const ap::real_2d_array& xy, int npoints);
dfrmserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task, RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double dfrmserror(const decisionforest& df, const ap::real_2d_array& xy, int npoints);
dfserialize
function/************************************************************************* Serialization of DecisionForest structure INPUT PARAMETERS: DF - original OUTPUT PARAMETERS: RA - array of real numbers which stores decision forest, array[0..RLen-1] RLen - RA length -- ALGLIB -- Copyright 13.02.2009 by Bochkanov Sergey *************************************************************************/void dfserialize(const decisionforest& df, ap::real_1d_array& ra, int& rlen);
dfunserialize
function/************************************************************************* Unserialization of DecisionForest structure INPUT PARAMETERS: RA - real array which stores decision forest OUTPUT PARAMETERS: DF - restored structure -- ALGLIB -- Copyright 13.02.2009 by Bochkanov Sergey *************************************************************************/void dfunserialize(const ap::real_1d_array& ra, decisionforest& df);
elliptic
unitellipticintegrale
function/************************************************************************* Complete elliptic integral of the second kind Approximates the integral E(m) = integral(sqrt(1 - m*sin^2(t)) dt, t=0..pi/2) using the approximation P(x) - x*log(x)*Q(x). ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 1 10000 2.1e-16 7.3e-17 Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double ellipticintegrale(double m);
ellipticintegralk
function/************************************************************************* Complete elliptic integral of the first kind Approximates the integral K(m) = integral(dt/sqrt(1 - m*sin^2(t)), t=0..pi/2) using the approximation P(x) - log(x)*Q(x). ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,1 30000 2.5e-16 6.8e-17 Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double ellipticintegralk(double m);
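As a quick sanity check (a sketch, not from the manual; the header name "elliptic.h" is an assumption), K(0) should equal pi/2, as noted for the high-precision variant below:
#include "elliptic.h"
#include <stdio.h>
void ellipticintegralk_example()
{
    double k0 = ellipticintegralk(0.0);
    printf("K(0) = %.15f, pi/2 = 1.570796326794897\n", k0);
}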
ellipticintegralkhighprecision
function/************************************************************************* Complete elliptic integral of the first kind Approximates the integral K(m) = integral(dt/sqrt(1 - m*sin^2(t)), t=0..pi/2), where m = 1 - m1, using the approximation P(x) - log(x)*Q(x). The argument m1 is used rather than m so that the logarithmic singularity at m = 1 will be shifted to the origin; this preserves maximum accuracy. K(0) = pi/2. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,1 30000 2.5e-16 6.8e-17 The algorithm was taken from the Cephes library. *************************************************************************/double ellipticintegralkhighprecision(double m1);
incompleteellipticintegrale
function/************************************************************************* Incomplete elliptic integral of the second kind Approximates the integral E(phi|m) = integral(sqrt(1 - m*sin^2(t)) dt, t=0..phi) of amplitude phi and modulus m, using the arithmetic-geometric mean algorithm. ACCURACY: Tested at random arguments with phi in [-10, 10] and m in [0, 1]. Relative error: arithmetic domain # trials peak rms IEEE -10,10 150000 3.3e-15 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1993, 2000 by Stephen L. Moshier *************************************************************************/double incompleteellipticintegrale(double phi, double m);
incompleteellipticintegralk
function/************************************************************************* Incomplete elliptic integral of the first kind F(phi|m) Approximates the integral F(phi|m) = integral(dt/sqrt(1 - m*sin^2(t)), t=0..phi) of amplitude phi and modulus m, using the arithmetic-geometric mean algorithm. ACCURACY: Tested at random points with m in [0, 1] and phi as indicated. Relative error: arithmetic domain # trials peak rms IEEE -10,10 200000 7.4e-16 1.0e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double incompleteellipticintegralk(double phi, double m);
evd
unithmatrixevd
function/************************************************************************* Finding the eigenvalues and eigenvectors of a Hermitian matrix The algorithm finds eigen pairs of a Hermitian matrix by reducing it to real tridiagonal form and using the QL/QR algorithm. Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). Note: eigenvectors of Hermitian matrix are defined up to multiplication by a complex number L, such that |L|=1. -- ALGLIB -- Copyright 2005, 23 March 2007 by Bochkanov Sergey *************************************************************************/bool hmatrixevd(ap::complex_2d_array a, int n, int zneeded, bool isupper, ap::real_1d_array& d, ap::complex_2d_array& z);
hmatrixevdi
function/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a Hermitian matrix with given indexes by using bisection and inverse iteration methods Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. Note: eigenvectors of a Hermitian matrix are defined up to multiplication by a complex number L such that |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/bool hmatrixevdi(ap::complex_2d_array a, int n, int zneeded, bool isupper, int i1, int i2, ap::real_1d_array& w, ap::complex_2d_array& z);
hmatrixevdr
function/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a Hermitian matrix in a given half-interval (A, B] by using a bisection and inverse iteration Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half-interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval, M>=0 W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. Note: eigenvectors of a Hermitian matrix are defined up to multiplication by a complex number L such that |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/bool hmatrixevdr(ap::complex_2d_array a, int n, int zneeded, bool isupper, double b1, double b2, int& m, ap::real_1d_array& w, ap::complex_2d_array& z);
rmatrixevd
function/************************************************************************* Finding eigenvalues and eigenvectors of a general matrix The algorithm finds eigenvalues and eigenvectors of a general matrix by using the QR algorithm with multiple shifts. The algorithm can find eigenvalues and both left and right eigenvectors. The right eigenvector is a vector x such that A*x = w*x, and the left eigenvector is a vector y such that y'*A = w*y' (here y' implies a complex conjugate transposition of vector y). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. VNeeded - flag controlling whether eigenvectors are needed or not. If VNeeded is equal to: * 0, eigenvectors are not returned; * 1, right eigenvectors are returned; * 2, left eigenvectors are returned; * 3, both left and right eigenvectors are returned. Output parameters: WR - real parts of eigenvalues. Array whose index ranges within [0..N-1]. WI - imaginary parts of eigenvalues. Array whose index ranges within [0..N-1]. VL, VR - arrays of left and right eigenvectors (if they are needed). If WI[i]=0, the respective eigenvalue is a real number, and it corresponds to the column number I of matrices VL/VR. If WI[i]>0, we have a pair of complex conjugate numbers with positive and negative imaginary parts: the first eigenvalue WR[i] + sqrt(-1)*WI[i]; the second eigenvalue WR[i+1] + sqrt(-1)*WI[i+1]; WI[i]>0 WI[i+1] = -WI[i] < 0 In that case, the eigenvector corresponding to the first eigenvalue is located in i and i+1 columns of matrices VL/VR (the column number i contains the real part, and the column number i+1 contains the imaginary part), and the vector corresponding to the second eigenvalue is a complex conjugate to the first vector. Arrays whose indexes range within [0..N-1, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm has not converged. Note 1: Some users may ask the following question: what if WI[N-1]>0? WI[N] must contain an eigenvalue which is complex conjugate to the N-th eigenvalue, but the array has only size N? The answer is as follows: such a situation cannot occur because the algorithm finds pairs of eigenvalues, therefore, if WI[i]>0, I is strictly less than N-1. Note 2: The algorithm performance depends on the value of the internal parameter NS of the InternalSchurDecomposition subroutine which defines the number of shifts in the QR algorithm (similarly to the block width in block-matrix algorithms of linear algebra). If you require maximum performance on your machine, it is recommended to adjust this parameter manually. See also the InternalTREVC subroutine. The algorithm is based on the LAPACK 3.0 library. *************************************************************************/bool rmatrixevd(ap::real_2d_array a, int n, int vneeded, ap::real_1d_array& wr, ap::real_1d_array& wi, ap::real_2d_array& vl, ap::real_2d_array& vr);
smatrixevd
function/************************************************************************* Finding the eigenvalues and eigenvectors of a symmetric matrix The algorithm finds eigen pairs of a symmetric matrix by reducing it to tridiagonal form and using the QL/QR algorithm. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/bool smatrixevd(ap::real_2d_array a, int n, int zneeded, bool isupper, ap::real_1d_array& d, ap::real_2d_array& z);
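A minimal sketch, not from the manual (the header name "evd.h" is an assumption), of computing eigenvalues and eigenvectors of a 2x2 symmetric matrix given by its upper triangle:
#include "evd.h"
void smatrixevd_example()
{
    ap::real_2d_array a, z;
    ap::real_1d_array d;
    a.setbounds(0, 1, 0, 1);
    a(0,0) = 2; a(0,1) = 1;   // upper triangle of [[2,1],[1,2]]
                a(1,1) = 2;
    if( smatrixevd(a, 2, 1, true, d, z) )
    {
        // d(0)=1, d(1)=3 in ascending order; columns of z hold the eigenvectors
    }
}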
smatrixevdi
function/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a symmetric matrix with given indexes by using bisection and inverse iteration methods. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/bool smatrixevdi(ap::real_2d_array a, int n, int zneeded, bool isupper, int i1, int i2, ap::real_1d_array& w, ap::real_2d_array& z);
smatrixevdr
function/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a symmetric matrix in a given half open interval (A, B] by using a bisection and inverse iteration Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half open interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval (M>=0). W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. -- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/bool smatrixevdr(ap::real_2d_array a, int n, int zneeded, bool isupper, double b1, double b2, int& m, ap::real_1d_array& w, ap::real_2d_array& z);
smatrixtdevd
function/************************************************************************* Finding the eigenvalues and eigenvectors of a tridiagonal symmetric matrix The algorithm finds the eigen pairs of a tridiagonal symmetric matrix by using a QL/QR algorithm with implicit shifts. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of the matrix. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix; * 2, the eigenvectors of a tridiagonal matrix replace the square matrix Z; * 3, matrix Z contains the first row of the eigenvectors matrix. Z - if ZNeeded=1, Z contains the square matrix by which the eigenvectors are multiplied. Array whose indexes range within [0..N-1, 0..N-1]. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains the product of a given matrix (from the left) and the eigenvectors matrix (from the right); * 2, Z contains the eigenvectors. * 3, Z contains the first row of the eigenvectors matrix. If ZNeeded<3, Z is the array whose indexes range within [0..N-1, 0..N-1]. In that case, the eigenvectors are stored in the matrix columns. If ZNeeded=3, Z is the array whose indexes range within [0..0, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/bool smatrixtdevd(ap::real_1d_array& d, ap::real_1d_array e, int n, int zneeded, ap::real_2d_array& z);
smatrixtdevdi
function/************************************************************************* Subroutine for finding tridiagonal matrix eigenvalues/vectors with given indexes (in ascending order) by using the bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix. N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and Nx(I2-I1) matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..I2-I1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..I2-I1]. Result: True, if successful. In that case, D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 25.12.2005 by Bochkanov Sergey *************************************************************************/bool smatrixtdevdi(ap::real_1d_array& d, const ap::real_1d_array& e, int n, int zneeded, int i1, int i2, ap::real_2d_array& z);
smatrixtdevdr
function/************************************************************************* Subroutine for finding the tridiagonal matrix eigenvalues/vectors in a given half-interval (A, B] by using bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix, N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. A, B - half-interval (A, B] to search eigenvalues in. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..M-1]. M - number of eigenvalues found in the given half-interval (M>=0). Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and NxM matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..M-1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..M-1]. Result: True, if successful. In that case, M contains the number of eigenvalues in the given half-interval (could be equal to 0), D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. -- ALGLIB -- Copyright 31.03.2008 by Bochkanov Sergey *************************************************************************/bool smatrixtdevdr(ap::real_1d_array& d, const ap::real_1d_array& e, int n, int zneeded, double a, double b, int& m, ap::real_2d_array& z);
expintegrals
unitexponentialintegralei
function/************************************************************************* Exponential integral Ei(x) Ei(x) = integral(exp(t)/t dt, t=-inf..x), taken as a principal value. Not defined for x <= 0. See also expn.c. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,100 50000 8.6e-16 1.3e-16 Cephes Math Library Release 2.8: May, 1999 Copyright 1999 by Stephen L. Moshier *************************************************************************/double exponentialintegralei(double x);
exponentialintegralen
function/************************************************************************* Exponential integral En(x) Evaluates the exponential integral E_n(x) = integral(exp(-x*t)/t^n dt, t=1..inf). Both n and x must be nonnegative. The routine employs either a power series, a continued fraction, or an asymptotic formula depending on the relative values of n and x. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 10000 1.7e-15 3.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 2000 by Stephen L. Moshier *************************************************************************/double exponentialintegralen(double x, int n);
fdistr
unitfcdistribution
function/************************************************************************* Complemented F distribution Returns the area from x to infinity under the F density function (also known as Snedecor's density or the variance ratio density). 1-P(x) = (1/B(a,b)) * integral(t^(a-1)*(1-t)^(b-1) dt, t=x..inf) The incomplete beta integral is used, according to the formula P(x) = incbet( df2/2, df1/2, df2/(df2 + df1*x) ). ACCURACY: Tested at random points (a,b,x) in the indicated intervals. x a,b Relative error: arithmetic domain domain # trials peak rms IEEE 0,1 1,100 100000 3.7e-14 5.9e-16 IEEE 1,5 1,100 100000 8.0e-15 1.6e-15 IEEE 0,1 1,10000 100000 1.8e-11 3.5e-13 IEEE 1,5 1,10000 100000 2.0e-11 3.0e-12 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double fcdistribution(int a, int b, double x);
fdistribution
function/************************************************************************* F distribution Returns the area from zero to x under the F density function (also known as Snedecor's density or the variance ratio density). This is the density of x = (u1/df1)/(u2/df2), where u1 and u2 are random variables having Chi square distributions with df1 and df2 degrees of freedom, respectively. The incomplete beta integral is used, according to the formula P(x) = incbet( df1/2, df2/2, df1*x/(df2 + df1*x) ). The arguments a and b are greater than zero, and x is nonnegative. ACCURACY: Tested at random points (a,b,x). x a,b Relative error: arithmetic domain domain # trials peak rms IEEE 0,1 0,100 100000 9.8e-15 1.7e-15 IEEE 1,5 0,100 100000 6.5e-15 3.5e-16 IEEE 0,1 1,10000 100000 2.2e-11 3.3e-12 IEEE 1,5 1,10000 100000 1.1e-11 1.7e-13 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double fdistribution(int a, int b, double x);
invfdistribution
function/************************************************************************* Inverse of complemented F distribution Finds the F density argument x such that the integral from x to infinity of the F density is equal to the given probability p. This is accomplished using the inverse beta integral function and the relations z = incbi( df2/2, df1/2, p ) x = df2 (1-z) / (df1 z). Note: the following relations hold for the inverse of the uncomplemented F distribution: z = incbi( df1/2, df2/2, p ) x = df2 z / (df1 (1-z)). ACCURACY: Tested at random points (a,b,p). a,b Relative error: arithmetic domain # trials peak rms For p between .001 and 1: IEEE 1,100 100000 8.3e-15 4.7e-16 IEEE 1,10000 100000 2.1e-11 1.4e-13 For p between 10^-6 and 10^-3: IEEE 1,100 50000 1.3e-12 8.4e-15 IEEE 1,10000 50000 3.0e-12 4.8e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double invfdistribution(int a, int b, double y);
fft
unitfftc1d
function/************************************************************************* 1-dimensional complex FFT. Array size N may be an arbitrary number (composite or prime). Composite N's are handled with a cache-oblivious variation of the Cooley-Tukey algorithm. Small prime factors are transformed using hard coded codelets (similar to FFTW codelets, but without low-level optimization), large prime factors are handled with Bluestein's algorithm. The fastest transforms are for smooth N's (prime factors 2, 3 and 5 only), with powers of 2 being fastest of all. When N has prime factors larger than these, but orders of magnitude smaller than N, computations will be about 4 times slower than for nearby highly composite N's. When N itself is prime, speed will be about 6 times lower. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - complex function to be transformed N - problem size OUTPUT PARAMETERS A - DFT of the input array, array[0..N-1] A_out[j] = SUM(A_in[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1) -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/void fftc1d(ap::complex_1d_array& a, int n);
fftc1dinv
function/************************************************************************* 1-dimensional complex inverse FFT. Array size N may be an arbitrary number (composite or prime). Algorithm has O(N*logN) complexity for any N (composite or prime). See FFTC1D() description for more information about algorithm performance. INPUT PARAMETERS A - array[0..N-1] - complex array to be transformed N - problem size OUTPUT PARAMETERS A - inverse DFT of the input array, array[0..N-1] A_out[j] = SUM(A_in[k]/N*exp(+2*pi*sqrt(-1)*j*k/N), k = 0..N-1) -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/void fftc1dinv(ap::complex_1d_array& a, int n);
fftr1d
function/************************************************************************* 1-dimensional real FFT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS F - DFT of the input array, array[0..N-1] F[j] = SUM(A[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1) NOTE: F[] satisfies symmetry property F[k] = conj(F[N-k]), so just one half of the array is usually needed. But for convenience the subroutine returns the full complex array (with frequencies above N/2), so its result may be used by other FFT-related subroutines. -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/void fftr1d(const ap::real_1d_array& a, int n, ap::complex_1d_array& f);
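A short round-trip sketch, not from the manual (the header name "fft.h" is an assumption), combining FFTR1D() with FFTR1DInv(), which is documented further below:
#include "fft.h"
void fftr1d_example()
{
    const int n = 8;
    ap::real_1d_array a, b;
    ap::complex_1d_array f;
    a.setbounds(0, n-1);
    for(int i = 0; i < n; i++)
        a(i) = i;              // arbitrary real signal
    fftr1d(a, n, f);           // F[k] = conj(F[N-k]) symmetry holds
    fftr1dinv(f, n, b);        // b reproduces a up to rounding error
}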
fftr1dinternaleven
function/************************************************************************* Internal subroutine. Never call it directly! -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/void fftr1dinternaleven(ap::real_1d_array& a, int n, ap::real_1d_array& buf, ftplan& plan);
fftr1dinv
function/************************************************************************* 1-dimensional real inverse FFT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS F - array[0..floor(N/2)] - frequencies from forward real FFT N - problem size OUTPUT PARAMETERS A - inverse DFT of the input array, array[0..N-1] NOTE: F[] should satisfy symmetry property F[k] = conj(F[N-k]), so just one half of the frequencies array is needed - elements from 0 to floor(N/2). F[0] is ALWAYS real. If N is even F[floor(N/2)] is real too. If N is odd, then F[floor(N/2)] has no special properties. Relying on properties noted above, FFTR1DInv subroutine uses only elements from 0th to floor(N/2)-th. It ignores imaginary part of F[0], and in case N is even it ignores imaginary part of F[floor(N/2)] too. So you can pass either a frequencies array with N elements or a reduced array with roughly N/2 elements - subroutine will successfully transform both. -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/void fftr1dinv(const ap::complex_1d_array& f, int n, ap::real_1d_array& a);
fftr1dinvinternaleven
function/************************************************************************* Internal subroutine. Never call it directly! -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/void fftr1dinvinternaleven(ap::real_1d_array& a, int n, ap::real_1d_array& buf, ftplan& plan);
fht
unitfhtr1d
function/************************************************************************* 1-dimensional Fast Hartley Transform. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS A - FHT of the input array, array[0..N-1], A_out[k] = sum(A_in[j]*(cos(2*pi*j*k/N)+sin(2*pi*j*k/N)), j=0..N-1) -- ALGLIB -- Copyright 04.06.2009 by Bochkanov Sergey *************************************************************************/void fhtr1d(ap::real_1d_array& a, int n);
fhtr1dinv
function/************************************************************************* 1-dimensional inverse FHT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real array to be transformed N - problem size OUTPUT PARAMETERS A - inverse FHT of the input array, array[0..N-1] -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/void fhtr1dinv(ap::real_1d_array& a, int n);
fresnel
unitfresnelintegral
function/************************************************************************* Fresnel integral Evaluates the Fresnel integrals C(x) = integral(cos(pi/2*t^2) dt, t=0..x) and S(x) = integral(sin(pi/2*t^2) dt, t=0..x). The integrals are evaluated by a power series for x < 1. For x >= 1 auxiliary functions f(x) and g(x) are employed such that C(x) = 0.5 + f(x)*sin(pi/2*x^2) - g(x)*cos(pi/2*x^2), S(x) = 0.5 - f(x)*cos(pi/2*x^2) - g(x)*sin(pi/2*x^2). ACCURACY: Relative error. Arithmetic function domain # trials peak rms IEEE S(x) 0, 10 10000 2.0e-15 3.2e-16 IEEE C(x) 0, 10 10000 1.8e-15 3.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/void fresnelintegral(double x, double& c, double& s);
gammafunc
unitgamma
function/************************************************************************* Gamma function Input parameters: X - argument Domain: 0 < X < 171.6 -170 < X < 0, X is not an integer. Relative error: arithmetic domain # trials peak rms IEEE -170,-33 20000 2.3e-15 3.3e-16 IEEE -33, 33 20000 9.4e-16 2.2e-16 IEEE 33, 171.6 20000 2.3e-15 3.2e-16 Cephes Math Library Release 2.8: June, 2000 Original copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007). *************************************************************************/double gamma(double x);
lngamma
function/************************************************************************* Natural logarithm of gamma function Input parameters: X - argument Result: logarithm of the absolute value of the Gamma(X). Output parameters: SgnGam - sign(Gamma(X)) Domain: 0 < X < 2.55e305 -2.55e305 < X < 0, X is not an integer. ACCURACY: arithmetic domain # trials peak rms IEEE 0, 3 28000 5.4e-16 1.1e-16 IEEE 2.718, 2.556e305 40000 3.5e-16 8.3e-17 The error criterion was relative when the function magnitude was greater than one but absolute when it was less than one. The following test used the relative error criterion, though at certain points the relative error could be much higher than indicated. IEEE -200, -4 10000 4.8e-16 1.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007). *************************************************************************/double lngamma(double x, double& sgngam);
gkq
unitgkqgenerategaussjacobi
function/************************************************************************* Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Jacobi quadrature on [-1,1] with weight function W(x)=Power(1-x,Alpha)*Power(1+x,Beta). INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. Alpha - power-law coefficient, Alpha>-1 Beta - power-law coefficient, Beta>-1 OUTPUT PARAMETERS: Info - error code: * -5 no real and positive Gauss-Kronrod formula can be created for such a weight function with a given number of nodes. * -4 an error was detected when calculating weights/nodes. Alpha or Beta are too close to -1 to obtain weights/nodes with high enough accuracy, or, may be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK * +2 OK, but quadrature rule has exterior nodes, x[0]<-1 or x[n-1]>+1 X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gkqgenerategaussjacobi(int n, double alpha, double beta, int& info, ap::real_1d_array& x, ap::real_1d_array& wkronrod, ap::real_1d_array& wgauss);
gkqgenerategausslegendre
function/************************************************************************* Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Legendre quadrature with N points. GKQLegendreCalc (calculation) or GKQLegendreTbl (precomputed table) is used depending on machine precision and number of nodes. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gkqgenerategausslegendre(int n, int& info, ap::real_1d_array& x, ap::real_1d_array& wkronrod, ap::real_1d_array& wgauss);
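A hedged sketch, not from the manual (the header name "gkq.h" is an assumption): a 15-point Gauss-Kronrod rule on [-1,1] used to estimate integral(exp(x) dx) together with the embedded Gauss rule, whose weights are interleaved with zeros as documented above:
#include "gkq.h"
#include <math.h>
void gkq_example()
{
    ap::real_1d_array x, wk, wg;
    int info;
    gkqgenerategausslegendre(15, info, x, wk, wg);
    if( info > 0 )
    {
        double ik = 0, ig = 0;
        for(int i = 0; i < 15; i++)
        {
            ik += wk(i)*exp(x(i));
            ig += wg(i)*exp(x(i));
        }
        // ik is the Kronrod estimate; |ik-ig| serves as a cheap error estimate
    }
}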
gkqgeneraterec
function/************************************************************************* Computation of nodes and weights of a Gauss-Kronrod quadrature formula The algorithm generates the N-point Gauss-Kronrod quadrature formula with weight function given by coefficients alpha and beta of a recurrence relation which generates a system of orthogonal polynomials: P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zero moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – alpha coefficients, array[0..floor(3*K/2)]. Beta – beta coefficients, array[0..ceil(3*K/2)]. Beta[0] is not used and may be arbitrary. Beta[I]>0. Mu0 – zeroth moment of the weight function. N – number of nodes of the Gauss-Kronrod quadrature formula, N >= 3, N = 2*K+1. OUTPUT PARAMETERS: Info - error code: * -5 no real and positive Gauss-Kronrod formula can be created for such a weight function with a given number of nodes. * -4 N is too large, task may be ill conditioned - x[i]=x[i+1] found. * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 08.05.2009 by Bochkanov Sergey *************************************************************************/void gkqgeneraterec(ap::real_1d_array alpha, ap::real_1d_array beta, double mu0, int n, int& info, ap::real_1d_array& x, ap::real_1d_array& wkronrod, ap::real_1d_array& wgauss);
gkqlegendrecalc
function/************************************************************************* Returns Gauss and Gauss-Kronrod nodes for quadrature with N points. Reduction to tridiagonal eigenproblem is used. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gkqlegendrecalc(int n, int& info, ap::real_1d_array& x, ap::real_1d_array& wkronrod, ap::real_1d_array& wgauss);
gkqlegendretbl
function/************************************************************************* Returns Gauss and Gauss-Kronrod nodes for quadrature with N points using a pre-calculated table. Nodes/weights were computed with accuracy up to 1.0E-32 (if MPFR version of ALGLIB is used). In standard double precision, accuracy reduces to about 2.0E-16 (depending on your compiler's handling of long floating point constants). INPUT PARAMETERS: N - number of Kronrod nodes. N can be 15, 21, 31, 41, 51, 61. OUTPUT PARAMETERS: X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gkqlegendretbl(int n, ap::real_1d_array& x, ap::real_1d_array& wkronrod, ap::real_1d_array& wgauss, double& eps);
gq
unitgqgenerategausshermite
function/************************************************************************* Returns nodes/weights for Gauss-Hermite quadrature on (-inf,+inf) with weight function W(x)=Exp(-x*x) INPUT PARAMETERS: N - number of nodes, >=1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. May be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gqgenerategausshermite(int n, int& info, ap::real_1d_array& x, ap::real_1d_array& w);
gqgenerategaussjacobi
function/************************************************************************* Returns nodes/weights for Gauss-Jacobi quadrature on [-1,1] with weight function W(x)=Power(1-x,Alpha)*Power(1+x,Beta). INPUT PARAMETERS: N - number of nodes, >=1 Alpha - power-law coefficient, Alpha>-1 Beta - power-law coefficient, Beta>-1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. Alpha or Beta are too close to -1 to obtain weights/nodes with high enough accuracy, or, may be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha/Beta was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gqgenerategaussjacobi(int n, double alpha, double beta, int& info, ap::real_1d_array& x, ap::real_1d_array& w);
gqgenerategausslaguerre
function/************************************************************************* Returns nodes/weights for Gauss-Laguerre quadrature on [0,+inf) with weight function W(x)=Power(x,Alpha)*Exp(-x) INPUT PARAMETERS: N - number of nodes, >=1 Alpha - power-law coefficient, Alpha>-1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. Alpha is too close to -1 to obtain weights/nodes with high enough accuracy or, may be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gqgenerategausslaguerre(int n, double alpha, int& info, ap::real_1d_array& x, ap::real_1d_array& w);
gqgenerategausslegendre
function/************************************************************************* Returns nodes/weights for Gauss-Legendre quadrature on [-1,1] with N nodes. INPUT PARAMETERS: N - number of nodes, >=1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void gqgenerategausslegendre(int n, int& info, ap::real_1d_array& x, ap::real_1d_array& w);
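A minimal sketch, not from the manual (the header name "gq.h" is an assumption): a 5-node Gauss-Legendre rule integrating x^4 over [-1,1], where the exact value is 2/5 and a 5-node rule is exact for polynomials of this degree:
#include "gq.h"
void gq_example()
{
    ap::real_1d_array x, w;
    int info;
    gqgenerategausslegendre(5, info, x, w);
    if( info == 1 )
    {
        double s = 0;
        for(int i = 0; i < 5; i++)
            s += w(i)*x(i)*x(i)*x(i)*x(i);
        // s equals 0.4 up to rounding error
    }
}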
gqgenerategausslobattorec
function/************************************************************************* Computation of nodes and weights for a Gauss-Lobatto quadrature formula The algorithm generates the N-point Gauss-Lobatto quadrature formula with weight function given by coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – array[0..N-2], alpha coefficients Beta – array[0..N-2], beta coefficients. Zero-indexed element is not used, may be arbitrary. Beta[I]>0 Mu0 – zeroth moment of the weighting function. A – left boundary of the integration interval. B – right boundary of the integration interval. N – number of nodes of the quadrature formula, N>=3 (including the left and right boundary nodes). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/void gqgenerategausslobattorec(ap::real_1d_array alpha, ap::real_1d_array beta, double mu0, double a, double b, int n, int& info, ap::real_1d_array& x, ap::real_1d_array& w);
gqgenerategaussradaurec
function/************************************************************************* Computation of nodes and weights for a Gauss-Radau quadrature formula The algorithm generates the N-point Gauss-Radau quadrature formula with weight function given by the coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – array[0..N-2], alpha coefficients. Beta – array[0..N-1], beta coefficients Zero-indexed element is not used. Beta[I]>0 Mu0 – zeroth moment of the weighting function. A – left boundary of the integration interval. N – number of nodes of the quadrature formula, N>=2 (including the left boundary node). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/void gqgenerategaussradaurec(ap::real_1d_array alpha, ap::real_1d_array beta, double mu0, double a, int n, int& info, ap::real_1d_array& x, ap::real_1d_array& w);
gqgeneraterec
function/************************************************************************* Computation of nodes and weights for a Gauss quadrature formula The algorithm generates the N-point Gauss quadrature formula with weight function given by coefficients alpha and beta of a recurrence relation which generates a system of orthogonal polynomials: P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – array[0..N-1], alpha coefficients Beta – array[0..N-1], beta coefficients Zero-indexed element is not used and may be arbitrary. Beta[I]>0. Mu0 – zeroth moment of the weight function. N – number of nodes of the quadrature formula, N>=1 OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/void gqgeneraterec(const ap::real_1d_array& alpha, const ap::real_1d_array& beta, double mu0, int n, int& info, ap::real_1d_array& x, ap::real_1d_array& w);
hermite
unithermitecalculate
function/************************************************************************* Calculation of the value of the Hermite polynomial. Parameters: n - degree, n>=0 x - argument Result: the value of the Hermite polynomial Hn at x *************************************************************************/double hermitecalculate(const int& n, const double& x);
hermitecoefficients
function/************************************************************************* Representation of Hn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void hermitecoefficients(const int& n, ap::real_1d_array& c);
hermitesum
function/************************************************************************* Summation of Hermite polynomials using Clenshaw’s recurrence formula. This routine calculates c[0]*H0(x) + c[1]*H1(x) + ... + c[N]*HN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the Hermite polynomial at x *************************************************************************/double hermitesum(const ap::real_1d_array& c, const int& n, const double& x);
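The three routines above can be cross-checked against each other; the sketch below evaluates a short Hermite expansion with hermitesum() and compares it with a direct term-by-term sum built from hermitecalculate(). The header name "hermite.h" is an assumption based on the unit name.

#include <cstdio>
#include "hermite.h"

int main()
{
    ap::real_1d_array c;
    c.setlength(3);
    c(0) = 1.0; c(1) = 0.5; c(2) = 0.25;
    double x = 0.7;
    double clenshaw = hermitesum(c, 2, x);            // Clenshaw summation of c0*H0+c1*H1+c2*H2
    double direct   = c(0)*hermitecalculate(0, x)     // same sum, term by term
                    + c(1)*hermitecalculate(1, x)
                    + c(2)*hermitecalculate(2, x);
    printf("clenshaw=%.12f direct=%.12f\n", clenshaw, direct);
    return 0;
}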
hqrnd
unithqrndstate
structure/************************************************************************* Portable high quality random number generator state. Initialized with HQRNDRandomize() or HQRNDSeed(). Fields: S1, S2 - seed values V - precomputed value MagicV - 'magic' value used to determine whether State structure was correctly initialized. *************************************************************************/struct hqrndstate { int s1; int s2; double v; int magicv; };
hqrndexponential
function/************************************************************************* Random number generator: exponential distribution State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 11.08.2007 by Bochkanov Sergey *************************************************************************/double hqrndexponential(double lambda, hqrndstate& state);
hqrndnormal
function/************************************************************************* Random number generator: normal numbers This function generates one random number from normal distribution. Its performance is equal to that of HQRNDNormal2() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/double hqrndnormal(hqrndstate& state);
hqrndnormal2
function/************************************************************************* Random number generator: normal numbers This function generates two independent random numbers from normal distribution. Its performance is equal to that of HQRNDNormal() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void hqrndnormal2(hqrndstate& state, double& x1, double& x2);
hqrndrandomize
function/************************************************************************* HQRNDState initialization with random values which come from standard RNG. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void hqrndrandomize(hqrndstate& state);
hqrndseed
function/************************************************************************* HQRNDState initialization with seed values -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void hqrndseed(int s1, int s2, hqrndstate& state);
hqrnduniformi
function/************************************************************************* This function generates a random integer in [0, N). 1. N must be less than HQRNDMax-1. 2. State structure must be initialized with HQRNDRandomize() or HQRNDSeed() -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/int hqrnduniformi(int n, hqrndstate& state);
hqrnduniformr
function/************************************************************************* This function generates a random real number in the open interval (0,1); the interval boundaries are never returned. State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/double hqrnduniformr(hqrndstate& state);
hqrndunit2
function/************************************************************************* Random number generator: random X and Y such that X^2+Y^2=1 State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void hqrndunit2(hqrndstate& state, double& x, double& y);
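Typical use of the hqrnd unit is to initialize the state once and then draw as many variates as needed from it; a minimal sketch follows. The header name "hqrnd.h" is an assumption based on the unit name.

#include <cstdio>
#include "hqrnd.h"

int main()
{
    hqrndstate state;
    hqrndseed(11, 42, state);                  // reproducible; use hqrndrandomize(state) for random seeding
    double u = hqrnduniformr(state);           // uniform in (0,1)
    int    k = hqrnduniformi(100, state);      // integer in [0,100)
    double n1, n2;
    hqrndnormal2(state, n1, n2);               // two independent N(0,1) variates
    double e = hqrndexponential(2.0, state);   // exponential with lambda=2
    printf("u=%f k=%d n1=%f n2=%f e=%f\n", u, k, n1, n2, e);
    return 0;
}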
ibetaf
unitincompletebeta
function/************************************************************************* Incomplete beta integral Returns incomplete beta integral of the arguments, evaluated from zero to x. The function is defined as incbet(a,b,x) = Gamma(a+b)/(Gamma(a)*Gamma(b)) * integral(t^(a-1)*(1-t)^(b-1) dt, 0, x). The domain of definition is 0 <= x <= 1. In this implementation a and b are restricted to positive values. The integral from x to 1 may be obtained by the symmetry relation 1 - incbet( a, b, x ) = incbet( b, a, 1-x ). The integral is evaluated by a continued fraction expansion or, when b*x is small, by a power series. ACCURACY: Tested at uniformly distributed random points (a,b,x) with a and b in "domain" and x between 0 and 1. Relative error arithmetic domain # trials peak rms IEEE 0,5 10000 6.9e-15 4.5e-16 IEEE 0,85 250000 2.2e-13 1.7e-14 IEEE 0,1000 30000 5.3e-12 6.3e-13 IEEE 0,10000 250000 9.3e-11 7.1e-12 IEEE 0,100000 10000 8.7e-10 4.8e-11 Outputs smaller than the IEEE gradual underflow threshold were excluded from these statistics. Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1995, 2000 by Stephen L. Moshier *************************************************************************/double incompletebeta(double a, double b, double x);
invincompletebeta
function/************************************************************************* Inverse of incomplete beta integral Given y, the function finds x such that incbet( a, b, x ) = y. The routine performs interval halving or Newton iterations to find the root of incbet(a,b,x) - y = 0. ACCURACY: Relative error: x a,b arithmetic domain domain # trials peak rms IEEE 0,1 .5,10000 50000 5.8e-12 1.3e-13 IEEE 0,1 .25,100 100000 1.8e-13 3.9e-15 IEEE 0,1 0,5 50000 1.1e-12 5.5e-15 With a and b constrained to half-integer or integer values: IEEE 0,1 .5,10000 50000 5.8e-12 1.1e-13 IEEE 0,1 .5,100 100000 1.7e-14 7.9e-16 With a = .5, b constrained to half-integer or integer values: IEEE 0,1 .5,10000 10000 8.3e-11 1.0e-11 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1996, 2000 by Stephen L. Moshier *************************************************************************/double invincompletebeta(double a, double b, double y);
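Since invincompletebeta() inverts incompletebeta() in its last argument, a simple round-trip check is a convenient way to see both routines in action; a sketch follows, with the header name "ibetaf.h" assumed from the unit name.

#include <cstdio>
#include <cmath>
#include "ibetaf.h"

int main()
{
    double a = 2.5, b = 4.0, x = 0.3;
    double y  = incompletebeta(a, b, x);       // y = incbet(a,b,x)
    double xr = invincompletebeta(a, b, y);    // recover x from y
    printf("x=%.15f recovered=%.15f diff=%.3e\n", x, xr, fabs(x-xr));
    return 0;
}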
idwint
unitidwinterpolant
structure/************************************************************************* IDW interpolant. *************************************************************************/struct idwinterpolant { int n; int nx; int d; double r; int nw; kdtree tree; int modeltype; ap::real_2d_array q; ap::real_1d_array xbuf; ap::integer_1d_array tbuf; ap::real_1d_array rbuf; ap::real_2d_array xybuf; int debugsolverfailures; double debugworstrcond; double debugbestrcond; };
idwbuildmodifiedshepard
function/************************************************************************* IDW interpolant using modified Shepard method for uniform point distributions. INPUT PARAMETERS: XY - X and Y values, array[0..N-1,0..NX]. First NX columns contain X-values, last column contains Y-values. N - number of nodes, N>0. NX - space dimension, NX>=1. D - nodal function type, either: * 0 constant model. For demonstration purposes only; the least accurate model. * 1 linear model, least squares fitting. Simple model for datasets too small for quadratic models. * 2 quadratic model, least squares fitting. Best model available (if your dataset is large enough). * -1 "fast" linear model, use with caution! It is significantly faster than linear/quadratic and better than the constant model, but it is less robust (especially in the presence of noise). NQ - number of points used to calculate nodal functions (ignored for constant models). NQ should be LARGER than: * max(1.5*(1+NX),2^NX+1) for linear model, * max(3/4*(NX+2)*(NX+1),2^NX+1) for quadratic model. Values less than this threshold will be silently increased. NW - number of points used to calculate weights and to interpolate. Required: >=2^NX+1, values less than this threshold will be silently increased. Recommended value: about 2*NQ OUTPUT PARAMETERS: Z - IDW interpolant. NOTES: * best results are obtained with quadratic models, worst - with constant models * when N is large, NQ and NW must be significantly smaller than N both to obtain optimal performance and to obtain optimal accuracy. In 2 or 3-dimensional tasks NQ=15 and NW=25 are good values to start with. * NQ and NW may be greater than N. In such cases they will be automatically decreased. * this subroutine always succeeds (as long as correct parameters are passed). * see 'Multivariate Interpolation of Large Sets of Scattered Data' by Robert J. Renka for more information on this algorithm. * this subroutine assumes that the point distribution is uniform at small scales. If it isn't (for example, points are concentrated along "lines", but the distribution of "lines" is uniform at the larger scale), then you should use IDWBuildModifiedShepardR() -- ALGLIB PROJECT -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/void idwbuildmodifiedshepard(const ap::real_2d_array& xy, int n, int nx, int d, int nq, int nw, idwinterpolant& z);
idwbuildmodifiedshepardr
function/************************************************************************* IDW interpolant using modified Shepard method for non-uniform datasets. This type of model uses constant nodal functions and interpolates using all nodes which are closer than user-specified radius R. It may be used when the point distribution is non-uniform at small scales but becomes uniform at distances as large as R. INPUT PARAMETERS: XY - X and Y values, array[0..N-1,0..NX]. First NX columns contain X-values, last column contains Y-values. N - number of nodes, N>0. NX - space dimension, NX>=1. R - radius, R>0 OUTPUT PARAMETERS: Z - IDW interpolant. NOTES: * if there are fewer than IDWKMin points within the R-ball, the algorithm selects the IDWKMin closest ones, so that continuity properties of the interpolant are preserved even far from the points. -- ALGLIB PROJECT -- Copyright 11.04.2010 by Bochkanov Sergey *************************************************************************/void idwbuildmodifiedshepardr(const ap::real_2d_array& xy, int n, int nx, double r, idwinterpolant& z);
idwbuildnoisy
function/************************************************************************* IDW model for noisy data. This subroutine may be used to handle noisy data, i.e. data with noise in OUTPUT values. It differs from IDWBuildModifiedShepard() in the following aspects: * nodal functions are not constrained to pass through nodes: Qi(xi)<>yi, i.e. we have fitting instead of interpolation. * weights which are used during least squares fitting stage are all equal to 1.0 (independently of distance) * "fast"-linear or constant nodal functions are not supported (either not robust enough or too rigid) This problem requires far more complex tuning than interpolation problems. Below you can find some recommendations regarding this problem: * focus on tuning NQ; it controls noise reduction. As for NW, you can just make it equal to 2*NQ. * you can use cross-validation to determine optimal NQ. * optimal NQ is a result of a complex tradeoff between noise level (more noise = larger NQ required) and underlying function complexity (given fixed N, larger NQ means smoothing of complex features in the data). For example, NQ=N will reduce noise to the minimum level possible, but you will end up with just a constant/linear/quadratic (depending on D) least squares model for the whole dataset. INPUT PARAMETERS: XY - X and Y values, array[0..N-1,0..NX]. First NX columns contain X-values, last column contains Y-values. N - number of nodes, N>0. NX - space dimension, NX>=1. D - nodal function degree, either: * 1 linear model, least squares fitting. Simple model for datasets too small for quadratic models (or for very noisy problems). * 2 quadratic model, least squares fitting. Best model available (if your dataset is large enough). NQ - number of points used to calculate nodal functions. NQ should be significantly larger than 1.5 times the number of coefficients in a nodal function to overcome effects of noise: * larger than 1.5*(1+NX) for linear model, * larger than 3/4*(NX+2)*(NX+1) for quadratic model. Values less than this threshold will be silently increased. NW - number of points used to calculate weights and to interpolate. Required: >=2^NX+1, values less than this threshold will be silently increased. Recommended value: about 2*NQ or larger OUTPUT PARAMETERS: Z - IDW interpolant. NOTES: * best results are obtained with quadratic models; linear models are not recommended unless you are sure that this is what you want * this subroutine always succeeds (as long as correct parameters are passed). * see 'Multivariate Interpolation of Large Sets of Scattered Data' by Robert J. Renka for more information on this algorithm. -- ALGLIB PROJECT -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/void idwbuildnoisy(const ap::real_2d_array& xy, int n, int nx, int d, int nq, int nw, idwinterpolant& z);
idwcalc
function/************************************************************************* IDW interpolation INPUT PARAMETERS: Z - IDW interpolant built with one of model building subroutines. X - array[0..NX-1], interpolation point Result: IDW interpolant Z(X) -- ALGLIB -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/double idwcalc(idwinterpolant& z, const ap::real_1d_array& x);
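To tie the building and evaluation routines together, here is a minimal sketch that fits a 2-dimensional modified Shepard model with linear nodal functions to a small grid and queries it at one point; the dataset, the NQ/NW values and the header name "idwint.h" are illustrative assumptions.

#include <cstdio>
#include "idwint.h"

int main()
{
    // 3x3 grid of samples of f(x0,x1) = x0 + x1; the last column stores the function value
    const int nx = 2, n = 9;
    ap::real_2d_array xy;
    xy.setlength(n, nx+1);
    int row = 0;
    for(int i = 0; i < 3; i++)
        for(int j = 0; j < 3; j++, row++)
        {
            xy(row,0) = i;
            xy(row,1) = j;
            xy(row,2) = i + j;
        }

    idwinterpolant z;
    idwbuildmodifiedshepard(xy, n, nx, 1, 6, 12, z);   // D=1: linear nodal functions

    ap::real_1d_array p;
    p.setlength(nx);
    p(0) = 0.5; p(1) = 1.5;
    printf("f(0.5,1.5) is approximately %f (exact value 2.0)\n", idwcalc(z, p));
    return 0;
}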
igammaf
unitincompletegamma
function/************************************************************************* Incomplete gamma integral The function is defined by igam(a,x) = 1/Gamma(a) * integral(e^(-t)*t^(a-1) dt, 0, x). In this implementation both arguments must be positive. The integral is evaluated by either a power series or continued fraction expansion, depending on the relative values of a and x. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 200000 3.6e-14 2.9e-15 IEEE 0,100 300000 9.9e-14 1.5e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/double incompletegamma(double a, double x);
incompletegammac
function/************************************************************************* Complemented incomplete gamma integral The function is defined by igamc(a,x) = 1 - igam(a,x) = 1/Gamma(a) * integral(e^(-t)*t^(a-1) dt, x, infinity). In this implementation both arguments must be positive. The integral is evaluated by either a power series or continued fraction expansion, depending on the relative values of a and x. ACCURACY: Tested at random a, x. a x Relative error: arithmetic domain domain # trials peak rms IEEE 0.5,100 0,100 200000 1.9e-14 1.7e-15 IEEE 0.01,0.5 0,100 200000 1.4e-13 1.6e-15 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/double incompletegammac(double a, double x);
invincompletegammac
function/************************************************************************* Inverse of complemented incomplete gamma integral Given p, the function finds x such that igamc( a, x ) = p. Starting with the approximate value x = a*t^3, where t = 1 - d - ndtri(p)*sqrt(d) and d = 1/(9a), the routine performs up to 10 Newton iterations to find the root of igamc(a,x) - p = 0. ACCURACY: Tested at random a, p in the intervals indicated. a p Relative error: arithmetic domain domain # trials peak rms IEEE 0.5,100 0,0.5 100000 1.0e-14 1.7e-15 IEEE 0.01,0.5 0,0.5 100000 9.0e-14 3.4e-15 IEEE 0.5,10000 0,0.5 20000 2.3e-13 3.8e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double invincompletegammac(double a, double y0);
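The identity igam(a,x) + igamc(a,x) = 1 and the inverse relation of invincompletegammac() give two quick consistency checks; the sketch below relies only on the signatures documented above, while the header name "igammaf.h" is an assumption based on the unit name.

#include <cstdio>
#include "igammaf.h"

int main()
{
    double a = 3.0, x = 2.5;
    double p = incompletegamma(a, x);
    double q = incompletegammac(a, x);
    printf("igam+igamc = %.15f (should be 1)\n", p+q);
    printf("x recovered from igamc: %.15f (original %.15f)\n", invincompletegammac(a, q), x);
    return 0;
}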
inverseupdate
unitrmatrixinvupdatecolumn
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a vector to a column of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdColumn - the column of A whose vector U was added. 0 <= UpdColumn <= N-1 U - the vector to be added to a column. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void rmatrixinvupdatecolumn(ap::real_2d_array& inva, int n, int updcolumn, const ap::real_1d_array& u);
rmatrixinvupdaterow
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a vector to a row of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdRow - the row of A whose vector V was added. 0 <= Row <= N-1 V - the vector to be added to a row. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void rmatrixinvupdaterow(ap::real_2d_array& inva, int n, int updrow, const ap::real_1d_array& v);
rmatrixinvupdatesimple
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a number to an element of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdRow - row where the element to be updated is stored. UpdColumn - column where the element to be updated is stored. UpdVal - a number to be added to the element. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void rmatrixinvupdatesimple(ap::real_2d_array& inva, int n, int updrow, int updcolumn, double updval);
rmatrixinvupdateuv
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm computes the inverse of matrix A+u*v’ by using the given matrix A^-1 and the vectors u and v. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. U - the vector modifying the matrix. Array whose index ranges within [0..N-1]. V - the vector modifying the matrix. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of matrix A + u*v'. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void rmatrixinvupdateuv(ap::real_2d_array& inva, int n, const ap::real_1d_array& u, const ap::real_1d_array& v);
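A minimal sketch of the rank-one update: starting from a diagonal matrix whose inverse is known in closed form, rmatrixinvupdateuv() refreshes the inverse after A is changed to A + u*v'. The concrete numbers and the header name "inverseupdate.h" are assumptions used only for illustration.

#include <cstdio>
#include "inverseupdate.h"

int main()
{
    const int n = 2;
    ap::real_2d_array inva;
    inva.setlength(n, n);
    // A = diag(2,4), therefore A^-1 = diag(0.5,0.25)
    inva(0,0) = 0.5;  inva(0,1) = 0.0;
    inva(1,0) = 0.0;  inva(1,1) = 0.25;

    ap::real_1d_array u, v;
    u.setlength(n);
    v.setlength(n);
    u(0) = 1.0; u(1) = 0.0;          // with this u, the update adds v' to the first row of A
    v(0) = 0.5; v(1) = 1.0;

    rmatrixinvupdateuv(inva, n, u, v);   // inva now holds (A + u*v')^-1
    printf("(A+u*v')^-1 = [%f %f; %f %f]\n", inva(0,0), inva(0,1), inva(1,0), inva(1,1));
    return 0;
}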
jacobianelliptic
unitjacobianellipticfunctions
function/************************************************************************* Jacobian Elliptic Functions Evaluates the Jacobian elliptic functions sn(u|m), cn(u|m), and dn(u|m) of parameter m between 0 and 1, and real argument u. These functions are periodic, with quarter-period on the real axis equal to the complete elliptic integral ellpk(1.0-m). Relation to incomplete elliptic integral: If u = ellik(phi,m), then sn(u|m) = sin(phi), and cn(u|m) = cos(phi). Phi is called the amplitude of u. Computation is by means of the arithmetic-geometric mean algorithm, except when m is within 1e-9 of 0 or 1. In the latter case with m close to 1, the approximation applies only for phi < pi/2. ACCURACY: Tested at random points with u between 0 and 10, m between 0 and 1. Absolute error (* = relative error): arithmetic function # trials peak rms IEEE phi 10000 9.2e-16* 1.4e-16* IEEE sn 50000 4.1e-15 4.6e-16 IEEE cn 40000 3.6e-15 4.4e-16 IEEE dn 10000 1.3e-12 1.8e-14 Peak error observed in consistency check using addition theorem for sn(u+v) was 4e-16 (absolute). Also tested by the above relation to the incomplete elliptic integral. Accuracy deteriorates when u is large. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/void jacobianellipticfunctions(double u, double m, double& sn, double& cn, double& dn, double& ph);
jarquebera
unitjarqueberatest
function/************************************************************************* Jarque-Bera test This test checks the null hypothesis that a given sample X is drawn from a normal distribution. Requirements: * the number of elements in the sample is not less than 5. Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of the sample. N>=5 Output parameters: P - p-value of the test. If P is less than the given significance level, the null hypothesis (normality of the sample) is rejected. Accuracy of the p-value approximation used (5<=N<=1951): p-value relative error (5<=N<=1951) [1, 0.1] < 1% [0.1, 0.01] < 2% [0.01, 0.001] < 6% [0.001, 0] wasn't measured For N>1951 accuracy wasn't measured but it shouldn't be sharply different from the table values. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void jarqueberatest(const ap::real_1d_array& x, int n, double& p);
kmeans
unitkmeansgenerate
function/************************************************************************* k-means++ clustering INPUT PARAMETERS: XY - dataset, array [0..NPoints-1,0..NVars-1]. NPoints - dataset size, NPoints>=K NVars - number of variables, NVars>=1 K - desired number of clusters, K>=1 Restarts - number of restarts, Restarts>=1 OUTPUT PARAMETERS: Info - return code: * -3, if task is degenerate (number of distinct points is less than K) * -1, if incorrect NPoints/NVars/K/Restarts was passed * 1, if subroutine finished successfully C - array[0..NVars-1,0..K-1], matrix whose columns store the cluster centers XYC - array[0..NPoints-1], index of the cluster each dataset point belongs to. -- ALGLIB -- Copyright 21.03.2009 by Bochkanov Sergey *************************************************************************/void kmeansgenerate(const ap::real_2d_array& xy, int npoints, int nvars, int k, int restarts, int& info, ap::real_2d_array& c, ap::integer_1d_array& xyc);
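The sketch below clusters six 2-dimensional points into K=2 clusters with kmeansgenerate() and prints the cluster index assigned to each point; the dataset and the header name "kmeans.h" are illustrative assumptions.

#include <cstdio>
#include "kmeans.h"

int main()
{
    const int npoints = 6, nvars = 2, k = 2;
    double data[6][2] = { {0.0,0.0}, {0.2,0.1}, {0.1,0.3},
                          {5.0,5.0}, {5.2,4.9}, {4.8,5.1} };
    ap::real_2d_array xy;
    xy.setlength(npoints, nvars);
    for(int i = 0; i < npoints; i++)
        for(int j = 0; j < nvars; j++)
            xy(i,j) = data[i][j];

    int info;
    ap::real_2d_array c;          // cluster centers, one center per column
    ap::integer_1d_array xyc;     // cluster index of each point
    kmeansgenerate(xy, npoints, nvars, k, 5, info, c, xyc);
    if( info==1 )
        for(int i = 0; i < npoints; i++)
            printf("point %d -> cluster %d\n", i, (int)xyc(i));
    return 0;
}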
laguerre
unitlaguerrecalculate
function/************************************************************************* Calculation of the value of the Laguerre polynomial. Parameters: n - degree, n>=0 x - argument Result: the value of the Laguerre polynomial Ln at x *************************************************************************/double laguerrecalculate(const int& n, const double& x);
laguerrecoefficients
function/************************************************************************* Representation of Ln as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void laguerrecoefficients(const int& n, ap::real_1d_array& c);
laguerresum
function/************************************************************************* Summation of Laguerre polynomials using Clenshaw’s recurrence formula. This routine calculates c[0]*L0(x) + c[1]*L1(x) + ... + c[N]*LN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the Laguerre polynomial at x *************************************************************************/double laguerresum(const ap::real_1d_array& c, const int& n, const double& x);
lda
unitfisherlda
function/************************************************************************* Multiclass Fisher LDA Subroutine finds coefficients of a linear combination which optimally separates the training set into classes. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars]. First NVars columns store values of independent variables, next column stores the number of the class (from 0 to NClasses-1) to which the dataset element belongs. Fractional values are rounded to the nearest integer. NPoints - training set size, NPoints>=0 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -4, if internal EVD subroutine hasn't converged * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, NVars<1, NClasses<2) * 1, if task has been solved * 2, if there was multicollinearity in the training set, but the task has been solved. W - linear combination coefficients, array[0..NVars-1] -- ALGLIB -- Copyright 31.05.2008 by Bochkanov Sergey *************************************************************************/void fisherlda(const ap::real_2d_array& xy, int npoints, int nvars, int nclasses, int& info, ap::real_1d_array& w);
fisherldan
function/************************************************************************* N-dimensional multiclass Fisher LDA Subroutine finds coefficients of linear combinations which optimally separate the training set into classes. It returns an N-dimensional basis whose vectors are sorted by quality of training set separation (in descending order). INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars]. First NVars columns store values of independent variables, next column stores the number of the class (from 0 to NClasses-1) to which the dataset element belongs. Fractional values are rounded to the nearest integer. NPoints - training set size, NPoints>=0 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -4, if internal EVD subroutine hasn't converged * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, NVars<1, NClasses<2) * 1, if task has been solved * 2, if there was multicollinearity in the training set, but the task has been solved. W - basis, array[0..NVars-1,0..NVars-1]; columns of the matrix store basis vectors, sorted by quality of training set separation (in descending order) -- ALGLIB -- Copyright 31.05.2008 by Bochkanov Sergey *************************************************************************/void fisherldan(const ap::real_2d_array& xy, int npoints, int nvars, int nclasses, int& info, ap::real_2d_array& w);
ldlt
unitsmatrixldlt
function/************************************************************************* LDLT decomposition of a symmetric matrix The algorithm represents a symmetric matrix (which is not necessarily positive definite) as A=L*D*L' or A=U*D*U', where D is a block-diagonal matrix with blocks 1x1 or 2x2, and matrix L (matrix U) is a product of lower (upper) triangular matrices with unit diagonal and permutation matrices. Input parameters: A - factorized matrix, array with elements [0..N-1, 0..N-1]. If IsUpper = True, the upper triangle contains the elements of symmetric matrix A and the lower triangle is not used; if IsUpper = False, the lower triangle is used instead. N - size of factorized matrix. IsUpper - parameter which shows a method of matrix definition (lower or upper triangle). Output parameters: A - matrices D and U, if IsUpper = True, or L, if IsUpper = False, in compact form, replacing the upper (lower) triangle of matrix A. In that case, the elements under (over) the main diagonal are not used nor modified. Pivots - tables of performed permutations (see below). If IsUpper = True, then A = U*D*U', U = P(n)*U(n)*...*P(k)*U(k), where P(k) is a permutation matrix, U(k) is an upper triangular matrix with unit main diagonal, and k decreases from n with step s which is equal to 1 or 2 (according to the size of the blocks of matrix D). U(k) is the block matrix ( I v 0 ; 0 I 0 ; 0 0 I ) whose diagonal blocks have sizes k-s+1, s and n-k-1 (along both rows and columns); the vector v occupies the upper part of the second block column. If Pivots[k]>=0, then s=1, P(k) is a permutation of rows k and Pivots[k], the vector v forming matrix U(k) is stored in elements A(0:k-1,k), and D(k) replaces A(k,k). If Pivots[k]=Pivots[k-1]<0 then s=2, P(k) is a permutation of rows k-1 and N+Pivots[k-1], the vector v forming matrix U(k) is stored in elements A(0:k-1,k:k+1), and the upper triangle of block D(k) is stored in A(k,k), A(k,k+1) and A(k+1,k+1). If IsUpper = False, then A = L*D*L', L=P(0)*L(0)*...*P(k)*L(k), where P(k) is a permutation matrix, L(k) is a lower triangular matrix with unit main diagonal, and k increases from 0 with step s which is equal to 1 or 2 (according to the size of the blocks of matrix D). L(k) is the block matrix ( I 0 0 ; 0 I 0 ; 0 v I ) whose diagonal blocks have sizes k-1, s and n-k-s+1 (along both rows and columns); the vector v occupies the lower part of the second block column. If Pivots[k]>=0 then s=1, P(k) is a permutation of rows k and Pivots[k], the vector v forming matrix L(k) is stored in elements A(k+1:n-1,k), and D(k) replaces A(k,k). If Pivots[k]=Pivots[k+1]<0 then s=2, P(k) is a permutation of rows k+1 and N+Pivots[k+1], the vector v forming matrix L(k) is stored in elements A(k+2:n-1,k:k+1), and the lower triangle of block D(k) is stored in A(k,k), A(k+1,k) and A(k+1,k+1). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University June 30, 1999 *************************************************************************/void smatrixldlt(ap::real_2d_array& a, int n, bool isupper, ap::integer_1d_array& pivots);
legendre
unitlegendrecalculate
function/************************************************************************* Calculation of the value of the Legendre polynomial Pn. Parameters: n - degree, n>=0 x - argument Result: the value of the Legendre polynomial Pn at x *************************************************************************/double legendrecalculate(const int& n, const double& x);
legendrecoefficients
function/************************************************************************* Representation of Pn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void legendrecoefficients(const int& n, ap::real_1d_array& c);
legendresum
function/************************************************************************* Summation of Legendre polynomials using Clenshaw’s recurrence formula. This routine calculates c[0]*P0(x) + c[1]*P1(x) + ... + c[N]*PN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the Legendre polynomial at x *************************************************************************/double legendresum(const ap::real_1d_array& c, const int& n, const double& x);
linreg
unitlrreport
structure/************************************************************************* LRReport structure contains additional information about the linear model: * C - covariance matrix, array[0..NVars,0..NVars]. C[i,j] = Cov(A[i],A[j]) * RMSError - root mean square error on a training set * AvgError - average error on a training set * AvgRelError - average relative error on a training set (excluding observations with zero function value). * CVRMSError - leave-one-out cross-validation estimate of generalization error. Calculated using fast algorithm with O(NVars*NPoints) complexity. * CVAvgError - cross-validation estimate of average error * CVAvgRelError - cross-validation estimate of average relative error All other fields of the structure are intended for internal use and should not be used outside ALGLIB. *************************************************************************/struct lrreport { ap::real_2d_array c; double rmserror; double avgerror; double avgrelerror; double cvrmserror; double cvavgerror; double cvavgrelerror; int ncvdefects; ap::integer_1d_array cvdefects; };
lravgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double lravgerror(const linearmodel& lm, const ap::real_2d_array& xy, int npoints);
lravgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average relative error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double lravgrelerror(const linearmodel& lm, const ap::real_2d_array& xy, int npoints);
lrbuild
function/************************************************************************* Linear regression Subroutine builds the model: Y = A(0)*X[0] + ... + A(N-1)*X[N-1] + A(N) and returns the model in ALGLIB format, the covariance matrix, training set errors (rms, average, average relative) and a leave-one-out cross-validation estimate of the generalization error. The CV estimate is calculated using a fast algorithm with O(NPoints*NVars) complexity. When the covariance matrix is calculated, standard deviations of function values are assumed to be equal to the RMS error on the training set. INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable NPoints - training set size, NPoints>NVars+1 NVars - number of independent variables OUTPUT PARAMETERS: Info - return code: * -255, in case of unknown internal error * -4, if internal SVD subroutine hasn't converged * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1). * 1, if subroutine finished successfully LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. AR - additional results -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/void lrbuild(const ap::real_2d_array& xy, int npoints, int nvars, int& info, linearmodel& lm, lrreport& ar);
lrbuilds
function/************************************************************************* Linear regression Variant of LRBuild which uses a vector of standard deviations (errors in function values). INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable S - standard deviations (errors in function values), array[0..NPoints-1], S[i]>0. NPoints - training set size, NPoints>NVars+1 NVars - number of independent variables OUTPUT PARAMETERS: Info - return code: * -255, in case of unknown internal error * -4, if internal SVD subroutine hasn't converged * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1). * -2, if S[I]<=0 * 1, if subroutine finished successfully LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. AR - additional results -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/void lrbuilds(const ap::real_2d_array& xy, const ap::real_1d_array& s, int npoints, int nvars, int& info, linearmodel& lm, lrreport& ar);
lrbuildz
function/************************************************************************* Like LRBuild but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/void lrbuildz(const ap::real_2d_array& xy, int npoints, int nvars, int& info, linearmodel& lm, lrreport& ar);
lrbuildzs
function/************************************************************************* Like LRBuildS, but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/void lrbuildzs(const ap::real_2d_array& xy, const ap::real_1d_array& s, int npoints, int nvars, int& info, linearmodel& lm, lrreport& ar);
lrcopy
function/************************************************************************* Copying of LinearModel structure INPUT PARAMETERS: LM1 - original OUTPUT PARAMETERS: LM2 - copy -- ALGLIB -- Copyright 15.03.2009 by Bochkanov Sergey *************************************************************************/void lrcopy(const linearmodel& lm1, linearmodel& lm2);
lrpack
function/************************************************************************* "Packs" coefficients and creates linear model in ALGLIB format (LRUnpack reversed). INPUT PARAMETERS: V - coefficients, array[0..NVars] NVars - number of independent variables OUTPUT PARAMETERS: LM - linear model. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/void lrpack(const ap::real_1d_array& v, int nvars, linearmodel& lm);
lrprocess
function/************************************************************************* Processing INPUT PARAMETERS: LM - linear model X - input vector, array[0..NVars-1]. Result: value of the linear model (regression estimate) -- ALGLIB -- Copyright 03.09.2008 by Bochkanov Sergey *************************************************************************/double lrprocess(const linearmodel& lm, const ap::real_1d_array& x);
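The following sketch connects lrbuild() (documented above) with lrprocess(): it fits a one-variable linear model to noise-free data and evaluates it at a new point. The dataset and the header name "linreg.h" are assumptions for illustration only.

#include <cstdio>
#include "linreg.h"

int main()
{
    const int npoints = 6, nvars = 1;
    ap::real_2d_array xy;
    xy.setlength(npoints, nvars+1);       // last column is the dependent variable
    for(int i = 0; i < npoints; i++)
    {
        xy(i,0) = i;                      // x
        xy(i,1) = 3.0*i + 1.0;            // y = 3x + 1, noise-free for clarity
    }

    int info;
    linearmodel lm;
    lrreport rep;
    lrbuild(xy, npoints, nvars, info, lm, rep);
    if( info==1 )
    {
        ap::real_1d_array x;
        x.setlength(nvars);
        x(0) = 10.0;
        printf("prediction at x=10: %f (training RMS error %e)\n", lrprocess(lm, x), rep.rmserror);
    }
    return 0;
}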
lrrmserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: root mean square error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double lrrmserror(const linearmodel& lm, const ap::real_2d_array& xy, int npoints);
lrserialize
function/************************************************************************* Serialization of LinearModel structure INPUT PARAMETERS: LM - original OUTPUT PARAMETERS: RA - array of real numbers which stores model, array[0..RLen-1] RLen - RA length -- ALGLIB -- Copyright 15.03.2009 by Bochkanov Sergey *************************************************************************/void lrserialize(const linearmodel& lm, ap::real_1d_array& ra, int& rlen);
lrunpack
function/************************************************************************* Unpacks coefficients of linear model. INPUT PARAMETERS: LM - linear model in ALGLIB format OUTPUT PARAMETERS: V - coefficients, array[0..NVars] NVars - number of independent variables (one less than number of coefficients) -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/void lrunpack(const linearmodel& lm, ap::real_1d_array& v, int& nvars);
lrunserialize
function/************************************************************************* Unserialization of LinearModel structure INPUT PARAMETERS: RA - real array which stores the model OUTPUT PARAMETERS: LM - unserialized structure -- ALGLIB -- Copyright 15.03.2009 by Bochkanov Sergey *************************************************************************/void lrunserialize(const ap::real_1d_array& ra, linearmodel& lm);
logit
unitmnlreport
structure/************************************************************************* MNLReport structure contains information about training process: * NGrad - number of gradient calculations * NHess - number of Hessian calculations *************************************************************************/struct mnlreport { int ngrad; int nhess; };
mnlavgce
function/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*ln(2)). -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/double mnlavgce(logitmodel& lm, const ap::real_2d_array& xy, int npoints);
mnlavgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double mnlavgerror(logitmodel& lm, const ap::real_2d_array& xy, int npoints);
mnlavgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average relative error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double mnlavgrelerror(logitmodel& lm, const ap::real_2d_array& xy, int ssize);
mnlclserror
function/************************************************************************* Classification error on test set = MNLRelClsError*NPoints -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/int mnlclserror(logitmodel& lm, const ap::real_2d_array& xy, int npoints);
mnlcopy
function/************************************************************************* Copying of LogitModel structure INPUT PARAMETERS: LM1 - original OUTPUT PARAMETERS: LM2 - copy -- ALGLIB -- Copyright 15.03.2009 by Bochkanov Sergey *************************************************************************/void mnlcopy(const logitmodel& lm1, logitmodel& lm2);
mnlpack
function/************************************************************************* "Packs" coefficients and creates logit model in ALGLIB format (MNLUnpack reversed). INPUT PARAMETERS: A - model (see MNLUnpack) NVars - number of independent variables NClasses - number of classes OUTPUT PARAMETERS: LM - logit model. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void mnlpack(const ap::real_2d_array& a, int nvars, int nclasses, logitmodel& lm);
mnlprocess
function/************************************************************************* Processing INPUT PARAMETERS: LM - logit model, passed by non-constant reference (some fields of the structure are used as temporaries when calculating model output). X - input vector, array[0..NVars-1]. OUTPUT PARAMETERS: Y - result, array[0..NClasses-1] Vector of posterior probabilities for classification task. The subroutine does not allocate memory for this vector; it is the caller's responsibility to allocate it. The array must be at least [0..NClasses-1]. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void mnlprocess(logitmodel& lm, const ap::real_1d_array& x, ap::real_1d_array& y);
mnlrelclserror
function/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/double mnlrelclserror(logitmodel& lm, const ap::real_2d_array& xy, int npoints);
mnlrmserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: root mean square error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double mnlrmserror(logitmodel& lm, const ap::real_2d_array& xy, int npoints);
mnlserialize
function/************************************************************************* Serialization of LogitModel structure INPUT PARAMETERS: LM - original OUTPUT PARAMETERS: RA - array of real numbers which stores model, array[0..RLen-1] RLen - RA length -- ALGLIB -- Copyright 15.03.2009 by Bochkanov Sergey *************************************************************************/void mnlserialize(const logitmodel& lm, ap::real_1d_array& ra, int& rlen);
mnltrainh
function/************************************************************************* This subroutine trains logit model. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars] First NVars columns store values of independent variables, next column stores number of class (from 0 to NClasses-1) which dataset element belongs to. Fractional values are rounded to nearest integer. NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1, NClasses<2). * 1, if task has been solved LM - model built Rep - training report -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void mnltrainh(const ap::real_2d_array& xy, int npoints, int nvars, int nclasses, int& info, logitmodel& lm, mnlreport& rep);
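A minimal sketch of the training/evaluation cycle: mnltrainh() builds a two-class logit model from a tiny one-dimensional dataset and mnlprocess() (documented above) returns posterior probabilities for a new point. The dataset and the header name "logit.h" are illustrative assumptions.

#include <cstdio>
#include "logit.h"

int main()
{
    const int npoints = 6, nvars = 1, nclasses = 2;
    double data[6][2] = { {-2.0,0}, {-1.0,0}, {0.5,0}, {-0.5,1}, {1.0,1}, {2.0,1} };
    ap::real_2d_array xy;
    xy.setlength(npoints, nvars+1);       // last column: class number 0..NClasses-1
    for(int i = 0; i < npoints; i++)
    {
        xy(i,0) = data[i][0];
        xy(i,1) = data[i][1];
    }

    int info;
    logitmodel lm;
    mnlreport rep;
    mnltrainh(xy, npoints, nvars, nclasses, info, lm, rep);
    if( info==1 )
    {
        ap::real_1d_array x, y;
        x.setlength(nvars);
        y.setlength(nclasses);            // caller must allocate the output vector
        x(0) = 0.5;
        mnlprocess(lm, x, y);
        printf("P(class=0)=%f  P(class=1)=%f\n", y(0), y(1));
    }
    return 0;
}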
mnlunpack
function/************************************************************************* Unpacks coefficients of logit model. The logit model has the form: P(class=i) = S(i) / (S(0) + S(1) + ... + S(M-1)) S(i) = Exp(A[i,0]*X[0] + ... + A[i,N-1]*X[N-1] + A[i,N]), for i<M-1 S(M-1) = 1 INPUT PARAMETERS: LM - logit model in ALGLIB format OUTPUT PARAMETERS: A - coefficients, array[0..NClasses-2,0..NVars] NVars - number of independent variables NClasses - number of classes -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void mnlunpack(const logitmodel& lm, ap::real_2d_array& a, int& nvars, int& nclasses);
mnlunserialize
function/************************************************************************* Unserialization of LogitModel structure INPUT PARAMETERS: RA - real array which stores model OUTPUT PARAMETERS: LM - restored model -- ALGLIB -- Copyright 15.03.2009 by Bochkanov Sergey *************************************************************************/void mnlunserialize(const ap::real_1d_array& ra, logitmodel& lm);
lsfit
unitlsfitreport
structure/************************************************************************* Least squares fitting report: TaskRCond reciprocal of task's condition number RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error *************************************************************************/struct lsfitreport { double taskrcond; double rmserror; double avgerror; double avgrelerror; double maxerror; };
lsfitlinear
function/************************************************************************* Linear least squares fitting, without weights. See LSFitLinearW for more information. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitlinear(const ap::real_1d_array& y, const ap::real_2d_array& fmatrix, int n, int m, int& info, ap::real_1d_array& c, lsfitreport& rep);
Examples: lsfit_linear
lsfitlinearc
function/************************************************************************* Constrained linear least squares fitting, without weights. See LSFitLinearWC() for more information. -- ALGLIB -- Copyright 07.09.2009 by Bochkanov Sergey *************************************************************************/void lsfitlinearc(ap::real_1d_array y, const ap::real_2d_array& fmatrix, const ap::real_2d_array& cmatrix, int n, int m, int k, int& info, ap::real_1d_array& c, lsfitreport& rep);
Examples: lsfit_linear
lsfitlinearw
function/************************************************************************* Weighted linear least squares fitting. QR decomposition is used to reduce the task to an MxM one, then a triangular solver or an SVD-based solver is used depending on the condition number of the system. This maximizes speed while retaining decent accuracy. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. W - array[0..N-1] Weights corresponding to function values. Each summand in the sum of squared approximation deviations from the given values is multiplied by the square of the corresponding weight. FMatrix - a table of basis function values, array[0..N-1, 0..M-1]. FMatrix[I, J] - value of J-th basis function in I-th point. N - number of points used. N>=1. M - number of basis functions, M>=1. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -1 incorrect N/M were specified * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * Rep.TaskRCond reciprocal of condition number * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED SEE ALSO LSFitLinear LSFitLinearC LSFitLinearWC -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitlinearw(const ap::real_1d_array& y, const ap::real_1d_array& w, const ap::real_2d_array& fmatrix, int n, int m, int& info, ap::real_1d_array& c, lsfitreport& rep);
Examples: lsfit_linear
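The sketch below fits y = c0 + c1*x by weighted linear least squares: the two columns of FMatrix hold the basis functions {1, x} evaluated at the sample points, and all weights are 1. The data and the header name "lsfit.h" are assumptions for illustration.

#include <cstdio>
#include "lsfit.h"

int main()
{
    const int n = 5, m = 2;
    ap::real_1d_array y, w;
    ap::real_2d_array fmatrix;
    y.setlength(n);
    w.setlength(n);
    fmatrix.setlength(n, m);
    for(int i = 0; i < n; i++)
    {
        double x = i;
        fmatrix(i,0) = 1.0;        // basis function f0(x) = 1
        fmatrix(i,1) = x;          // basis function f1(x) = x
        y(i) = 2.0 + 0.5*x;        // exact line, so the fit should recover c = (2, 0.5)
        w(i) = 1.0;
    }

    int info;
    ap::real_1d_array c;
    lsfitreport rep;
    lsfitlinearw(y, w, fmatrix, n, m, info, c, rep);
    if( info==1 )
        printf("c0=%f c1=%f rms=%e\n", c(0), c(1), rep.rmserror);
    return 0;
}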
lsfitlinearwc
function/************************************************************************* Weighted constrained linear least squares fitting. This is a variation of LSFitLinearW() which searches for min|A*x-b| given that K additional constraints C*x=bc are satisfied. It reduces the original task to a modified one, min|B*y-d| WITHOUT constraints, then LSFitLinearW() is called. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. W - array[0..N-1] Weights corresponding to function values. Each summand in the sum of squared approximation deviations from the given values is multiplied by the square of the corresponding weight. FMatrix - a table of basis function values, array[0..N-1, 0..M-1]. FMatrix[I,J] - value of J-th basis function in I-th point. CMatrix - a table of constraints, array[0..K-1,0..M]. I-th row of CMatrix corresponds to I-th linear constraint: CMatrix[I,0]*C[0] + ... + CMatrix[I,M-1]*C[M-1] = CMatrix[I,M] N - number of points used. N>=1. M - number of basis functions, M>=1. K - number of constraints, 0 <= K < M K=0 corresponds to absence of constraints. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -3 either too many constraints (M or more), degenerate constraints (some constraints are repeated twice) or inconsistent constraints were specified. * -1 incorrect N/M/K were specified * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. SEE ALSO LSFitLinear LSFitLinearC LSFitLinearWC -- ALGLIB -- Copyright 07.09.2009 by Bochkanov Sergey *************************************************************************/void lsfitlinearwc(ap::real_1d_array y, const ap::real_1d_array& w, const ap::real_2d_array& fmatrix, ap::real_2d_array cmatrix, int n, int m, int k, int& info, ap::real_1d_array& c, lsfitreport& rep);
Examples: lsfit_linear
lsfitnonlinearfg
function/************************************************************************* Nonlinear least squares fitting, no individual weights. See LSFitNonlinearWFG for more information. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitnonlinearfg(const ap::real_2d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& c, int n, int m, int k, bool cheapfg, lsfitstate& state);
Examples: lsfit_nonlinear
lsfitnonlinearfgh
function/************************************************************************* Nonlinear least squares fitting using gradient/Hessian without individual weights. See LSFitNonlinearWFGH() for more information. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitnonlinearfgh(const ap::real_2d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& c, int n, int m, int k, lsfitstate& state);
Examples: lsfit_nonlinear2
lsfitnonlineariteration
function/************************************************************************* Nonlinear least squares fitting. Algorithm iteration. Called after initialization of the State structure with a LSFitNonlinearXXX() subroutine. See HTML docs for examples. INPUT PARAMETERS: State - structure which stores algorithm state between subsequent calls and which is used for reverse communication. Must be initialized with LSFitNonlinearXXX() call first. RESULT 1. If subroutine returned False, iterative algorithm has converged. 2. If subroutine returned True, then: * if State.NeedF=True, function value F(X,C) is required * if State.NeedFG=True, function value F(X,C) and gradient dF/dC(X,C) are required * if State.NeedFGH=True, function value F(X,C), gradient dF/dC(X,C) and Hessian are required One and only one of these fields can be set at a time. Function, its gradient and Hessian are calculated at (X,C), where X is stored in State.X[0..M-1] and C is stored in State.C[0..K-1]. Results are stored: * function value - in State.F * gradient - in State.G[0..K-1] * Hessian - in State.H[0..K-1,0..K-1] -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/bool lsfitnonlineariteration(lsfitstate& state);
Examples: lsfit_nonlinear lsfit_nonlinear2
lsfitnonlinearresults
function/************************************************************************* Nonlinear least squares fitting results. Called after LSFitNonlinearIteration() returned False. INPUT PARAMETERS: State - algorithm state (used by LSFitNonlinearIteration). OUTPUT PARAMETERS: Info - completion code: * -1 incorrect parameters were specified * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps were taken C - array[0..K-1], solution Rep - optimization report. Following fields are set: * Rep.TerminationType completion code: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitnonlinearresults(const lsfitstate& state, int& info, ap::real_1d_array& c, lsfitreport& rep);
Examples: lsfit_nonlinear lsfit_nonlinear2
lsfitnonlinearsetcond
function/************************************************************************* Stopping conditions for nonlinear least squares fitting. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with LSFitNonLinearCreate???() EpsF - stopping criterion. Algorithm stops if |F(k+1)-F(k)| <= EpsF*max{|F(k)|, |F(k+1)|, 1} EpsX - stopping criterion. Algorithm stops if |X(k+1)-X(k)| <= EpsX*(1+|X(k)|) MaxIts - stopping criterion. Algorithm stops after MaxIts iterations. MaxIts=0 means no stopping criterion. NOTE Passing EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (according to the scheme used by MINLM unit). -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitnonlinearsetcond(lsfitstate& state, double epsf, double epsx, int maxits);
lsfitnonlinearsetstpmax
function/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with LSFitNonLinearCreate???() StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. NOTE: non-zero StpMax leads to moderate performance degradation because intermediate step of preconditioned L-BFGS optimization is incompatible with limits on step size. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void lsfitnonlinearsetstpmax(lsfitstate& state, double stpmax);
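The call sits between solver creation and the iteration loop. An illustrative ordering, assuming x, y, c, n, m, k, info, rep and state are prepared exactly as in the lsfit_nonlinear example (the StpMax value 0.1 is an arbitrary choice):

    //
    // Illustrative call order; x, y, c, n, m, k prepared as in lsfit_nonlinear.
    // StpMax=0.1 is an arbitrary example value.
    //
    lsfitnonlinearfg(x, y, c, n, m, k, true, state);
    lsfitnonlinearsetcond(state, 0.0, 0.0001, 0);
    lsfitnonlinearsetstpmax(state, 0.1);        // cap step length before iterating
    while(lsfitnonlineariteration(state))
    {
        // ... fill state.f / state.g as in the lsfit_nonlinear example ...
    }
    lsfitnonlinearresults(state, info, c, rep);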
lsfitnonlinearwfg
function/************************************************************************* Weighted nonlinear least squares fitting using gradient only. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(x[0],c)-y[0]))^2 + ... + (w[n-1]*(f(x[n-1],c)-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(x[i],c) and its gradient. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. W - weights, array[0..N-1] C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted CheapFG - boolean flag, which is: * True if both function and gradient calculation complexity are less than O(M^2). An improved algorithm can be used which corresponds to FGJ scheme from MINLM unit. * False otherwise. Standard Jacobian-based Levenberg-Marquardt algorithm will be used (FJ scheme). OUTPUT PARAMETERS: State - structure which stores algorithm state between subsequent calls of LSFitNonlinearIteration. Used for reverse communication. This structure should be passed to LSFitNonlinearIteration subroutine. See also: LSFitNonlinearIteration LSFitNonlinearResults LSFitNonlinearFG (fitting without weights) LSFitNonlinearWFGH (fitting using Hessian) LSFitNonlinearFGH (fitting using Hessian, without weights) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitnonlinearwfg(const ap::real_2d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& w, const ap::real_1d_array& c, int n, int m, int k, bool cheapfg, lsfitstate& state);
Examples: lsfit_nonlinear
lsfitnonlinearwfgh
function/************************************************************************* Weighted nonlinear least squares fitting using gradient/Hessian. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(x[0],c)-y[0]))^2 + ... + (w[n-1]*(f(x[n-1],c)-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses f(x[i],c), its gradient and its Hessian. See LSFitNonlinearWFG() subroutine for information about function parameters. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitnonlinearwfgh(const ap::real_2d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& w, const ap::real_1d_array& c, int n, int m, int k, lsfitstate& state);
Examples: lsfit_nonlinear2
int m; int n; ap::real_1d_array y; ap::real_2d_array fmatrix; ap::real_2d_array cmatrix; lsfitreport rep; int info; ap::real_1d_array c; int i; int j; double x; double a; double b; printf("\n\nFitting tan(x) by third degree polynomial\n\n"); printf("Fit type rms.err max.err p(0) dp(0)\n"); // // Fitting tan(x) at [0, 0.4*pi] by third degree polynomial: // a) without constraints // b) constrained at x=0: p(0)=0 // c) constrained at x=0: p'(0)=1 // d) constrained at x=0: p(0)=0, p'(0)=1 // m = 4; n = 100; a = 0; b = 0.4*ap::pi(); // // Prepare task matrix // y.setlength(n); fmatrix.setlength(n, m); for(i = 0; i <= n-1; i++) { x = a+(b-a)*i/(n-1); y(i) = tan(x); fmatrix(i,0) = 1.0; for(j = 1; j <= m-1; j++) { fmatrix(i,j) = x*fmatrix(i,j-1); } } // // Solve unconstrained task // lsfitlinear(y, fmatrix, n, m, info, c, rep); printf("Unconstrained %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(c(0)), double(c(1))); // // Solve constrained task, p(0)=0 // Prepare constraints matrix: // * first M columns store values of basis functions at X=0 // * last column stores zero (desired value at X=0) // cmatrix.setlength(1, m+1); cmatrix(0,0) = 1; for(i = 1; i <= m-1; i++) { cmatrix(0,i) = 0; } cmatrix(0,m) = 0; lsfitlinearc(y, fmatrix, cmatrix, n, m, 1, info, c, rep); printf("Constrained, p(0)=0 %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(c(0)), double(c(1))); // // Solve constrained task, p'(0)=1 // Prepare constraints matrix: // * first M columns store derivatives of basis functions at X=0 // * last column stores 1.0 (desired derivative at X=0) // cmatrix.setlength(1, m+1); for(i = 0; i <= m-1; i++) { cmatrix(0,i) = 0; } cmatrix(0,1) = 1; cmatrix(0,m) = 1; lsfitlinearc(y, fmatrix, cmatrix, n, m, 1, info, c, rep); printf("Constrained, dp(0)=1 %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(c(0)), double(c(1))); // // Solve constrained task, p(0)=0, p'(0)=1 // Prepare constraints matrix: // * first M columns store values/derivatives of basis functions at X=0 // * last column stores desired values/derivative at X=0 // cmatrix.setlength(2, m+1); cmatrix(0,0) = 1; for(i = 1; i <= m-1; i++) { cmatrix(0,i) = 0; } cmatrix(0,m) = 0; for(i = 0; i <= m-1; i++) { cmatrix(1,i) = 0; } cmatrix(1,1) = 1; cmatrix(1,m) = 1; lsfitlinearc(y, fmatrix, cmatrix, n, m, 2, info, c, rep); printf("Constrained, both %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(c(0)), double(c(1))); printf("\n\n");
int m; int n; int k; ap::real_1d_array y; ap::real_2d_array x; ap::real_1d_array c; lsfitreport rep; lsfitstate state; int info; double epsf; double epsx; int maxits; int i; int j; double a; double b; printf("Fitting 0.5(1+cos(x)) on [-pi,+pi] with exp(-alpha*x^2)\n"); // // Fitting 0.5(1+cos(x)) on [-pi,+pi] with Gaussian exp(-alpha*x^2): // * without Hessian (gradient only) // * using alpha=1 as initial value // * using 1000 uniformly distributed points to fit to // // Notes: // * N - number of points // * M - dimension of space where points reside // * K - number of parameters being fitted // n = 1000; m = 1; k = 1; a = -ap::pi(); b = +ap::pi(); // // Prepare task matrix // y.setlength(n); x.setlength(n, m); c.setlength(k); for(i = 0; i <= n-1; i++) { x(i,0) = a+(b-a)*i/(n-1); y(i) = 0.5*(1+cos(x(i,0))); } c(0) = 1.0; epsf = 0.0; epsx = 0.0001; maxits = 0; // // Solve // lsfitnonlinearfg(x, y, c, n, m, k, true, state); lsfitnonlinearsetcond(state, epsf, epsx, maxits); while(lsfitnonlineariteration(state)) { if( state.needf ) { // // F(x) = Exp(-alpha*x^2) // state.f = exp(-state.c(0)*ap::sqr(state.x(0))); } if( state.needfg ) { // // F(x) = Exp(-alpha*x^2) // dF/dAlpha = (-x^2)*Exp(-alpha*x^2) // state.f = exp(-state.c(0)*ap::sqr(state.x(0))); state.g(0) = -ap::sqr(state.x(0))*state.f; } } lsfitnonlinearresults(state, info, c, rep); printf("alpha: %0.3lf\n", double(c(0))); printf("rms.err: %0.3lf\n", double(rep.rmserror)); printf("max.err: %0.3lf\n", double(rep.maxerror)); printf("Termination type: %0ld\n", long(info)); printf("\n\n");
int m; int n; int k; ap::real_1d_array y; ap::real_2d_array x; ap::real_1d_array c; lsfitreport rep; lsfitstate state; int info; double epsf; double epsx; int maxits; int i; int j; double a; double b; printf("Fitting 1-x^2 on [-1,+1] with cos(alpha*pi*x)+beta\n"); // // Fitting 1-x^2 on [-1,+1] with cos(alpha*pi*x)+beta: // * using Hessian // * using alpha=1 and beta=0 as initial values // * using 1000 uniformly distributed points to fit to // // Notes: // * N - number of points // * M - dimension of space where points reside // * K - number of parameters being fitted // n = 1000; m = 1; k = 2; a = -1; b = +1; // // Prepare task matrix // y.setlength(n); x.setlength(n, m); c.setlength(k); for(i = 0; i <= n-1; i++) { x(i,0) = a+(b-a)*i/(n-1); y(i) = 1-ap::sqr(x(i,0)); } c(0) = 1.0; c(1) = 0.0; epsf = 0.0; epsx = 0.0001; maxits = 0; // // Solve // lsfitnonlinearfgh(x, y, c, n, m, k, state); lsfitnonlinearsetcond(state, epsf, epsx, maxits); while(lsfitnonlineariteration(state)) { // // F(x) = Cos(alpha*pi*x)+beta // state.f = cos(state.c(0)*ap::pi()*state.x(0))+state.c(1); // // F(x) = Cos(alpha*pi*x)+beta // dF/dAlpha = -pi*x*Sin(alpha*pi*x) // dF/dBeta = 1.0 // if( state.needfg||state.needfgh ) { state.g(0) = -ap::pi()*state.x(0)*sin(state.c(0)*ap::pi()*state.x(0)); state.g(1) = 1.0; } // // F(x) = Cos(alpha*pi*x)+beta // d2F/dAlpha2 = -(pi*x)^2*Cos(alpha*pi*x) // d2F/dAlphadBeta = 0 // d2F/dBeta2 = 0 // if( state.needfgh ) { state.h(0,0) = -ap::sqr(ap::pi()*state.x(0))*cos(state.c(0)*ap::pi()*state.x(0)); state.h(0,1) = 0.0; state.h(1,0) = 0.0; state.h(1,1) = 0.0; } } lsfitnonlinearresults(state, info, c, rep); printf("alpha: %0.3lf\n", double(c(0))); printf("beta: %0.3lf\n", double(c(1))); printf("rms.err: %0.3lf\n", double(rep.rmserror)); printf("max.err: %0.3lf\n", double(rep.maxerror)); printf("Termination type: %0ld\n", long(info)); printf("\n\n");
mannwhitneyu
unitmannwhitneyutest
function/************************************************************************* Mann-Whitney U-test This test checks hypotheses about whether X and Y are samples of two continuous distributions of the same shape and same median or whether their medians are different. The following tests are performed: * two-tailed test (null hypothesis - the medians are equal) * left-tailed test (null hypothesis - the median of the first sample is greater than or equal to the median of the second sample) * right-tailed test (null hypothesis - the median of the first sample is less than or equal to the median of the second sample). Requirements: * the samples are independent * X and Y are continuous distributions (or discrete distributions well-approximating continuous distributions) * distributions of X and Y have the same shape. The only possible difference is their position (i.e. the value of the median) * the number of elements in each sample is not less than 5 * the scale of measurement should be ordinal, interval or ratio (i.e. the test cannot be applied to nominal variables). The test is non-parametric and doesn't require distributions to be normal. Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of the sample. N>=5 Y - sample 2. Array whose index goes from 0 to M-1. M - size of the sample. M>=5 Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. To calculate p-values, a special approximation is used. This method lets us calculate p-values with satisfactory accuracy in interval [0.0001, 1]. There is no approximation outside the [0.0001, 1] interval. Therefore, if the significance level lies outside this interval, the test returns 0.0001. Relative precision of approximation of p-value: N M Max.err. Rms.err. 5..10 N..10 1.4e-02 6.0e-04 5..10 N..100 2.2e-02 5.3e-06 10..15 N..15 1.0e-02 3.2e-04 10..15 N..100 1.0e-02 2.2e-05 15..100 N..100 6.1e-03 2.7e-06 For N,M>100 accuracy checks weren't performed, but taking into account characteristics of asymptotic approximation used, precision should not be sharply different from the values for interval [5, 100]. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void mannwhitneyutest(const ap::real_1d_array& x, int n, const ap::real_1d_array& y, int m, double& bothtails, double& lefttail, double& righttail);
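A minimal usage sketch with two small samples (the values below are arbitrary illustrative inputs, not real data):

    ap::real_1d_array x, y;
    double bothtails, lefttail, righttail;
    int i;

    //
    // Two small illustrative samples (arbitrary values)
    //
    x.setlength(8);
    y.setlength(7);
    for(i = 0; i <= 7; i++)
    {
        x(i) = 0.5*i;                       // 0.0, 0.5, ..., 3.5
    }
    for(i = 0; i <= 6; i++)
    {
        y(i) = 1.0+0.5*i;                   // 1.0, 1.5, ..., 4.0
    }

    mannwhitneyutest(x, 8, y, 7, bothtails, lefttail, righttail);
    printf("two-tailed p:   %0.4lf\n", double(bothtails));
    printf("left-tailed p:  %0.4lf\n", double(lefttail));
    printf("right-tailed p: %0.4lf\n", double(righttail));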
matdet
unitcmatrixdet
function/************************************************************************* Calculation of the determinant of a general matrix Input parameters: A - matrix, array[0..N-1, 0..N-1] N - size of matrix A. Result: determinant of matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/ap::complex cmatrixdet(ap::complex_2d_array a, int n);
cmatrixludet
function/************************************************************************* Determinant calculation of the matrix given by its LU decomposition. Input parameters: A - LU decomposition of the matrix (output of CMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition. Output of CMatrixLU subroutine. N - size of matrix A. Result: matrix determinant. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/ap::complex cmatrixludet(const ap::complex_2d_array& a, const ap::integer_1d_array& pivots, int n);
rmatrixdet
function/************************************************************************* Calculation of the determinant of a general matrix Input parameters: A - matrix, array[0..N-1, 0..N-1] N - size of matrix A. Result: determinant of matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/double rmatrixdet(ap::real_2d_array a, int n);
rmatrixludet
function/************************************************************************* Determinant calculation of the matrix given by its LU decomposition. Input parameters: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition. Output of RMatrixLU subroutine. N - size of matrix A. Result: matrix determinant. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/double rmatrixludet(const ap::real_2d_array& a, const ap::integer_1d_array& pivots, int n);
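A sketch of the LU-based determinant workflow. It assumes the RMatrixLU factorization routine referenced above, with the signature rmatrixlu(a, m, n, pivots) from the LU unit (not documented in this entry, so treat the exact signature as an assumption):

    ap::real_2d_array a;
    ap::integer_1d_array pivots;
    double d;
    int n = 2;

    //
    // 2x2 test matrix [[1,2],[3,4]], its determinant is -2
    //
    a.setlength(n, n);
    a(0,0) = 1; a(0,1) = 2;
    a(1,0) = 3; a(1,1) = 4;

    rmatrixlu(a, n, n, pivots);             // in-place LU factorization (assumed signature)
    d = rmatrixludet(a, pivots, n);
    printf("det(A) = %0.2lf\n", double(d));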
spdmatrixcholeskydet
function/************************************************************************* Determinant calculation of the matrix given by the Cholesky decomposition. Input parameters: A - Cholesky decomposition, output of SMatrixCholesky subroutine. N - size of matrix A. As the determinant is equal to the product of squares of diagonal elements, it’s not necessary to specify which triangle - lower or upper - the matrix is stored in. Result: matrix determinant. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/double spdmatrixcholeskydet(const ap::real_2d_array& a, int n);
spdmatrixdet
function/************************************************************************* Determinant calculation of the symmetric positive definite matrix. Input parameters: A - matrix. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - if IsUpper = True, then the symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used by subroutine. Similarly, if IsUpper = False, then A is given by its lower triangle. Result: determinant of matrix A. If matrix A is not positive definite, then subroutine returns -1. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/double spdmatrixdet(ap::real_2d_array a, int n, bool isupper);
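A small sketch: determinant of a 2x2 symmetric positive definite matrix given by its upper triangle:

    ap::real_2d_array a;
    double d;
    int n = 2;

    //
    // SPD matrix [[2,1],[1,2]] stored by its upper triangle; det is 3
    //
    a.setlength(n, n);
    a(0,0) = 2; a(0,1) = 1;
    a(1,0) = 0; a(1,1) = 2;                 // lower triangle is not used when IsUpper=true

    d = spdmatrixdet(a, n, true);
    printf("det(A) = %0.2lf\n", double(d));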
matgen
unitcmatrixrndcond
function/************************************************************************* Generation of random NxN complex matrix with given condition number C and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixrndcond(int n, double c, ap::complex_2d_array& a);
cmatrixrndorthogonal
function/************************************************************************* Generation of a random Haar distributed orthogonal complex matrix INPUT PARAMETERS: N - matrix size, N>=1 OUTPUT PARAMETERS: A - orthogonal NxN matrix, array[0..N-1,0..N-1] -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixrndorthogonal(int n, ap::complex_2d_array& a);
cmatrixrndorthogonalfromtheleft
function/************************************************************************* Multiplication of MxN complex matrix by MxM random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - Q*A, where Q is random MxM orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixrndorthogonalfromtheleft(ap::complex_2d_array& a, int m, int n);
cmatrixrndorthogonalfromtheright
function/************************************************************************* Multiplication of MxN complex matrix by NxN random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void cmatrixrndorthogonalfromtheright(ap::complex_2d_array& a, int m, int n);
hmatrixrndcond
function/************************************************************************* Generation of random NxN Hermitian matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void hmatrixrndcond(int n, double c, ap::complex_2d_array& a);
hmatrixrndmultiply
function/************************************************************************* Hermitian multiplication of NxN matrix by random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..N-1, 0..N-1] N - matrix size OUTPUT PARAMETERS: A - Q^H*A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void hmatrixrndmultiply(ap::complex_2d_array& a, int n);
hpdmatrixrndcond
function/************************************************************************* Generation of random NxN Hermitian positive definite matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random HPD matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void hpdmatrixrndcond(int n, double c, ap::complex_2d_array& a);
rmatrixrndcond
function/************************************************************************* Generation of random NxN matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixrndcond(int n, double c, ap::real_2d_array& a);
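A short sketch which generates one such matrix and prints it:

    ap::real_2d_array a;
    int n = 3, i, j;

    //
    // random 3x3 matrix with cond(A)=100 (2-norm) and norm2(A)=1
    //
    rmatrixrndcond(n, 100.0, a);
    for(i = 0; i <= n-1; i++)
    {
        for(j = 0; j <= n-1; j++)
        {
            printf("%9.5lf ", double(a(i,j)));
        }
        printf("\n");
    }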
rmatrixrndorthogonal
function/************************************************************************* Generation of a random uniformly distributed (Haar) orthogonal matrix INPUT PARAMETERS: N - matrix size, N>=1 OUTPUT PARAMETERS: A - orthogonal NxN matrix, array[0..N-1,0..N-1] -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixrndorthogonal(int n, ap::real_2d_array& a);
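A sketch which generates a random orthogonal Q and checks Q'*Q against the identity:

    ap::real_2d_array q;
    int n = 4, i, j, k;
    double v, err = 0;

    rmatrixrndorthogonal(n, q);

    //
    // check that Q'*Q is close to the identity matrix
    //
    for(i = 0; i <= n-1; i++)
    {
        for(j = 0; j <= n-1; j++)
        {
            v = 0;
            for(k = 0; k <= n-1; k++)
            {
                v = v+q(k,i)*q(k,j);
            }
            v = fabs(v-(i==j ? 1.0 : 0.0));
            if( v>err )
            {
                err = v;
            }
        }
    }
    printf("max |Q'Q-I| entry: %0.2le\n", double(err));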
rmatrixrndorthogonalfromtheleft
function/************************************************************************* Multiplication of MxN matrix by MxM random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - Q*A, where Q is random MxM orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixrndorthogonalfromtheleft(ap::real_2d_array& a, int m, int n);
rmatrixrndorthogonalfromtheright
function/************************************************************************* Multiplication of MxN matrix by NxN random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void rmatrixrndorthogonalfromtheright(ap::real_2d_array& a, int m, int n);
smatrixrndcond
function/************************************************************************* Generation of random NxN symmetric matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void smatrixrndcond(int n, double c, ap::real_2d_array& a);
smatrixrndmultiply
function/************************************************************************* Symmetric multiplication of NxN matrix by random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..N-1, 0..N-1] N - matrix size OUTPUT PARAMETERS: A - Q'*A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void smatrixrndmultiply(ap::real_2d_array& a, int n);
spdmatrixrndcond
function/************************************************************************* Generation of random NxN symmetric positive definite matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random SPD matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void spdmatrixrndcond(int n, double c, ap::real_2d_array& a);
matinv
unitcmatrixinverse
function/************************************************************************* Inversion of a general matrix. Input parameters: A - matrix, array[0..N-1,0..N-1]. N - size of A. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void cmatrixinverse(ap::complex_2d_array& a, int n, int& info, matinvreport& rep);
cmatrixluinverse
function/************************************************************************* Inversion of a matrix given by its LU decomposition. INPUT PARAMETERS: A - LU decomposition of the matrix (output of CMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition (the output of CMatrixLU subroutine). N - size of matrix A. OUTPUT PARAMETERS: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 05.02.2010 Bochkanov Sergey *************************************************************************/void cmatrixluinverse(ap::complex_2d_array& a, const ap::integer_1d_array& pivots, int n, int& info, matinvreport& rep);
cmatrixtrinverse
function/************************************************************************* Triangular matrix inverse (complex) The subroutine inverts the following types of matrices: * upper triangular * upper triangular with unit diagonal * lower triangular * lower triangular with unit diagonal In case of an upper (lower) triangular matrix, the inverse matrix will also be upper (lower) triangular, and after the end of the algorithm, the inverse matrix replaces the source matrix. The elements below (above) the main diagonal are not changed by the algorithm. If the matrix has a unit diagonal, the inverse matrix also has a unit diagonal, and the diagonal elements are not passed to the algorithm. Input parameters: A - matrix, array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Output parameters: Info - same as for RMatrixLUInverse Rep - same as for RMatrixLUInverse A - same as for RMatrixLUInverse. -- ALGLIB -- Copyright 05.02.2010 by Bochkanov Sergey *************************************************************************/void cmatrixtrinverse(ap::complex_2d_array& a, int n, bool isupper, bool isunit, int& info, matinvreport& rep);
hpdmatrixcholeskyinverse
function/************************************************************************* Inversion of a Hermitian positive definite matrix which is given by Cholesky decomposition. Input parameters: A - Cholesky decomposition of the matrix to be inverted: A=U'*U or A = L*L'. Output of HPDMatrixCholesky subroutine. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then matrix A is given as A = U'*U (matrix contains upper triangle). Similarly, if IsUpper = False, then A = L*L'. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void hpdmatrixcholeskyinverse(ap::complex_2d_array& a, int n, bool isupper, int& info, matinvreport& rep);
hpdmatrixinverse
function/************************************************************************* Inversion of a Hermitian positive definite matrix. Given an upper or lower triangle of a Hermitian positive definite matrix, the algorithm generates matrix A^-1 and saves the upper or lower triangle depending on the input. Input parameters: A - matrix to be inverted (upper or lower triangle). Array with elements [0..N-1,0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then the upper triangle of matrix A is given, otherwise the lower triangle is given. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void hpdmatrixinverse(ap::complex_2d_array& a, int n, bool isupper, int& info, matinvreport& rep);
rmatrixinverse
function/************************************************************************* Inversion of a general matrix. Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void rmatrixinverse(ap::real_2d_array& a, int n, int& info, matinvreport& rep);
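A minimal sketch inverting a 2x2 matrix; the report field names rep.r1/rep.rinf are assumed to be the lower-case C++ counterparts of the R1/RInf fields described under RMatrixLUInverse:

    ap::real_2d_array a;
    matinvreport rep;
    int info;

    //
    // invert [[1,2],[3,4]]; the exact inverse is [[-2,1],[1.5,-0.5]]
    //
    a.setlength(2, 2);
    a(0,0) = 1; a(0,1) = 2;
    a(1,0) = 3; a(1,1) = 4;

    rmatrixinverse(a, 2, info, rep);
    if( info>0 )
    {
        printf("%6.2lf %6.2lf\n%6.2lf %6.2lf\n",
            double(a(0,0)), double(a(0,1)), double(a(1,0)), double(a(1,1)));
        printf("1/cond(A), 1-norm: %0.2le\n", double(rep.r1));   // field name assumed
    }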
rmatrixluinverse
function/************************************************************************* Inversion of a matrix given by its LU decomposition. INPUT PARAMETERS: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition (the output of RMatrixLU subroutine). N - size of matrix A. OUTPUT PARAMETERS: Info - return code: * -3 A is singular, or VERY close to singular. it is filled by zeros in such cases. * -1 N<=0 was passed, or incorrect Pivots was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - solver report, see below for more info A - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. SOLVER REPORT Subroutine sets following fields of the Rep structure: * R1 reciprocal of condition number: 1/cond(A), 1-norm. * RInf reciprocal of condition number: 1/cond(A), inf-norm. -- ALGLIB routine -- 05.02.2010 Bochkanov Sergey *************************************************************************/void rmatrixluinverse(ap::real_2d_array& a, const ap::integer_1d_array& pivots, int n, int& info, matinvreport& rep);
rmatrixtrinverse
function/************************************************************************* Triangular matrix inverse (real) The subroutine inverts the following types of matrices: * upper triangular * upper triangular with unit diagonal * lower triangular * lower triangular with unit diagonal In case of an upper (lower) triangular matrix, the inverse matrix will also be upper (lower) triangular, and after the end of the algorithm, the inverse matrix replaces the source matrix. The elements below (above) the main diagonal are not changed by the algorithm. If the matrix has a unit diagonal, the inverse matrix also has a unit diagonal, and the diagonal elements are not passed to the algorithm. Input parameters: A - matrix, array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Output parameters: Info - same as for RMatrixLUInverse Rep - same as for RMatrixLUInverse A - same as for RMatrixLUInverse. -- ALGLIB -- Copyright 05.02.2010 by Bochkanov Sergey *************************************************************************/void rmatrixtrinverse(ap::real_2d_array& a, int n, bool isupper, bool isunit, int& info, matinvreport& rep);
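A sketch inverting an upper triangular matrix in place (IsUpper=True, IsUnit=False):

    ap::real_2d_array a;
    matinvreport rep;
    int info;

    //
    // upper triangular [[2,1],[0,4]]; its inverse is [[0.5,-0.125],[0,0.25]]
    //
    a.setlength(2, 2);
    a(0,0) = 2; a(0,1) = 1;
    a(1,0) = 0; a(1,1) = 4;

    rmatrixtrinverse(a, 2, true, false, info, rep);   // IsUpper=true, IsUnit=false
    if( info>0 )
    {
        printf("%8.3lf %8.3lf\n%8.3lf %8.3lf\n",
            double(a(0,0)), double(a(0,1)), double(a(1,0)), double(a(1,1)));
    }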
spdmatrixcholeskyinverse
function/************************************************************************* Inversion of a symmetric positive definite matrix which is given by Cholesky decomposition. Input parameters: A - Cholesky decomposition of the matrix to be inverted: A=U'*U or A = L*L'. Output of SPDMatrixCholesky subroutine. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then matrix A is given as A = U'*U (matrix contains upper triangle). Similarly, if IsUpper = False, then A = L*L'. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void spdmatrixcholeskyinverse(ap::real_2d_array& a, int n, bool isupper, int& info, matinvreport& rep);
spdmatrixinverse
function/************************************************************************* Inversion of a symmetric positive definite matrix. Given an upper or lower triangle of a symmetric positive definite matrix, the algorithm generates matrix A^-1 and saves the upper or lower triangle depending on the input. Input parameters: A - matrix to be inverted (upper or lower triangle). Array with elements [0..N-1,0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then the upper triangle of matrix A is given, otherwise the lower triangle is given. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void spdmatrixinverse(ap::real_2d_array& a, int n, bool isupper, int& info, matinvreport& rep);
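A sketch inverting a 2x2 SPD matrix given by its upper triangle; only the upper triangle of the result is printed, since the routine writes the same triangle that was supplied:

    ap::real_2d_array a;
    matinvreport rep;
    int info;

    //
    // SPD matrix [[2,1],[1,2]] given by its upper triangle;
    // the inverse is [[2/3,-1/3],[-1/3,2/3]]
    //
    a.setlength(2, 2);
    a(0,0) = 2; a(0,1) = 1;
    a(1,0) = 0; a(1,1) = 2;                 // lower triangle unused when IsUpper=true

    spdmatrixinverse(a, 2, true, info, rep);
    if( info>0 )
    {
        printf("%8.4lf %8.4lf\n         %8.4lf\n",
            double(a(0,0)), double(a(0,1)), double(a(1,1)));
    }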
minasa
unitminasacreate
function/************************************************************************* NONLINEAR BOUND CONSTRAINED OPTIMIZATION USING MODIFIED WILLIAM W. HAGER AND HONGCHAO ZHANG ACTIVE SET ALGORITHM The subroutine minimizes function F(x) of N arguments with bound constraints: BndL[i] <= x[i] <= BndU[i] This method is globally convergent as long as grad(f) is Lipschitz continuous on a level set: L = { x : f(x)<=f(x0) }. INPUT PARAMETERS: N - problem dimension. N>0 X - initial solution approximation, array[0..N-1]. BndL - lower bounds, array[0..N-1]. all elements MUST be specified, i.e. all variables are bounded. However, if some (all) variables are unbounded, you may specify very small number as bound: -1000, -1.0E6 or -1.0E300, or something like that. BndU - upper bounds, array[0..N-1]. all elements MUST be specified, i.e. all variables are bounded. However, if some (all) variables are unbounded, you may specify very large number as bound: +1000, +1.0E6 or +1.0E300, or something like that. EpsG - positive number which defines a precision of search. The subroutine finishes its work if the condition ||G|| < EpsG is satisfied, where ||.|| means Euclidian norm, G - gradient, X - current approximation. EpsF - positive number which defines a precision of search. The subroutine finishes its work if on iteration number k+1 the condition |F(k+1)-F(k)| <= EpsF*max{|F(k)|, |F(k+1)|, 1} is satisfied. EpsX - positive number which defines a precision of search. The subroutine finishes its work if on iteration number k+1 the condition |X(k+1)-X(k)| <= EpsX is fulfilled. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. OUTPUT PARAMETERS: State - structure used for reverse communication. This function initializes State structure with default optimization parameters (stopping conditions, step size, etc.). Use MinASASet??????() functions to tune optimization parameters. After all optimization parameters are tuned, you should use MinASAIteration() function to advance algorithm iterations. NOTES: 1. you may tune stopping conditions with MinASASetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinASASetStpMax() function to bound algorithm's steps. -- ALGLIB -- Copyright 25.03.2010 by Bochkanov Sergey *************************************************************************/void minasacreate(int n, const ap::real_1d_array& x, const ap::real_1d_array& bndl, const ap::real_1d_array& bndu, minasastate& state);
minasaiteration
function/************************************************************************* One ASA iteration Called after initialization with MinASACreate. See HTML documentation for examples. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinASACreate. RESULT: * if function returned False, iterative process has converged. Use MinASAResults() to obtain optimization results. * if subroutine returned True, then, depending on structure fields, we have one of the following situations === FUNC/GRAD REQUEST === State.NeedFG is True => function value/gradient are needed. Caller should calculate function value State.F and gradient State.G[0..N-1] at State.X[0..N-1] and call MinASAIteration() again. === NEW ITERATION IS REPORTED === State.XUpdated is True => one more iteration was made. State.X contains current position, State.F contains function value at X. You can read info from these fields, but never modify them because they contain the only copy of optimization algorithm state. One and only one of these fields (NeedFG, XUpdated) is true on return. New iterations are reported only when reports are explicitly turned on by MinASASetXRep() function, so if you never called it, you can expect that NeedFG is always True. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/bool minasaiteration(minasastate& state);
minasaresults
function/************************************************************************* ASA results Called after MinASA returned False. INPUT PARAMETERS: State - algorithm state (used by MinASAIteration). OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -2 rounding errors prevent further improvement. X contains best point found. * -1 incorrect parameters were specified * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * Rep.IterationsCount contains iterations count * NFEV contains number of function calculations * ActiveConstraints contains number of active constraints -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/void minasaresults(const minasastate& state, ap::real_1d_array& x, minasareport& rep);
minasasetalgorithm
function/************************************************************************* This function sets optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinASACreate() UAType - algorithm type: * -1 automatic selection of the best algorithm * 0 DY (Dai and Yuan) algorithm * 1 Hybrid DY-HS algorithm -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minasasetalgorithm(minasastate& state, int algotype);
minasasetcond
function/************************************************************************* This function sets stopping conditions for the ASA optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinASACreate() EpsG - >=0 The subroutine finishes its work if the condition ||G||<EpsG is satisfied, where ||.|| means Euclidian norm, G - gradient. EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |X(k+1)-X(k)| <= EpsX is fulfilled. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minasasetcond(minasastate& state, double epsg, double epsf, double epsx, int maxits);
minasasetstpmax
function/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinASACreate() StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minasasetstpmax(minasastate& state, double stpmax);
minasasetxrep
function/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinASACreate() NeedXRep- whether iteration reports are needed or not Usually algorithm returns from MinASAIteration() only when it needs function/gradient. However, with this function we can let it stop after each iteration (one iteration may include more than one function evaluation), which is indicated by XUpdated field. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minasasetxrep(minasastate& state, bool needxrep);
int n; int i; minasastate state; minasareport rep; ap::real_1d_array s; ap::real_1d_array bndl; ap::real_1d_array bndu; double x; double y; double z; // // Function being minimized: // F = x+2y+3z subject to 0<=x<=1, 0<=y<=1, 0<=z<=1. // n = 3; s.setlength(n); bndl.setlength(n); bndu.setlength(n); for(i = 0; i <= n-1; i++) { s(i) = 1; bndl(i) = 0; bndu(i) = 1; } minasacreate(n, s, bndl, bndu, state); minasasetcond(state, 0.0, 0.0, 0.00001, 0); minasasetxrep(state, true); printf("\n\nF = x+2y+3z subject to 0<=x<=1, 0<=y<=1, 0<=z<=1\n"); printf("OPTIMIZATION STARTED\n"); while(minasaiteration(state)) { if( state.needfg ) { x = state.x(0); y = state.x(1); z = state.x(2); state.f = x+2*y+3*z; state.g(0) = 1; state.g(1) = 2; state.g(2) = 3; } if( state.xupdated ) { printf(" F(%4.2lf,%4.2lf,%4.2lf)=%0.3lf\n", double(state.x(0)), double(state.x(1)), double(state.x(2)), double(state.f)); } } printf("OPTIMIZATION STOPPED\n"); minasaresults(state, s, rep); // // output results // printf("X = %4.2lf (should be 0.00)\n", double(s(0))); printf("Y = %4.2lf (should be 0.00)\n", double(s(1))); printf("Z = %4.2lf (should be 0.00)\n\n\n", double(s(2)));
int n; int i; minasastate state; minasareport rep; ap::real_1d_array s; ap::real_1d_array bndl; ap::real_1d_array bndu; double x; double y; double z; // // Function being minimized: // F = x+4y+9z subject to 0<=x<=1, 0<=y<=1, 0<=z<=1. // // Take a look at MinASASetStpMax() - it restricts step length by // a small value, so we can see the current point traveling through // a feasible set, sticking to its bounds. // n = 3; s.setlength(n); bndl.setlength(n); bndu.setlength(n); for(i = 0; i <= n-1; i++) { s(i) = 1; bndl(i) = 0; bndu(i) = 1; } minasacreate(n, s, bndl, bndu, state); minasasetcond(state, 0.0, 0.0, 0.00001, 0); minasasetxrep(state, true); minasasetstpmax(state, 0.2); printf("\n\nF = x+4y+9z subject to 0<=x<=1, 0<=y<=1, 0<=z<=1\n"); printf("OPTIMIZATION STARTED\n"); while(minasaiteration(state)) { if( state.needfg ) { x = state.x(0); y = state.x(1); z = state.x(2); state.f = x+4*y+9*z; state.g(0) = 1; state.g(1) = 4; state.g(2) = 9; } if( state.xupdated ) { printf(" F(%4.2lf, %4.2lf, %4.2lf) = %0.3lf\n", double(state.x(0)), double(state.x(1)), double(state.x(2)), double(state.f)); } } printf("OPTIMIZATION STOPPED\n"); minasaresults(state, s, rep); // // output results // printf("X = %4.2lf (should be 0.00)\n", double(s(0))); printf("Y = %4.2lf (should be 0.00)\n", double(s(1))); printf("Z = %4.2lf (should be 0.00)\n\n\n", double(s(2)));
mincg
unitmincgcreate
function/************************************************************************* NONLINEAR CONJUGATE GRADIENT METHOD The subroutine minimizes function F(x) of N arguments by using one of the nonlinear conjugate gradient methods. These CG methods are globally convergent (even on non-convex functions) as long as grad(f) is Lipschitz continuous in some neighborhood of the level set L = { x : f(x)<=f(x0) }. INPUT PARAMETERS: N - problem dimension. N>0 X - initial solution approximation, array[0..N-1]. EpsG - positive number which defines a precision of search. The subroutine finishes its work if the condition ||G|| < EpsG is satisfied, where ||.|| means Euclidian norm, G - gradient, X - current approximation. EpsF - positive number which defines a precision of search. The subroutine finishes its work if on iteration number k+1 the condition |F(k+1)-F(k)| <= EpsF*max{|F(k)|, |F(k+1)|, 1} is satisfied. EpsX - positive number which defines a precision of search. The subroutine finishes its work if on iteration number k+1 the condition |X(k+1)-X(k)| <= EpsX is fulfilled. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. OUTPUT PARAMETERS: State - structure used for reverse communication. See also MinCGIteration, MinCGResults NOTE: Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 25.03.2010 by Bochkanov Sergey *************************************************************************/void mincgcreate(int n, const ap::real_1d_array& x, mincgstate& state);
mincgiteration
function/************************************************************************* One conjugate gradient iteration Called after initialization with MinCG. See HTML documentation for examples. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinCG. RESULT: * if function returned False, iterative process has converged. Use MinCGResults() to obtain optimization results. * if subroutine returned True, then, depending on structure fields, we have one of the following situations === FUNC/GRAD REQUEST === State.NeedFG is True => function value/gradient are needed. Caller should calculate function value State.F and gradient State.G[0..N-1] at State.X[0..N-1] and call MinCGIteration() again. === NEW ITERATION IS REPORTED === State.XUpdated is True => one more iteration was made. State.X contains current position, State.F contains function value at X. You can read info from these fields, but never modify them because they contain the only copy of optimization algorithm state. One and only one of these fields (NeedFG, XUpdated) is true on return. New iterations are reported only when reports are explicitly turned on by MinCGSetXRep() function, so if you never called it, you can expect that NeedFG is always True. -- ALGLIB -- Copyright 20.04.2009 by Bochkanov Sergey *************************************************************************/bool mincgiteration(mincgstate& state);
mincgresults
function/************************************************************************* Conjugate gradient results Called after MinCG returned False. INPUT PARAMETERS: State - algorithm state (used by MinCGIteration). OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -2 rounding errors prevent further improvement. X contains best point found. * -1 incorrect parameters were specified * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * Rep.IterationsCount contains iterations count * NFEV contains number of function calculations -- ALGLIB -- Copyright 20.04.2009 by Bochkanov Sergey *************************************************************************/void mincgresults(const mincgstate& state, ap::real_1d_array& x, mincgreport& rep);
mincgsetcgtype
function/************************************************************************* This function sets CG algorithm. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinCGCreate() CGType - algorithm type: * -1 automatic selection of the best algorithm * 0 DY (Dai and Yuan) algorithm * 1 Hybrid DY-HS algorithm -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void mincgsetcgtype(mincgstate& state, int cgtype);
mincgsetcond
function/************************************************************************* This function sets stopping conditions for CG optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinCGCreate() EpsG - >=0 The subroutine finishes its work if the condition ||G||<EpsG is satisfied, where ||.|| means Euclidian norm, G - gradient. EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |X(k+1)-X(k)| <= EpsX is fulfilled. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void mincgsetcond(mincgstate& state, double epsg, double epsf, double epsx, int maxits);
mincgsetstpmax
function/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinCGCreate() StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void mincgsetstpmax(mincgstate& state, double stpmax);
mincgsetxrep
function/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinCGCreate() NeedXRep- whether iteration reports are needed or not Usually algorithm returns from MinCGIteration() only when it needs function/gradient. However, with this function we can let it stop after each iteration (one iteration may include more than one function evaluation), which is indicated by XUpdated field. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void mincgsetxrep(mincgstate& state, bool needxrep);
int n; mincgstate state; mincgreport rep; ap::real_1d_array s; double x; double y; // // Function minimized: // F = (x-1)^4 + (y-x)^2 // N = 2 - task dimension. // n = 2; s.setlength(2); s(0) = 10; s(1) = 11; mincgcreate(n, s, state); mincgsetcond(state, 0.0, 0.0, 0.00001, 0); mincgsetxrep(state, true); printf("\n\nF = (x-1)^4 + (y-x)^2\n"); printf("OPTIMIZATION STARTED\n"); while(mincgiteration(state)) { if( state.needfg ) { x = state.x(0); y = state.x(1); state.f = ap::sqr(ap::sqr(x-1))+ap::sqr(y-x); state.g(0) = 4*ap::sqr(x-1)*(x-1)+2*(x-y); state.g(1) = 2*(y-x); } if( state.xupdated ) { printf(" F(%8.5lf,%8.5lf)=%0.5lf\n", double(state.x(0)), double(state.x(1)), double(state.f)); } } printf("OPTIMIZATION STOPPED\n"); mincgresults(state, s, rep); // // output results // printf("X = %4.2lf (should be 1.00)\n", double(s(0))); printf("Y = %4.2lf (should be 1.00)\n\n\n", double(s(1)));
int n; mincgstate state; mincgreport rep; ap::real_1d_array s; double x; double y; // // Function minimized: // F = exp(x-1) + exp(1-x) + (y-x)^2 // N = 2 - task dimension. // // Take a look at MinCGSetStpMax() call - it prevents us // from overflow (which may be result of too large step). // Try to comment it and see what will happen. // n = 2; s.setlength(2); s(0) = 10; s(1) = ap::randomreal()-0.5; mincgcreate(n, s, state); mincgsetcond(state, 0.0, 0.0, 0.0001, 0); mincgsetxrep(state, true); mincgsetstpmax(state, 1.0); printf("\n\nF = exp(x-1) + exp(1-x) + (y-x)^2\n"); printf("OPTIMIZATION STARTED\n"); while(mincgiteration(state)) { if( state.needfg ) { x = state.x(0); y = state.x(1); state.f = exp(x-1)+exp(1-x)+ap::sqr(y-x); state.g(0) = exp(x-1)-exp(1-x)+2*(x-y); state.g(1) = 2*(y-x); } if( state.xupdated ) { printf(" F(%8.5lf,%8.5lf)=%0.5lf\n", double(state.x(0)), double(state.x(1)), double(state.f)); } } printf("OPTIMIZATION STOPPED\n"); mincgresults(state, s, rep); // // output results // printf("X = %4.2lf (should be 1.00)\n", double(s(0))); printf("Y = %4.2lf (should be 1.00)\n\n\n", double(s(1)));
minlbfgs
unitminlbfgscreate
function/************************************************************************* LIMITED MEMORY BFGS METHOD FOR LARGE SCALE OPTIMIZATION The subroutine minimizes function F(x) of N arguments by using a quasi- Newton method (LBFGS scheme) which is optimized to use a minimum amount of memory. The subroutine generates the approximation of an inverse Hessian matrix by using information about the last M steps of the algorithm (instead of N). It lessens a required amount of memory from a value of order N^2 to a value of order 2*N*M. INPUT PARAMETERS: N - problem dimension. N>0 M - number of corrections in the BFGS scheme of Hessian approximation update. Recommended value: 3<=M<=7. The smaller value causes worse convergence, the bigger will not cause a considerably better convergence, but will cause a fall in the performance. M<=N. X - initial solution approximation, array[0..N-1]. OUTPUT PARAMETERS: State - structure used for reverse communication. This function initializes State structure with default optimization parameters (stopping conditions, step size, etc.). Use MinLBFGSSet??????() functions to tune optimization parameters. After all optimization parameters are tuned, you should use MinLBFGSIteration() function to advance algorithm iterations. NOTES: 1. you may tune stopping conditions with MinLBFGSSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLBFGSSetStpMax() function to bound algorithm's steps. However, L-BFGS rarely needs such a tuning. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlbfgscreate(int n, int m, const ap::real_1d_array& x, minlbfgsstate& state);
Examples: minlbfgs_1 minlbfgs_2
minlbfgscreatex
function/************************************************************************* Extended subroutine for internal use only. Accepts additional parameters: Flags - additional settings: * Flags = 0 means no additional settings * Flags = 1 "do not allocate memory"; used when solving many subsequent tasks with the same N/M values. First call MUST be without this flag bit set, subsequent calls of MinLBFGS with the same MinLBFGSState structure can set Flags to 1. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlbfgscreatex(int n, int m, const ap::real_1d_array& x, int flags, minlbfgsstate& state);
minlbfgsiteration
function/************************************************************************* L-BFGS iterations Called after initialization with MinLBFGSCreate() function. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinLBFGSCreate() RESULT: * if function returned False, iterative process has converged. Use MinLBFGSResults() to obtain optimization results. * if subroutine returned True, then, depending on structure fields, we have one of the following situations === FUNC/GRAD REQUEST === State.NeedFG is True => function value/gradient are needed. Caller should calculate function value State.F and gradient State.G[0..N-1] at State.X[0..N-1] and call MinLBFGSIteration() again. === NEW ITERATION IS REPORTED === State.XUpdated is True => one more iteration was made. State.X contains current position, State.F contains function value at X. You can read info from these fields, but never modify them because they contain the only copy of optimization algorithm state. One and only one of these fields (NeedFG, XUpdated) is true on return. New iterations are reported only when reports are explicitly turned on by MinLBFGSSetXRep() function, so if you never called it, you can expect that NeedFG is always True. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/bool minlbfgsiteration(minlbfgsstate& state);
Examples: minlbfgs_1 minlbfgs_2
minlbfgsresults
function/************************************************************************* L-BFGS algorithm results Called after MinLBFGSIteration() returned False. INPUT PARAMETERS: State - algorithm state (used by MinLBFGSIteration). OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -2 rounding errors prevent further improvement. X contains best point found. * -1 incorrect parameters were specified * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * Rep.IterationsCount contains iterations count * Rep.NFEV contains number of function calculations -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlbfgsresults(const minlbfgsstate& state, ap::real_1d_array& x, minlbfgsreport& rep);
Examples: minlbfgs_1 minlbfgs_2
minlbfgssetcond
function/************************************************************************* This function sets stopping conditions for L-BFGS optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinLBFGSCreate() EpsG - >=0 The subroutine finishes its work if the condition ||G||<EpsG is satisfied, where ||.|| means Euclidean norm, G - gradient. EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |X(k+1)-X(k)| <= EpsX is fulfilled. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlbfgssetcond(minlbfgsstate& state, double epsg, double epsf, double epsx, int maxits);
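The interaction of EpsG, EpsF, EpsX and MaxIts is easiest to see in code. The following sketch is not one of the manual's example programs; the tolerance of 1.0E-6, the 100-iteration cap and the starting point are arbitrary illustrative choices. It configures an optimizer that stops either when the gradient norm drops below 1.0E-6 or after 100 iterations, whichever comes first.

int n;
int m;
ap::real_1d_array x0;
minlbfgsstate state;

n = 2;
m = 2;
x0.setlength(n);
x0(0) = 0;
x0(1) = 0;
minlbfgscreate(n, m, x0, state);

// stop when ||G||<1.0E-6 or after 100 iterations, whichever happens first;
// EpsF and EpsX are left at 0 so they do not trigger stopping on their own
minlbfgssetcond(state, 1.0E-6, 0.0, 0.0, 100);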
minlbfgssetstpmax
function/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinLBFGSCreate() StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlbfgssetstpmax(minlbfgsstate& state, double stpmax);
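As a sketch of where MinLBFGSSetStpMax() fits among the other setup calls (this is not one of the manual's examples; the starting point and the step limit of 0.5 are arbitrary illustrative values chosen for a target that contains exp()):

minlbfgsstate state;
ap::real_1d_array x0;

x0.setlength(2);
x0(0) = 10;    // starting point deliberately far from the minimum
x0(1) = 0;
minlbfgscreate(2, 2, x0, state);
minlbfgssetcond(state, 0.0, 0.0, 0.0001, 0);

// bound every line-search step by 0.5 so that exp() in the target
// is never evaluated at a wildly distant trial point and cannot overflow
minlbfgssetstpmax(state, 0.5);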
minlbfgssetxrep
function/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinLBFGSCreate() NeedXRep- whether iteration reports are needed or not Usually algorithm returns from MinLBFGSIteration() only when it needs function/gradient values (which is indicated by the NeedFG field). However, with this function we can let it stop after each iteration (one iteration may include more than one function evaluation), which is indicated by the XUpdated field. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlbfgssetxrep(minlbfgsstate& state, bool needxrep);
int n; int m; minlbfgsstate state; minlbfgsreport rep; ap::real_1d_array s; double x; double y; // // Function minimized: // F = exp(x-1) + exp(1-x) + (y-x)^2 // N = 2 - task dimension // M = 1 - build rank-1 model // n = 2; m = 1; s.setlength(2); s(0) = ap::randomreal()-0.5; s(1) = ap::randomreal()-0.5; minlbfgscreate(n, m, s, state); minlbfgssetcond(state, 0.0, 0.0, 0.0001, 0); while(minlbfgsiteration(state)) { if( state.needfg ) { x = state.x(0); y = state.x(1); state.f = exp(x-1)+exp(1-x)+ap::sqr(y-x); state.g(0) = exp(x-1)-exp(1-x)+2*(x-y); state.g(1) = 2*(y-x); } } minlbfgsresults(state, s, rep); // // output results // printf("\n\nF = exp(x-1) + exp(1-x) + (y-x)^2\n"); printf("X = %4.2lf (should be 1.00)\n", double(s(0))); printf("Y = %4.2lf (should be 1.00)\n\n\n", double(s(1)));
int n; int m; minlbfgsstate state; minlbfgsreport rep; ap::real_1d_array s; double x; double y; // // Function minimized: // F = exp(x-1) + exp(1-x) + (y-x)^2 // N = 2 - task dimension // M = 1 - build rank-1 model // n = 2; m = 1; s.setlength(2); s(0) = 10; s(1) = ap::randomreal()-0.5; minlbfgscreate(n, m, s, state); minlbfgssetcond(state, 0.0, 0.0, 0.0001, 0); minlbfgssetxrep(state, true); printf("\n\nF = exp(x-1) + exp(1-x) + (y-x)^2\n"); printf("OPTIMIZATION STARTED\n"); while(minlbfgsiteration(state)) { if( state.needfg ) { x = state.x(0); y = state.x(1); state.f = exp(x-1)+exp(1-x)+ap::sqr(y-x); state.g(0) = exp(x-1)-exp(1-x)+2*(x-y); state.g(1) = 2*(y-x); } if( state.xupdated ) { printf(" F(%8.5lf,%8.5lf)=%0.5lf\n", double(state.x(0)), double(state.x(1)), double(state.f)); } } printf("OPTIMIZATION STOPPED\n"); minlbfgsresults(state, s, rep); // // output results // printf("X = %4.2lf (should be 1.00)\n", double(s(0))); printf("Y = %4.2lf (should be 1.00)\n\n\n", double(s(1)));
minlm unit
minlmcreatefgh
function/************************************************************************* LEVENBERG-MARQUARDT-LIKE METHOD FOR NON-LINEAR OPTIMIZATION Optimization using function gradient and Hessian. Algorithm - Levenberg- Marquardt modification with L-BFGS pre-optimization and internal pre-conditioned L-BFGS optimization after each Levenberg-Marquardt step. Function F has general form (not "sum-of-squares"): F = F(x[0], ..., x[n-1]) EXAMPLE See HTML-documentation. INPUT PARAMETERS: N - dimension, N>1 X - initial solution, array[0..N-1] OUTPUT PARAMETERS: State - structure which stores algorithm state between subsequent calls of MinLMIteration. Used for reverse communication. This structure should be passed to MinLMIteration subroutine. See also MinLMIteration, MinLMResults. NOTES: 1. you may tune stopping conditions with MinLMSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLMSetStpMax() function to bound algorithm's steps. -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/void minlmcreatefgh(const int& n, const ap::real_1d_array& x, minlmstate& state);
Examples: minlm_fgh
minlmcreatefgj
function/************************************************************************* LEVENBERG-MARQUARDT-LIKE METHOD FOR NON-LINEAR OPTIMIZATION Optimization using function gradient and Jacobian. Algorithm - Levenberg- Marquardt modification with L-BFGS pre-optimization and internal pre-conditioned L-BFGS optimization after each Levenberg-Marquardt step. Function F is represented as sum of squares: F = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1]) EXAMPLE See HTML-documentation. INPUT PARAMETERS: N - dimension, N>1 M - number of functions f[i] X - initial solution, array[0..N-1] OUTPUT PARAMETERS: State - structure which stores algorithm state between subsequent calls of MinLMIteration. Used for reverse communication. This structure should be passed to MinLMIteration subroutine. See also MinLMIteration, MinLMResults. NOTES: 1. you may tune stopping conditions with MinLMSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLMSetStpMax() function to bound algorithm's steps. -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/void minlmcreatefgj(const int& n, const int& m, const ap::real_1d_array& x, minlmstate& state);
Examples: minlm_fgj
minlmcreatefj
function/************************************************************************* CLASSIC LEVENBERG-MARQUARDT METHOD FOR NON-LINEAR OPTIMIZATION Optimization using Jacobi matrix. Algorithm - classic Levenberg-Marquardt method. Function F is represented as sum of squares: F = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1]) EXAMPLE See HTML-documentation. INPUT PARAMETERS: N - dimension, N>1 M - number of functions f[i] X - initial solution, array[0..N-1] OUTPUT PARAMETERS: State - structure which stores algorithm state between subsequent calls of MinLMIteration. Used for reverse communication. This structure should be passed to MinLMIteration subroutine. See also MinLMIteration, MinLMResults. NOTES: 1. you may tune stopping conditions with MinLMSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLMSetStpMax() function to bound algorithm's steps. -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/void minlmcreatefj(const int& n, const int& m, const ap::real_1d_array& x, minlmstate& state);
minlmiteration
function/************************************************************************* One Levenberg-Marquardt iteration. Called after initialization of State structure with MinLMXXX subroutine. See HTML docs for examples. Input parameters: State - structure which stores algorithm state between subsequent calls and which is used for reverse communication. Must be initialized with MinLMXXX call first. If subroutine returned False, iterative algorithm has converged. If subroutine returned True, then: * if State.NeedF=True - function value F at State.X[0..N-1] is required * if State.NeedFG=True - function value F and gradient G are required * if State.NeedFiJ=True - function vector f[i] and Jacobi matrix J are required * if State.NeedFGH=True - function value F, gradient G and Hessian H are required * if State.XUpdated=True - algorithm reports about new iteration, State.X contains current point, State.F contains function value. One and only one of these fields can be set at a time. Results are stored: * function value - in MinLMState.F * gradient - in MinLMState.G[0..N-1] * Jacobi matrix - in MinLMState.J[0..M-1,0..N-1] * Hessian - in MinLMState.H[0..N-1,0..N-1] -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/bool minlmiteration(minlmstate& state);
Examples: minlm_fgh minlm_fgj minlm_fj minlm_fj2
minlmresults
function/************************************************************************* Levenberg-Marquardt algorithm results Called after MinLMIteration returned False. Input parameters: State - algorithm state (used by MinLMIteration). Output parameters: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -1 incorrect parameters were specified * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient is no more than EpsG. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * Rep.IterationsCount contains iterations count * Rep.NFunc - number of function calculations * Rep.NJac - number of Jacobi matrix calculations * Rep.NGrad - number of gradient calculations * Rep.NHess - number of Hessian calculations * Rep.NCholesky - number of Cholesky decomposition calculations -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/void minlmresults(const minlmstate& state, ap::real_1d_array& x, minlmreport& rep);
Examples: minlm_fgh minlm_fgj minlm_fj minlm_fj2
minlmsetcond
function/************************************************************************* This function sets stopping conditions for Levenberg-Marquardt optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinLMCreate???() EpsG - >=0 The subroutine finishes its work if the condition ||G||<EpsG is satisfied, where ||.|| means Euclidean norm, G - gradient. EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |X(k+1)-X(k)| <= EpsX is fulfilled. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Only Levenberg-Marquardt iterations are counted (L-BFGS/CG iterations are NOT counted because their cost is very low compared to that of LM). Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlmsetcond(minlmstate& state, double epsg, double epsf, double epsx, int maxits);
minlmsetstpmax
function/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinLMCreate???() StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the point x+stp*d. NOTE: non-zero StpMax leads to moderate performance degradation because intermediate step of preconditioned L-BFGS optimization is incompatible with limits on step size. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlmsetstpmax(minlmstate& state, double stpmax);
minlmsetxrep
function/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state between calls and which is used for reverse communication. Must be initialized with MinLMCreate???() NeedXRep- whether iteration reports are needed or not Usually algorithm returns from MinLMIteration() only when it needs function/gradient/Hessian. However, with this function we can let it stop after each iteration (one iteration may include more than one function evaluation), which is indicated by XUpdated field. Both Levenberg-Marquardt and L-BFGS iterations are reported. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void minlmsetxrep(minlmstate& state, bool needxrep);
minlmstate state; minlmreport rep; ap::real_1d_array s; double x; double y; // // Example of solving simple task using FGH scheme. // // Function minimized: // F = (x-2*y)^2 + (x-2)^2 + (y-1)^2 // exact solution is (2,1). // s.setlength(2); s(0) = ap::randomreal()-0.5; s(1) = ap::randomreal()-0.5; minlmcreatefgh(2, s, state); minlmsetcond(state, 0.0, 0.0, 0.001, 0); while(minlmiteration(state)) { x = state.x(0); y = state.x(1); if( state.needf ) { state.f = ap::sqr(x-2*y)+ap::sqr(x-2)+ap::sqr(y-1); } if( state.needfg ) { state.f = ap::sqr(x-2*y)+ap::sqr(x-2)+ap::sqr(y-1); state.g(0) = 2*(x-2*y)+2*(x-2)+0; state.g(1) = -4*(x-2*y)+0+2*(y-1); } if( state.needfgh ) { state.f = ap::sqr(x-2*y)+ap::sqr(x-2)+ap::sqr(y-1); state.g(0) = 2*(x-2*y)+2*(x-2)+0; state.g(1) = -4*(x-2*y)+0+2*(y-1); state.h(0,0) = 4; state.h(1,0) = -4; state.h(0,1) = -4; state.h(1,1) = 10; } } minlmresults(state, s, rep); // // output results // printf("X = %4.2lf (correct value - 2.00)\n", double(s(0))); printf("Y = %4.2lf (correct value - 1.00)\n", double(s(1))); printf("TerminationType = %0ld (should be 2 - stopping when step is small enough)\n", long(rep.terminationtype)); printf("NFunc = %0ld\n", long(rep.nfunc)); printf("NJac = %0ld\n", long(rep.njac)); printf("NGrad = %0ld\n", long(rep.ngrad)); printf("NHess = %0ld\n", long(rep.nhess));
minlmstate state; minlmreport rep; ap::real_1d_array s; double x; double y; // // Example of solving simple task using FGJ scheme. // // Function minimized: // F = (x-2*y)^2 + (x-2)^2 + (y-1)^2 // exact solution is (2,1). // s.setlength(2); s(0) = ap::randomreal()-0.5; s(1) = ap::randomreal()-0.5; minlmcreatefgj(2, 3, s, state); minlmsetcond(state, 0.0, 0.0, 0.001, 0); while(minlmiteration(state)) { x = state.x(0); y = state.x(1); if( state.needf ) { state.f = ap::sqr(x-2*y)+ap::sqr(x-2)+ap::sqr(y-1); } if( state.needfg ) { state.f = ap::sqr(x-2*y)+ap::sqr(x-2)+ap::sqr(y-1); state.g(0) = 2*(x-2*y)+2*(x-2)+0; state.g(1) = -4*(x-2*y)+0+2*(y-1); } if( state.needfij ) { state.fi(0) = x-2*y; state.fi(1) = x-2; state.fi(2) = y-1; state.j(0,0) = 1; state.j(0,1) = -2; state.j(1,0) = 1; state.j(1,1) = 0; state.j(2,0) = 0; state.j(2,1) = 1; } } minlmresults(state, s, rep); // // output results // printf("X = %4.2lf (correct value - 2.00)\n", double(s(0))); printf("Y = %4.2lf (correct value - 1.00)\n", double(s(1))); printf("TerminationType = %0ld (should be 2 - stopping when step is small enough)\n", long(rep.terminationtype)); printf("NFunc = %0ld\n", long(rep.nfunc)); printf("NJac = %0ld\n", long(rep.njac)); printf("NGrad = %0ld\n", long(rep.ngrad)); printf("NHess = %0ld\n", long(rep.nhess));
minlmstate state; minlmreport rep; ap::real_1d_array s; double x; double y; // // Example of solving simple task using FJ scheme. // // Function minimized: // F = (x-2*y)^2 + (x-2)^2 + (y-1)^2 // exact solution is (2,1). // s.setlength(2); s(0) = ap::randomreal()-0.5; s(1) = ap::randomreal()-0.5; minlmcreatefj(2, 3, s, state); minlmsetcond(state, 0.0, 0.0, 0.001, 0); while(minlmiteration(state)) { x = state.x(0); y = state.x(1); if( state.needf ) { state.f = ap::sqr(x-2*y)+ap::sqr(x-2)+ap::sqr(y-1); } if( state.needfij ) { state.fi(0) = x-2*y; state.fi(1) = x-2; state.fi(2) = y-1; state.j(0,0) = 1; state.j(0,1) = -2; state.j(1,0) = 1; state.j(1,1) = 0; state.j(2,0) = 0; state.j(2,1) = 1; } } minlmresults(state, s, rep); // // output results // printf("X = %4.2lf (correct value - 2.00)\n", double(s(0))); printf("Y = %4.2lf (correct value - 1.00)\n", double(s(1))); printf("TerminationType = %0ld (should be 2 - stopping when step is small enough)\n", long(rep.terminationtype)); printf("NFunc = %0ld\n", long(rep.nfunc)); printf("NJac = %0ld\n", long(rep.njac)); printf("NGrad = %0ld\n", long(rep.ngrad)); printf("NHess = %0ld\n", long(rep.nhess));
minlmstate state; minlmreport rep; int i; ap::real_1d_array s; ap::real_1d_array x; ap::real_1d_array y; double fi; int n; int m; // // Example of solving polynomial approximation task using FJ scheme. // // Data points: // xi are random numbers from [-1,+1], // // Function being fitted: // yi = exp(xi) - sin(xi) - x^3/3 // // Function being minimized: // F(a,b,c) = // (a + b*x0 + c*x0^2 - y0)^2 + // (a + b*x1 + c*x1^2 - y1)^2 + ... // n = 3; s.setlength(n); for(i = 0; i <= n-1; i++) { s(i) = ap::randomreal()-0.5; } m = 100; x.setlength(m); y.setlength(m); for(i = 0; i <= m-1; i++) { x(i) = double(2*i)/double(m-1)-1; y(i) = exp(x(i))-sin(x(i))-x(i)*x(i)*x(i)/3; } // // Now S stores starting point, X and Y store points being fitted. // minlmcreatefj(n, m, s, state); minlmsetcond(state, 0.0, 0.0, 0.001, 0); while(minlmiteration(state)) { if( state.needf ) { state.f = 0; } for(i = 0; i <= m-1; i++) { // // "a" is stored in State.X[0] // "b" - State.X[1] // "c" - State.X[2] // fi = state.x(0)+state.x(1)*x(i)+state.x(2)*ap::sqr(x(i))-y(i); if( state.needf ) { // // F is equal to sum of fi squared. // state.f = state.f+ap::sqr(fi); } if( state.needfij ) { // // Fi // state.fi(i) = fi; // // dFi/da // state.j(i,0) = 1; // // dFi/db // state.j(i,1) = x(i); // // dFi/dc // state.j(i,2) = ap::sqr(x(i)); } } } minlmresults(state, s, rep); // // output results // printf("A = %4.2lf\n", double(s(0))); printf("B = %4.2lf\n", double(s(1))); printf("C = %4.2lf\n", double(s(2))); printf("TerminationType = %0ld (should be 2 - stopping when step is small enough)\n", long(rep.terminationtype));
mlpbase unit
mlpavgce
function/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if network solves regression task. -- ALGLIB -- Copyright 08.01.2009 by Bochkanov Sergey *************************************************************************/double mlpavgce(multilayerperceptron& network, const ap::real_2d_array& xy, int npoints);
mlpavgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 11.03.2008 by Bochkanov Sergey *************************************************************************/double mlpavgerror(multilayerperceptron& network, const ap::real_2d_array& xy, int npoints);
mlpavgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. -- ALGLIB -- Copyright 11.03.2008 by Bochkanov Sergey *************************************************************************/double mlpavgrelerror(multilayerperceptron& network, const ap::real_2d_array& xy, int npoints);
mlpclserror
function/************************************************************************* Classification error -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/int mlpclserror(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize);
mlpcopy
function/************************************************************************* Copying of neural network INPUT PARAMETERS: Network1 - original OUTPUT PARAMETERS: Network2 - copy -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpcopy(const multilayerperceptron& network1, multilayerperceptron& network2);
mlpcreate0
function/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers, with linear output layer. Network weights are filled with small random values. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpcreate0(int nin, int nout, multilayerperceptron& network);
mlpcreate1
function/************************************************************************* Same as MLPCreate0, but with one hidden layer (NHid neurons) with non-linear activation function. Output layer is linear. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpcreate1(int nin, int nhid, int nout, multilayerperceptron& network);
mlpcreate2
function/************************************************************************* Same as MLPCreate0, but with two hidden layers (NHid1 and NHid2 neurons) with non-linear activation function. Output layer is linear. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpcreate2(int nin, int nhid1, int nhid2, int nout, multilayerperceptron& network);
mlpcreateb0
function/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers with non-linear output layer. Network weights are filled with small random values. Activation function of the output layer takes values: (B, +INF), if D>=0 or (-INF, B), if D<0. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/void mlpcreateb0(int nin, int nout, double b, double d, multilayerperceptron& network);
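A minimal sketch, not taken from the manual's example set, showing how B and D shape the output range: with B=1 and D=1 (arbitrary illustrative values) the single output always falls into (B, +INF) = (1, +INF), regardless of the randomly initialized weights.

multilayerperceptron net;
ap::real_1d_array x;
ap::real_1d_array y;

// 2 inputs, 1 output; D>=0, so the output activation maps into (B,+INF) = (1,+INF)
mlpcreateb0(2, 1, 1.0, 1.0, net);

x.setlength(2);
y.setlength(1);
x(0) = ap::randomreal()-0.5;
x(1) = ap::randomreal()-0.5;
mlpprocess(net, x, y);
printf("OUT = %5.2lf (always greater than 1.0)\n", double(y(0)));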
mlpcreateb1
function/************************************************************************* Same as MLPCreateB0 but with non-linear hidden layer. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/void mlpcreateb1(int nin, int nhid, int nout, double b, double d, multilayerperceptron& network);
mlpcreateb2
function/************************************************************************* Same as MLPCreateB0 but with two non-linear hidden layers. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/void mlpcreateb2(int nin, int nhid1, int nhid2, int nout, double b, double d, multilayerperceptron& network);
mlpcreatec0
function/************************************************************************* Creates classifier network with NIn inputs and NOut possible classes. Network contains no hidden layers and linear output layer with SOFTMAX-normalization (so outputs sum up to 1.0 and converge to posterior probabilities). -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpcreatec0(int nin, int nout, multilayerperceptron& network);
mlpcreatec1
function/************************************************************************* Same as MLPCreateC0, but with one non-linear hidden layer. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpcreatec1(int nin, int nhid, int nout, multilayerperceptron& network);
mlpcreatec2
function/************************************************************************* Same as MLPCreateC0, but with two non-linear hidden layers. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpcreatec2(int nin, int nhid1, int nhid2, int nout, multilayerperceptron& network);
mlpcreater0
function/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers with non-linear output layer. Network weights are filled with small random values. Activation function of the output layer takes values [A,B]. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/void mlpcreater0(int nin, int nout, double a, double b, multilayerperceptron& network);
mlpcreater1
function/************************************************************************* Same as MLPCreateR0, but with non-linear hidden layer. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/void mlpcreater1(int nin, int nhid, int nout, double a, double b, multilayerperceptron& network);
mlpcreater2
function/************************************************************************* Same as MLPCreateR0, but with two non-linear hidden layers. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/void mlpcreater2(int nin, int nhid1, int nhid2, int nout, double a, double b, multilayerperceptron& network);
mlperror
function/************************************************************************* Error function for neural network, internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/double mlperror(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize);
mlperrorn
function/************************************************************************* Natural error function for neural network, internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/double mlperrorn(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize);
mlpgrad
function/************************************************************************* Gradient calculation. Internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpgrad(multilayerperceptron& network, const ap::real_1d_array& x, const ap::real_1d_array& desiredy, double& e, ap::real_1d_array& grad);
mlpgradbatch
function/************************************************************************* Batch gradient calculation. Internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpgradbatch(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize, double& e, ap::real_1d_array& grad);
mlpgradn
function/************************************************************************* Gradient calculation (natural error function). Internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpgradn(multilayerperceptron& network, const ap::real_1d_array& x, const ap::real_1d_array& desiredy, double& e, ap::real_1d_array& grad);
mlpgradnbatch
function/************************************************************************* Batch gradient calculation (natural error function). Internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpgradnbatch(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize, double& e, ap::real_1d_array& grad);
mlphessianbatch
function/************************************************************************* Batch Hessian calculation using R-algorithm. Internal subroutine. -- ALGLIB -- Copyright 26.01.2008 by Bochkanov Sergey. Hessian calculation based on R-algorithm described in "Fast Exact Multiplication by the Hessian", B. A. Pearlmutter, Neural Computation, 1994. *************************************************************************/void mlphessianbatch(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize, double& e, ap::real_1d_array& grad, ap::real_2d_array& h);
mlphessiannbatch
function/************************************************************************* Batch Hessian calculation (natural error function) using R-algorithm. Internal subroutine. -- ALGLIB -- Copyright 26.01.2008 by Bochkanov Sergey. Hessian calculation based on R-algorithm described in "Fast Exact Multiplication by the Hessian", B. A. Pearlmutter, Neural Computation, 1994. *************************************************************************/void mlphessiannbatch(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize, double& e, ap::real_1d_array& grad, ap::real_2d_array& h);
mlpinitpreprocessor
function/************************************************************************* Internal subroutine. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/void mlpinitpreprocessor(multilayerperceptron& network, const ap::real_2d_array& xy, int ssize);
mlpinternalprocessvector
function/************************************************************************* Internal subroutine, shouldn't be called by user. *************************************************************************/void mlpinternalprocessvector(const ap::integer_1d_array& structinfo, const ap::real_1d_array& weights, const ap::real_1d_array& columnmeans, const ap::real_1d_array& columnsigmas, ap::real_1d_array& neurons, ap::real_1d_array& dfdnet, const ap::real_1d_array& x, ap::real_1d_array& y);
mlpissoftmax
function/************************************************************************* Tells whether network is SOFTMAX-normalized (i.e. classifier) or not. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/bool mlpissoftmax(const multilayerperceptron& network);
mlpprocess
function/************************************************************************* Processing INPUT PARAMETERS: Network - neural network X - input vector, array[0..NIn-1]. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. Subroutine does not allocate memory for this vector, it is the responsibility of the caller to allocate it. Array must be at least [0..NOut-1]. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpprocess(multilayerperceptron& network, const ap::real_1d_array& x, ap::real_1d_array& y);
Examples: mlp_process mlp_process_cls
mlpproperties
function/************************************************************************* Returns information about initialized network: number of inputs, outputs, weights. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/void mlpproperties(const multilayerperceptron& network, int& nin, int& nout, int& wcount);
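MLPProperties() is convenient when code receives a network it did not create and therefore does not know its geometry. A small sketch (not from the manual's example set; the 3-5-2 geometry is an arbitrary illustrative choice): query the sizes first, then allocate the input/output vectors accordingly.

multilayerperceptron net;
ap::real_1d_array x;
ap::real_1d_array y;
int nin;
int nout;
int wcount;
int i;

mlpcreate1(3, 5, 2, net);

// ask the network itself how many inputs/outputs/weights it has
mlpproperties(net, nin, nout, wcount);
x.setlength(nin);
y.setlength(nout);
for(i = 0; i <= nin-1; i++)
{
    x(i) = ap::randomreal()-0.5;
}
mlpprocess(net, x, y);
printf("NIn=%0ld NOut=%0ld WCount=%0ld\n", long(nin), long(nout), long(wcount));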
mlprandomize
function/************************************************************************* Randomization of neural network weights -- ALGLIB -- Copyright 06.11.2007 by Bochkanov Sergey *************************************************************************/void mlprandomize(multilayerperceptron& network);
Examples: mlp_randomize
mlprandomizefull
function/************************************************************************* Randomization of neural network weights and of the input/output standardizer -- ALGLIB -- Copyright 10.03.2008 by Bochkanov Sergey *************************************************************************/void mlprandomizefull(multilayerperceptron& network);
mlprelclserror
function/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: Network - network XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Works both for classifier networks and general purpose networks used as classifiers. -- ALGLIB -- Copyright 25.12.2008 by Bochkanov Sergey *************************************************************************/double mlprelclserror(multilayerperceptron& network, const ap::real_2d_array& xy, int npoints);
mlprmserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task, RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/double mlprmserror(multilayerperceptron& network, const ap::real_2d_array& xy, int npoints);
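The test-set metrics above (MLPRelClsError, MLPAvgCE, MLPRMSError, MLPAvgError, MLPAvgRelError) all take the same (XY, NPoints) pair. The sketch below is not one of the manual's examples and assumes the usual ALGLIB dataset layout for regression networks - each row stores NIn input values followed by NOut target values. It evaluates an untrained network on a tiny synthetic set, so the reported numbers are meaningless; it only illustrates how the calls are made.

multilayerperceptron net;
ap::real_2d_array xy;
int npoints;
int i;

// regression network: 1 input, 3 hidden neurons, 1 output
mlpcreate1(1, 3, 1, net);

// tiny synthetic test set: y = x^2 on 10 points, stored as rows [x, y]
npoints = 10;
xy.setlength(npoints, 2);
for(i = 0; i <= npoints-1; i++)
{
    xy(i,0) = double(i)/double(npoints-1);
    xy(i,1) = ap::sqr(xy(i,0));
}

printf("RMS error = %0.4lf\n", double(mlprmserror(net, xy, npoints)));
printf("Avg error = %0.4lf\n", double(mlpavgerror(net, xy, npoints)));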
mlpserialize
function/************************************************************************* Serialization of MultiLayerPerceptron structure INPUT PARAMETERS: Network - original OUTPUT PARAMETERS: RA - array of real numbers which stores network, array[0..RLen-1] RLen - RA length -- ALGLIB -- Copyright 29.03.2008 by Bochkanov Sergey *************************************************************************/void mlpserialize(const multilayerperceptron& network, ap::real_1d_array& ra, int& rlen);
Examples: mlp_serialize
mlpunserialize
function/************************************************************************* Unserialization of MultiLayerPerceptron structure INPUT PARAMETERS: RA - real array which stores network OUTPUT PARAMETERS: Network - restored network -- ALGLIB -- Copyright 29.03.2008 by Bochkanov Sergey *************************************************************************/void mlpunserialize(const ap::real_1d_array& ra, multilayerperceptron& network);
Examples: mlp_serialize
multilayerperceptron net; ap::real_1d_array x; ap::real_1d_array y; // // regression task with 2 inputs (independent variables) // and 2 outputs (dependent variables). // // network weights are initialized with small random values. // mlpcreate0(2, 2, net); x.setlength(2); y.setlength(2); x(0) = ap::randomreal()-0.5; x(1) = ap::randomreal()-0.5; mlpprocess(net, x, y); printf("Regression task\n"); printf("IN[0] = %5.2lf\n", double(x(0))); printf("IN[1] = %5.2lf\n", double(x(1))); printf("OUT[0] = %5.2lf\n", double(y(0))); printf("OUT[1] = %5.2lf\n", double(y(1)));
multilayerperceptron net; ap::real_1d_array x; ap::real_1d_array y; // // classification task with 2 inputs and 3 classes. // // network weights are initialized with small random values. // mlpcreatec0(2, 3, net); x.setlength(2); y.setlength(3); x(0) = ap::randomreal()-0.5; x(1) = ap::randomreal()-0.5; mlpprocess(net, x, y); // // output results // printf("Classification task\n"); printf("IN[0] = %5.2lf\n", double(x(0))); printf("IN[1] = %5.2lf\n", double(x(1))); printf("Prob(Class=0|IN) = %5.2lf\n", double(y(0))); printf("Prob(Class=1|IN) = %5.2lf\n", double(y(1))); printf("Prob(Class=2|IN) = %5.2lf\n", double(y(2)));
multilayerperceptron net; mlpcreate0(2, 1, net); mlprandomize(net);
multilayerperceptron network1; multilayerperceptron network2; multilayerperceptron network3; ap::real_1d_array x; ap::real_1d_array y; ap::real_1d_array r; int rlen; double v1; double v2; // // Generate two networks filled with small random values. // Use MLPSerialize/MLPUnserialize to make network copy. // mlpcreate0(1, 1, network1); mlpcreate0(1, 1, network2); mlpserialize(network1, r, rlen); mlpunserialize(r, network2); // // Now Network1 and Network2 should be identical. // Let's demonstrate it. // printf("Test serialization/unserialization\n"); x.setlength(1); y.setlength(1); x(0) = 2*ap::randomreal()-1; mlpprocess(network1, x, y); v1 = y(0); printf("Network1(X) = %0.2lf\n", double(y(0))); mlpprocess(network2, x, y); v2 = y(0); printf("Network2(X) = %0.2lf\n", double(y(0))); if( ap::fp_eq(v1,v2) ) { printf("Results are equal, OK.\n"); } else { printf("Results are not equal... Strange..."); }
mlpe unit
mlpensemble
structure/************************************************************************* Neural networks ensemble *************************************************************************/struct mlpensemble { ap::integer_1d_array structinfo; int ensemblesize; int nin; int nout; int wcount; bool issoftmax; bool postprocessing; ap::real_1d_array weights; ap::real_1d_array columnmeans; ap::real_1d_array columnsigmas; int serializedlen; ap::real_1d_array serializedmlp; ap::real_1d_array tmpweights; ap::real_1d_array tmpmeans; ap::real_1d_array tmpsigmas; ap::real_1d_array neurons; ap::real_1d_array dfdnet; ap::real_1d_array y; };
mlpeavgce
function/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if ensemble solves regression task. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/double mlpeavgce(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints);
mlpeavgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/double mlpeavgerror(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints);
mlpeavgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task it means average relative error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/double mlpeavgrelerror(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints);
mlpebagginglbfgs
function/************************************************************************* Training neural networks ensemble using bootstrap aggregating (bagging). L-BFGS algorithm is used as base training method. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. WStep - stopping criterion, same as in MLPTrainLBFGS MaxIts - stopping criterion, same as in MLPTrainLBFGS OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -8, if both WStep=0 and MaxIts=0 * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters was passed (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report. OOBErrors - out-of-bag generalization error estimate -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpebagginglbfgs(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints, double decay, int restarts, double wstep, int maxits, int& info, mlpreport& rep, mlpcvreport& ooberrors);
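A hedged sketch of a typical call sequence (not from the manual's example set; the ensemble geometry, the dataset layout - NIn inputs followed by NOut targets per row - and the constants Decay=0.001, Restarts=2, WStep=0.001 are all illustrative assumptions, not recommendations from this manual):

mlpensemble ensemble;
mlpreport rep;
mlpcvreport ooberrors;
ap::real_2d_array xy;
int info;
int npoints;
int i;

// ensemble of 10 networks, each with 1 input, 5 hidden neurons, 1 output
mlpecreate1(1, 5, 1, 10, ensemble);

// synthetic regression data: y = sin(x), rows are [x, y]
npoints = 50;
xy.setlength(npoints, 2);
for(i = 0; i <= npoints-1; i++)
{
    xy(i,0) = 2*ap::randomreal()-1;
    xy(i,1) = sin(xy(i,0));
}

// bagging with L-BFGS as the base trainer
mlpebagginglbfgs(ensemble, xy, npoints, 0.001, 2, 0.001, 0, info, rep, ooberrors);
printf("Info = %0ld (2 means the task was solved)\n", long(info));
printf("OOB RMS error = %0.4lf\n", double(ooberrors.rmserror));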
mlpebagginglm
function/************************************************************************* Training neural networks ensemble using bootstrap aggregating (bagging). Modified Levenberg-Marquardt algorithm is used as base training method. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters was passed (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report. OOBErrors - out-of-bag generalization error estimate -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpebagginglm(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints, double decay, int restarts, int& info, mlpreport& rep, mlpcvreport& ooberrors);
mlpecopy
function/************************************************************************* Copying of MLPEnsemble structure INPUT PARAMETERS: Ensemble1 - original OUTPUT PARAMETERS: Ensemble2 - copy -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecopy(const mlpensemble& ensemble1, mlpensemble& ensemble2);
mlpecreate0
function/************************************************************************* Like MLPCreate0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreate0(int nin, int nout, int ensemblesize, mlpensemble& ensemble);
mlpecreate1
function/************************************************************************* Like MLPCreate1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreate1(int nin, int nhid, int nout, int ensemblesize, mlpensemble& ensemble);
mlpecreate2
function/************************************************************************* Like MLPCreate2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreate2(int nin, int nhid1, int nhid2, int nout, int ensemblesize, mlpensemble& ensemble);
mlpecreateb0
function/************************************************************************* Like MLPCreateB0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreateb0(int nin, int nout, double b, double d, int ensemblesize, mlpensemble& ensemble);
mlpecreateb1
function/************************************************************************* Like MLPCreateB1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreateb1(int nin, int nhid, int nout, double b, double d, int ensemblesize, mlpensemble& ensemble);
mlpecreateb2
function/************************************************************************* Like MLPCreateB2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreateb2(int nin, int nhid1, int nhid2, int nout, double b, double d, int ensemblesize, mlpensemble& ensemble);
mlpecreatec0
function/************************************************************************* Like MLPCreateC0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreatec0(int nin, int nout, int ensemblesize, mlpensemble& ensemble);
mlpecreatec1
function/************************************************************************* Like MLPCreateC1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreatec1(int nin, int nhid, int nout, int ensemblesize, mlpensemble& ensemble);
mlpecreatec2
function/************************************************************************* Like MLPCreateC2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreatec2(int nin, int nhid1, int nhid2, int nout, int ensemblesize, mlpensemble& ensemble);
mlpecreatefromnetwork
function/************************************************************************* Creates ensemble from network. Only network geometry is copied. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreatefromnetwork(const multilayerperceptron& network, int ensemblesize, mlpensemble& ensemble);
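A small sketch (not from the manual's example set; the 2-4-1 geometry and ensemble size 5 are illustrative) combining MLPECreateFromNetwork with MLPEProperties and MLPEProcess: only the geometry of the prototype network is copied, and the resulting ensemble is then evaluated on a random input.

multilayerperceptron net;
mlpensemble ensemble;
ap::real_1d_array x;
ap::real_1d_array y;
int nin;
int nout;

// prototype network: 2 inputs, 4 hidden neurons, 1 output
mlpcreate1(2, 4, 1, net);

// ensemble of 5 networks with the same geometry
mlpecreatefromnetwork(net, 5, ensemble);

mlpeproperties(ensemble, nin, nout);
x.setlength(nin);
y.setlength(nout);
x(0) = ap::randomreal()-0.5;
x(1) = ap::randomreal()-0.5;
mlpeprocess(ensemble, x, y);
printf("Ensemble output = %5.2lf\n", double(y(0)));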
mlpecreater0
function/************************************************************************* Like MLPCreateR0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreater0(int nin, int nout, double a, double b, int ensemblesize, mlpensemble& ensemble);
mlpecreater1
function/************************************************************************* Like MLPCreateR1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreater1(int nin, int nhid, int nout, double a, double b, int ensemblesize, mlpensemble& ensemble);
mlpecreater2
function/************************************************************************* Like MLPCreateR2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/void mlpecreater2(int nin, int nhid1, int nhid2, int nout, double a, double b, int ensemblesize, mlpensemble& ensemble);
mlpeissoftmax
function/************************************************************************* Return normalization type (whether ensemble is SOFTMAX-normalized or not). -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/bool mlpeissoftmax(const mlpensemble& ensemble);
mlpeprocess
function/************************************************************************* Processing INPUT PARAMETERS: Ensemble- neural networks ensemble X - input vector, array[0..NIn-1]. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. Subroutine does not allocate memory for this vector, it is the responsibility of the caller to allocate it. Array must be at least [0..NOut-1]. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpeprocess(mlpensemble& ensemble, const ap::real_1d_array& x, ap::real_1d_array& y);
mlpeproperties
function/************************************************************************* Return ensemble properties (number of inputs and outputs). -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpeproperties(const mlpensemble& ensemble, int& nin, int& nout);
mlperandomize
function/************************************************************************* Randomization of MLP ensemble -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlperandomize(mlpensemble& ensemble);
mlperelclserror
function/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Works both for classifier networks and for regression networks which are used as classifiers. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/double mlperelclserror(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints);
mlpermserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/double mlpermserror(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints);
mlpeserialize
function/************************************************************************* Serialization of MLPEnsemble structure INPUT PARAMETERS: Ensemble- original OUTPUT PARAMETERS: RA - array of real numbers which stores ensemble, array[0..RLen-1] RLen - RA length -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpeserialize(mlpensemble& ensemble, ap::real_1d_array& ra, int& rlen);
mlpetraines
function/************************************************************************* Training neural networks ensemble using early stopping. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, Restarts<1). * 6, if task has been solved. Rep - training report. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/void mlpetraines(mlpensemble& ensemble, const ap::real_2d_array& xy, int npoints, double decay, int restarts, int& info, mlpreport& rep);
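A hedged usage sketch for MLPETrainES (not from the manual's example set; same illustrative dataset layout as above - NIn inputs followed by NOut targets per row - and arbitrary Decay=0.001, Restarts=2):

mlpensemble ensemble;
mlpreport rep;
ap::real_2d_array xy;
int info;
int npoints;
int i;

// ensemble of 10 networks, each with 1 input, 5 hidden neurons, 1 output
mlpecreate1(1, 5, 1, 10, ensemble);

// synthetic regression data: y = x^3, rows are [x, y]
npoints = 50;
xy.setlength(npoints, 2);
for(i = 0; i <= npoints-1; i++)
{
    xy(i,0) = 2*ap::randomreal()-1;
    xy(i,1) = xy(i,0)*xy(i,0)*xy(i,0);
}

// early-stopping training of the whole ensemble
mlpetraines(ensemble, xy, npoints, 0.001, 2, info, rep);
printf("Info = %0ld (6 means the task was solved)\n", long(info));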
mlpeunserialize
function/************************************************************************* Unserialization of MLPEnsemble structure INPUT PARAMETERS: RA - real array which stores ensemble OUTPUT PARAMETERS: Ensemble- restored structure -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/void mlpeunserialize(const ap::real_1d_array& ra, mlpensemble& ensemble);
mlptrain unit
mlpcvreport
structure/************************************************************************* Cross-validation estimates of generalization error *************************************************************************/struct mlpcvreport { double relclserror; double avgce; double rmserror; double avgerror; double avgrelerror; };
mlpreport
structure/************************************************************************* Training report: * NGrad - number of gradient calculations * NHess - number of Hessian calculations * NCholesky - number of Cholesky decompositions *************************************************************************/struct mlpreport { int ngrad; int nhess; int ncholesky; };
mlpkfoldcvlbfgs
function/************************************************************************* Cross-validation estimate of generalization error. Base algorithm - L-BFGS. INPUT PARAMETERS: Network - neural network with initialized geometry. Network is not changed during cross-validation - it is used only as a representative of its architecture. XY - training set. SSize - training set size Decay - weight decay, same as in MLPTrainLBFGS Restarts - number of restarts, >0. restarts are counted for each partition separately, so total number of restarts will be Restarts*FoldsCount. WStep - stopping criterion, same as in MLPTrainLBFGS MaxIts - stopping criterion, same as in MLPTrainLBFGS FoldsCount - number of folds in k-fold cross-validation, 2<=FoldsCount<=SSize. recommended value: 10. OUTPUT PARAMETERS: Info - return code, same as in MLPTrainLBFGS Rep - report, same as in MLPTrainLM/MLPTrainLBFGS CVRep - generalization error estimates -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/void mlpkfoldcvlbfgs(const multilayerperceptron& network, const ap::real_2d_array& xy, int npoints, double decay, int restarts, double wstep, int maxits, int foldscount, int& info, mlpreport& rep, mlpcvreport& cvrep);
mlpkfoldcvlm
function/************************************************************************* Cross-validation estimate of generalization error. Base algorithm - Levenberg-Marquardt. INPUT PARAMETERS: Network - neural network with initialized geometry. Network is not changed during cross-validation - it is used only as a representative of its architecture. XY - training set. SSize - training set size Decay - weight decay, same as in MLPTrainLBFGS Restarts - number of restarts, >0. restarts are counted for each partition separately, so total number of restarts will be Restarts*FoldsCount. FoldsCount - number of folds in k-fold cross-validation, 2<=FoldsCount<=SSize. recommended value: 10. OUTPUT PARAMETERS: Info - return code, same as in MLPTrainLBFGS Rep - report, same as in MLPTrainLM/MLPTrainLBFGS CVRep - generalization error estimates -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/void mlpkfoldcvlm(const multilayerperceptron& network, const ap::real_2d_array& xy, int npoints, double decay, int restarts, int foldscount, int& info, mlpreport& rep, mlpcvreport& cvrep);
mlptraines
function/************************************************************************* Neural network training using early stopping (base algorithm - L-BFGS with regularization). INPUT PARAMETERS: Network - neural network with initialized geometry TrnXY - training set TrnSize - training set size ValXY - validation set ValSize - validation set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1, ...). * 2, task has been solved, stopping criterion met - sufficiently small step size. Not expected (we use EARLY stopping) but possible and not an error. * 6, task has been solved, stopping criterion met - increase of validation set error. Rep - training report NOTE: Algorithm stops if validation set error increases for long enough or if step size becomes small enough (there are tasks where validation set error may decrease for eternity). In any case the solution returned corresponds to the minimum of validation set error. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/void mlptraines(multilayerperceptron& network, const ap::real_2d_array& trnxy, int trnsize, const ap::real_2d_array& valxy, int valsize, double decay, int restarts, int& info, mlpreport& rep);
mlptrainlbfgs
function/************************************************************************* Neural network training using L-BFGS algorithm with regularization. Subroutine trains neural network with restarts from random positions. Algorithm is well suited for problems of any dimensionality (memory requirements and step complexity are linear in the number of weights). INPUT PARAMETERS: Network - neural network with initialized geometry XY - training set NPoints - training set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. WStep - stopping criterion. Algorithm stops if step size is less than WStep. Recommended value - 0.01. Zero step size means stopping after MaxIts iterations. MaxIts - stopping criterion. Algorithm stops after MaxIts iterations (NOT gradient calculations). Zero MaxIts means stopping when step is sufficiently small. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -8, if both WStep=0 and MaxIts=0 * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/void mlptrainlbfgs(multilayerperceptron& network, const ap::real_2d_array& xy, int npoints, double decay, int restarts, double wstep, int maxits, int& info, mlpreport& rep);
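The following minimal training sketch combines MLPTrainLBFGS with a network constructor. The constructor mlpcreate1() belongs to the mlpbase unit and is not documented in this section, so it (and the two-argument setlength() of ap::real_2d_array) should be treated as assumptions; the training call itself follows the declaration above.

multilayerperceptron net;
mlpreport rep;
ap::real_2d_array xy;
int info;
int i;

//
// Hypothetical geometry: 1 input, 5 hidden neurons, 1 output.
// mlpcreate1() is assumed to come from the mlpbase unit.
//
mlpcreate1(1, 5, 1, net);

//
// Tiny regression dataset: y = x^2 on [0,1].
// One row per point: first column is the input, last column is the target.
//
xy.setlength(10, 2);
for(i = 0; i <= 9; i++)
{
    xy(i,0) = double(i)/9;
    xy(i,1) = xy(i,0)*xy(i,0);
}

//
// Decay=0.001 and Restarts=2 are the values recommended above.
// WStep=0.01 with MaxIts=0 stops on sufficiently small step size.
//
mlptrainlbfgs(net, xy, 10, 0.001, 2, 0.01, 0, info, rep);
if( info==2 )
    printf("trained, %d gradient evaluations\n", rep.ngrad);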
mlptrainlm
function/************************************************************************* Neural network training using modified Levenberg-Marquardt with exact Hessian calculation and regularization. Subroutine trains neural network with restarts from random positions. Algorithm is well suited for small and medium scale problems (hundreds of weights). INPUT PARAMETERS: Network - neural network with initialized geometry XY - training set NPoints - training set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -9, if internal matrix inverse subroutine failed * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/void mlptrainlm(multilayerperceptron& network, const ap::real_2d_array& xy, int npoints, double decay, int restarts, int& info, mlpreport& rep);
nearestneighbor
unitkdtreebuild
function/************************************************************************* KD-tree creation This subroutine creates KD-tree from set of X-values and optional Y-values INPUT PARAMETERS XY - dataset, array[0..N-1,0..NX+NY-1]. one row corresponds to one point. first NX columns contain X-values, next NY (NY may be zero) columns may contain associated Y-values N - number of points, N>=1 NX - space dimension, NX>=1. NY - number of optional Y-values, NY>=0. NormType- norm type: * 0 denotes infinity-norm * 1 denotes 1-norm * 2 denotes 2-norm (Euclidean norm) OUTPUT PARAMETERS KDT - KD-tree NOTES 1. KD-tree creation has O(N*logN) complexity and O(N*(2*NX+NY)) memory requirements. 2. Although KD-trees may be used with any combination of N and NX, they are more efficient than brute-force search only when N >> 4^NX. So they are most useful in low-dimensional tasks (NX=2, NX=3). NX=1 is another inefficient case, because simple binary search (without additional structures) is much more efficient in such tasks than KD-trees. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/void kdtreebuild(const ap::real_2d_array& xy, int n, int nx, int ny, int normtype, kdtree& kdt);
kdtreebuildtagged
function/************************************************************************* KD-tree creation This subroutine creates KD-tree from set of X-values, integer tags and optional Y-values INPUT PARAMETERS XY - dataset, array[0..N-1,0..NX+NY-1]. one row corresponds to one point. first NX columns contain X-values, next NY (NY may be zero) columns may contain associated Y-values Tags - tags, array[0..N-1], contains integer tags associated with points. N - number of points, N>=1 NX - space dimension, NX>=1. NY - number of optional Y-values, NY>=0. NormType- norm type: * 0 denotes infinity-norm * 1 denotes 1-norm * 2 denotes 2-norm (Euclidean norm) OUTPUT PARAMETERS KDT - KD-tree NOTES 1. KD-tree creation has O(N*logN) complexity and O(N*(2*NX+NY)) memory requirements. 2. Although KD-trees may be used with any combination of N and NX, they are more efficient than brute-force search only when N >> 4^NX. So they are most useful in low-dimensional tasks (NX=2, NX=3). NX=1 is another inefficient case, because simple binary search (without additional structures) is much more efficient in such tasks than KD-trees. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/void kdtreebuildtagged(const ap::real_2d_array& xy, const ap::integer_1d_array& tags, int n, int nx, int ny, int normtype, kdtree& kdt);
kdtreequeryaknn
function/************************************************************************* K-NN query: approximate K nearest neighbors INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. K - number of neighbors to return, K>=1 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned Eps - approximation factor, Eps>=0. eps-approximate nearest neighbor is a neighbor whose distance from X is at most (1+eps) times distance of true nearest neighbor. RESULT number of actual neighbors found (either K or N, if K>N). NOTES significant performance gain may be achieved only when Eps is on the order of magnitude of 1 or larger. This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain these results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/int kdtreequeryaknn(kdtree& kdt, const ap::real_1d_array& x, int k, bool selfmatch, double eps);
kdtreequeryknn
function/************************************************************************* K-NN query: K nearest neighbors INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. K - number of neighbors to return, K>=1 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned RESULT number of actual neighbors found (either K or N, if K>N). This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain these results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/int kdtreequeryknn(kdtree& kdt, const ap::real_1d_array& x, int k, bool selfmatch);
kdtreequeryresultsdistances
function/************************************************************************* Distances from last query INPUT PARAMETERS KDT - KD-tree R - pre-allocated array, at least K elements OUTPUT PARAMETERS R - first K elements are filled with distances (in corresponding norm) K - number of points NOTE points are ordered by distance from the query point (first = closest) SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsTags() tag values -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/void kdtreequeryresultsdistances(const kdtree& kdt, ap::real_1d_array& r, int& k);
kdtreequeryresultstags
function/************************************************************************* Point tags from last query INPUT PARAMETERS KDT - KD-tree Tags - pre-allocated array, at least K elements OUTPUT PARAMETERS Tags - first K elements are filled with tags associated with points, or, when no tags were supplied, with zeros K - number of points NOTE points are ordered by distance from the query point (first = closest) SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/void kdtreequeryresultstags(const kdtree& kdt, ap::integer_1d_array& tags, int& k);
kdtreequeryresultsx
function/************************************************************************* X-values from last query INPUT PARAMETERS KDT - KD-tree X - pre-allocated array, at least K rows, at least NX columns OUTPUT PARAMETERS X - K rows are filled with X-values K - number of points NOTE points are ordered by distance from the query point (first = closest) SEE ALSO * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/void kdtreequeryresultsx(const kdtree& kdt, ap::real_2d_array& x, int& k);
kdtreequeryresultsxy
function/************************************************************************* X- and Y-values from last query INPUT PARAMETERS KDT - KD-tree XY - pre-allocated array, at least K rows, at least NX+NY columns OUTPUT PARAMETERS XY - K rows are filled with points: first NX columns with X-values, next NY columns - with Y-values. K - number of points NOTE points are ordered by distance from the query point (first = closest) SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/void kdtreequeryresultsxy(const kdtree& kdt, ap::real_2d_array& xy, int& k);
kdtreequeryrnn
function/************************************************************************* R-NN query: all points within R-sphere centered at X INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. R - radius of sphere (in corresponding norm), R>0 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned RESULT number of neighbors found, >=0 This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain actual results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/int kdtreequeryrnn(kdtree& kdt, const ap::real_1d_array& x, double r, bool selfmatch);
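The kd-tree functions above are usually chained: build the tree once, run a query, then read the results from the tree's internal buffers. A minimal sketch for a tiny 2D dataset follows; the two-argument setlength() used to size the 2D arrays is assumed here, and the result buffers are pre-allocated to K elements/rows as the result-reading functions require.

kdtree kdt;
ap::real_2d_array xy;
ap::real_2d_array xr;
ap::real_1d_array pt;
ap::real_1d_array dist;
int i;
int k;

//
// Five points in the plane: NX=2, no Y-values (NY=0), Euclidean norm (NormType=2).
//
xy.setlength(5, 2);
xy(0,0) = 0; xy(0,1) = 0;
xy(1,0) = 1; xy(1,1) = 0;
xy(2,0) = 0; xy(2,1) = 1;
xy(3,0) = 1; xy(3,1) = 1;
xy(4,0) = 2; xy(4,1) = 2;
kdtreebuild(xy, 5, 2, 0, 2, kdt);

//
// Query point and pre-allocated result buffers (at least K rows/elements).
//
pt.setlength(2);
pt(0) = 0.9;
pt(1) = 0.9;
xr.setlength(2, 2);
dist.setlength(2);

//
// Two nearest neighbors, self-matches allowed.
//
kdtreequeryknn(kdt, pt, 2, true);
kdtreequeryresultsx(kdt, xr, k);
kdtreequeryresultsdistances(kdt, dist, k);
for(i = 0; i <= k-1; i++)
    printf("(%0.1lf,%0.1lf) at distance %0.3lf\n", double(xr(i,0)), double(xr(i,1)), double(dist(i)));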
normaldistr
uniterf
function/************************************************************************* Error function The integral is erf(x) = (2/sqrt(pi)) * Integral( exp(-t^2) dt, t = 0..x ). For 0 <= |x| < 1, erf(x) = x * P4(x**2)/Q5(x**2); otherwise erf(x) = 1 - erfc(x). ACCURACY: Relative error (IEEE arithmetic, domain [0,1], 30000 trials): peak 3.7e-16, rms 1.0e-16. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/double erf(double x);
erfc
function/************************************************************************* Complementary error function erfc(x) = 1 - erf(x) = (2/sqrt(pi)) * Integral( exp(-t^2) dt, t = x..inf ). For small x, erfc(x) = 1 - erf(x); otherwise rational approximations are computed. ACCURACY: Relative error (IEEE arithmetic, domain [0,26.6417], 30000 trials): peak 5.7e-14, rms 1.5e-14. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/double erfc(double x);
inverf
function/************************************************************************* Inverse of the error function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/double inverf(double e);
invnormaldistribution
function/************************************************************************* Inverse of Normal distribution function Returns the argument, x, for which the area under the Gaussian probability density function (integrated from minus infinity to x) is equal to y. For small arguments 0 < y < exp(-2), the program computes z = sqrt( -2.0 * log(y) ); then the approximation is x = z - log(z)/z - (1/z) P(1/z) / Q(1/z). There are two rational functions P/Q, one for 0 < y < exp(-32) and the other for y up to exp(-2). For larger arguments, w = y - 0.5, and x/sqrt(2pi) = w + w**3 R(w**2)/S(w**2). ACCURACY: Relative error (IEEE arithmetic): domain [0.125,1], 20000 trials: peak 7.2e-16, rms 1.3e-16; domain [3e-308,0.135], 50000 trials: peak 4.6e-16, rms 9.8e-17. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/double invnormaldistribution(double y0);
normaldistribution
function/************************************************************************* Normal distribution function Returns the area under the Gaussian probability density function, integrated from minus infinity to x: ndtr(x) = (1/sqrt(2pi)) * Integral( exp(-t^2/2) dt, t = -inf..x ) = ( 1 + erf(z) ) / 2 = erfc(z) / 2, where z = x/sqrt(2). Computation is via the functions erf and erfc. ACCURACY: Relative error (IEEE arithmetic, domain [-13,0], 30000 trials): peak 3.4e-14, rms 6.7e-15. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/double normaldistribution(double x);
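Since NormalDistribution() and InvNormalDistribution() are mutual inverses and erf()/erfc() are complementary, a quick consistency check is easy to write; the sketch below uses only the four declarations from this unit.

double x = 0.5;

//
// erf and erfc must sum to 1 for any argument.
//
printf("erf(x)+erfc(x)     = %0.15lf\n", erf(x)+erfc(x));

//
// The normal CDF at 0 is 0.5, and the inverse CDF maps the CDF back to x.
//
printf("ndtr(0)            = %0.15lf\n", normaldistribution(0.0));
printf("invndtr(ndtr(0.5)) = %0.15lf\n", invnormaldistribution(normaldistribution(x)));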
odesolver
unitodesolveriteration
function/************************************************************************* One iteration of ODE solver. Called after initialization of the State structure with an OdeSolverXXX subroutine. See HTML docs for examples. INPUT PARAMETERS: State - structure which stores algorithm state between subsequent calls and which is used for reverse communication. Must be initialized with OdeSolverXXX() call first. If subroutine returned False, algorithm has finished its work. If subroutine returned True, then user should: * calculate F(State.X, State.Y) * store it in State.DY Here State.X is real, State.Y and State.DY are arrays[0..N-1] of reals. -- ALGLIB -- Copyright 01.09.2009 by Bochkanov Sergey *************************************************************************/bool odesolveriteration(odesolverstate& state);
Examples: ode_example1 ode_example2
odesolverresults
function/************************************************************************* ODE solver results Called after OdeSolverIteration returned False. INPUT PARAMETERS: State - algorithm state (used by OdeSolverIteration). OUTPUT PARAMETERS: M - number of tabulated values, M>=1 XTbl - array[0..M-1], values of X YTbl - array[0..M-1,0..N-1], values of Y in X[i] Rep - solver report: * Rep.TerminationType completion code: * -2 X is not ordered by ascending/descending or there are non-distinct X[], i.e. X[i]=X[i+1] * -1 incorrect parameters were specified * 1 task has been solved * Rep.NFEV contains number of function calculations -- ALGLIB -- Copyright 01.09.2009 by Bochkanov Sergey *************************************************************************/void odesolverresults(const odesolverstate& state, int& m, ap::real_1d_array& xtbl, ap::real_2d_array& ytbl, odesolverreport& rep);
Examples: ode_example1 ode_example2
odesolverrkck
function/************************************************************************* Cash-Karp adaptive ODE solver. This subroutine solves ODE Y'=f(Y,x) with initial conditions Y(xs)=Ys (here Y may be single variable or vector of N variables). INPUT PARAMETERS: Y - initial conditions, array[0..N-1]. contains values of Y[] at X[0] N - system size X - points at which Y should be tabulated, array[0..M-1] integration starts at X[0], ends at X[M-1], intermediate values at X[i] are returned too. SHOULD BE ORDERED BY ASCENDING OR BY DESCENDING!!!! M - number of intermediate points + first point + last point: * M>2 means that you need both Y(X[M-1]) and M-2 values at intermediate points * M=2 means that you just want to integrate from X[0] to X[1] and are not interested in intermediate values. * M=1 means that you don't want to integrate :) it is a degenerate case, but it will be handled correctly. * M<1 means error Eps - tolerance (absolute/relative error on each step will be less than Eps). When passing: * Eps>0, it means desired ABSOLUTE error * Eps<0, it means desired RELATIVE error. Relative errors are calculated with respect to maximum values of Y seen so far. Be careful to use this criterion when starting from Y[] that are close to zero. H - initial step length, it will be adjusted automatically after the first step. If H=0, step will be selected automatically (usually it will be equal to 0.001 of min(x[i]-x[j])). OUTPUT PARAMETERS State - structure which stores algorithm state between subsequent calls of OdeSolverIteration. Used for reverse communication. This structure should be passed to the OdeSolverIteration subroutine. SEE ALSO OdeSolverIteration, OdeSolverResults. -- ALGLIB -- Copyright 01.09.2009 by Bochkanov Sergey *************************************************************************/void odesolverrkck(const ap::real_1d_array& y, int n, const ap::real_1d_array& x, int m, double eps, double h, odesolverstate& state);
Examples: ode_example1 ode_example2
ap::real_1d_array x;
ap::real_1d_array y;
ap::real_2d_array ytbl;
double eps;
double h;
int m;
int i;
odesolverstate state;
odesolverreport rep;

//
// ODESolver unit is used to solve simple ODE:
// y' = y, y(0) = 1.
//
// Its solution is well known in academic circles :)
//
// No intermediate values are calculated,
// just starting and final points.
//
y.setlength(1);
y(0) = 1;
x.setlength(2);
x(0) = 0;
x(1) = 1;
eps = 1.0E-4;
h = 0.01;
odesolverrkck(y, 1, x, 2, eps, h, state);
while(odesolveriteration(state))
{
    state.dy(0) = state.y(0);
}
odesolverresults(state, m, x, ytbl, rep);
printf("    X    Y(X)\n");
for(i = 0; i <= m-1; i++)
{
    printf("%5.3lf  %5.3lf\n", double(x(i)), double(ytbl(i,0)));
}
ap::real_1d_array x;
ap::real_1d_array y;
ap::real_2d_array ytbl;
double eps;
double h;
int m;
int i;
odesolverstate state;
odesolverreport rep;

//
// ODESolver unit is used to solve simple ODE:
// y'' = -y, y(0) = 0, y'(0)=1.
//
// This ODE may be written as first-order system:
// y' =  z
// z' = -y
//
// Its solution is well known in academic circles :)
//
// Three intermediate values are calculated,
// plus starting and final points.
//
y.setlength(2);
y(0) = 0;
y(1) = 1;
x.setlength(5);
x(0) = ap::pi()*0/4;
x(1) = ap::pi()*1/4;
x(2) = ap::pi()*2/4;
x(3) = ap::pi()*3/4;
x(4) = ap::pi()*4/4;
eps = 1.0E-8;
h = 0.01;
odesolverrkck(y, 2, x, 5, eps, h, state);
while(odesolveriteration(state))
{
    state.dy(0) = state.y(1);
    state.dy(1) = -state.y(0);
}
odesolverresults(state, m, x, ytbl, rep);
printf("     X     Y(X)     Error\n");
for(i = 0; i <= m-1; i++)
{
    printf("%6.3lf  %6.3lf  %8.1le\n", double(x(i)), double(ytbl(i,0)), double(fabs(ytbl(i,0)-sin(x(i)))));
}
ortfac
unitcmatrixlq
function/************************************************************************* LQ decomposition of a rectangular complex matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and L in compact form Tau - array of scalar factors which are used to form matrix Q. Array whose indexes range within [0.. Min(M,N)-1] Matrix A is represented as A = LQ, where Q is an orthogonal matrix of size MxM, L - lower triangular (or lower trapezoid) matrix of size MxN. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/void cmatrixlq(ap::complex_2d_array& a, int m, int n, ap::complex_1d_array& tau);
cmatrixlqunpackl
function/************************************************************************* Unpacking of matrix L from the LQ decomposition of a matrix A Input parameters: A - matrices Q and L in compact form. Output of CMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: L - matrix L, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void cmatrixlqunpackl(const ap::complex_2d_array& a, int m, int n, ap::complex_2d_array& l);
cmatrixlqunpackq
function/************************************************************************* Partial unpacking of matrix Q from LQ decomposition of a complex matrix A. Input parameters: A - matrices Q and L in compact form. Output of CMatrixLQ subroutine. M - number of rows in matrix A. M>=0. N - number of columns in matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of CMatrixLQ subroutine. QRows - required number of rows in matrix Q. N>=QRows>=0. Output parameters: Q - first QRows rows of matrix Q. Array whose index ranges within [0..QRows-1, 0..N-1]. If QRows=0, array isn't changed. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void cmatrixlqunpackq(const ap::complex_2d_array& a, int m, int n, const ap::complex_1d_array& tau, int qrows, ap::complex_2d_array& q);
cmatrixqr
function/************************************************************************* QR decomposition of a rectangular complex matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and R in compact form Tau - array of scalar factors which are used to form matrix Q. Array whose indexes range within [0.. Min(M,N)-1] Matrix A is represented as A = QR, where Q is an orthogonal matrix of size MxM, R - upper triangular (or upper trapezoid) matrix of size MxN. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/void cmatrixqr(ap::complex_2d_array& a, int m, int n, ap::complex_1d_array& tau);
cmatrixqrunpackq
function/************************************************************************* Partial unpacking of matrix Q from QR decomposition of a complex matrix A. Input parameters: A - matrices Q and R in compact form. Output of CMatrixQR subroutine . M - number of rows in matrix A. M>=0. N - number of columns in matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of CMatrixQR subroutine . QColumns - required number of columns in matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array whose index ranges within [0..M-1, 0..QColumns-1]. If QColumns=0, array isn't changed. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void cmatrixqrunpackq(const ap::complex_2d_array& a, int m, int n, const ap::complex_1d_array& tau, int qcolumns, ap::complex_2d_array& q);
cmatrixqrunpackr
function/************************************************************************* Unpacking of matrix R from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of CMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: R - matrix R, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void cmatrixqrunpackr(const ap::complex_2d_array& a, int m, int n, ap::complex_2d_array& r);
hmatrixtd
function/************************************************************************* Reduction of a Hermitian matrix which is given by its higher or lower triangular part to a real tridiagonal matrix using unitary similarity transformation: Q'*A*Q = T. Input parameters: A - matrix to be transformed array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then matrix A is given by its upper triangle, and the lower triangle is not used and not modified by the algorithm, and vice versa if IsUpper = False. Output parameters: A - matrices T and Q in compact form (see below) Tau - array of factors which are forming matrices H(i) array with elements [0..N-2]. D - main diagonal of real symmetric matrix T. array with elements [0..N-1]. E - secondary diagonal of real symmetric matrix T. array with elements [0..N-2]. If IsUpper=True, the matrix Q is represented as a product of elementary reflectors Q = H(n-2) . . . H(1) H(0). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(i+1:n-1) = 0, v(i) = 1, v(0:i-1) is stored on exit in A(0:i-1,i+1), and tau in TAU(i). If IsUpper=False, the matrix Q is represented as a product of elementary reflectors Q = H(0) H(1) . . . H(n-2). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(0:i) = 0, v(i+1) = 1, v(i+2:n-1) is stored on exit in A(i+2:n-1,i), and tau in TAU(i). The contents of A on exit are illustrated by the following example with n = 5: if UPLO = 'U', the rows of A are ( d e v1 v2 v3 ), ( d e v2 v3 ), ( d e v3 ), ( d e ), ( d ); if UPLO = 'L', the rows are ( d ), ( e d ), ( v0 e d ), ( v0 v1 e d ), ( v0 v1 v2 e d ); here d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1992 *************************************************************************/void hmatrixtd(ap::complex_2d_array& a, int n, bool isupper, ap::complex_1d_array& tau, ap::real_1d_array& d, ap::real_1d_array& e);
hmatrixtdunpackq
function/************************************************************************* Unpacking matrix Q which reduces a Hermitian matrix to a real tridiagonal form. Input parameters: A - the result of a HMatrixTD subroutine N - size of matrix A. IsUpper - storage format (a parameter of HMatrixTD subroutine) Tau - the result of a HMatrixTD subroutine Output parameters: Q - transformation matrix. array with elements [0..N-1, 0..N-1]. -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/void hmatrixtdunpackq(const ap::complex_2d_array& a, const int& n, const bool& isupper, const ap::complex_1d_array& tau, ap::complex_2d_array& q);
rmatrixbd
function/************************************************************************* Reduction of a rectangular matrix to bidiagonal form The algorithm reduces the rectangular matrix A to bidiagonal form by orthogonal transformations P and Q: A = Q*B*P. Input parameters: A - source matrix. array[0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q, B, P in compact form (see below). TauQ - scalar factors which are used to form matrix Q. TauP - scalar factors which are used to form matrix P. The main diagonal and one of the secondary diagonals of matrix A are replaced with bidiagonal matrix B. Other elements contain elementary reflections which form MxM matrix Q and NxN matrix P, respectively. If M>=N, B is the upper bidiagonal MxN matrix and is stored in the corresponding elements of matrix A. Matrix Q is represented as a product of elementary reflections Q = H(0)*H(1)*...*H(n-1), where H(i) = 1-tau*v*v'. Here tau is a scalar which is stored in TauQ[i], and vector v has the following structure: v(0:i-1)=0, v(i)=1, v(i+1:m-1) is stored in elements A(i+1:m-1,i). Matrix P is as follows: P = G(0)*G(1)*...*G(n-2), where G(i) = 1 - tau*u*u'. Tau is stored in TauP[i], u(0:i)=0, u(i+1)=1, u(i+2:n-1) is stored in elements A(i,i+2:n-1). If M<N, B is the lower bidiagonal MxN matrix and is stored in the corresponding elements of matrix A. Q = H(0)*H(1)*...*H(m-2), where H(i) = 1 - tau*v*v', tau is stored in TauQ, v(0:i)=0, v(i+1)=1, v(i+2:m-1) is stored in elements A(i+2:m-1,i). P = G(0)*G(1)*...*G(m-1), G(i) = 1-tau*u*u', tau is stored in TauP, u(0:i-1)=0, u(i)=1, u(i+1:n-1) is stored in A(i,i+1:n-1). EXAMPLE: for m=6, n=5 (m > n) the rows of A on exit are ( d e u1 u1 u1 ), ( v1 d e u2 u2 ), ( v1 v2 d e u3 ), ( v1 v2 v3 d e ), ( v1 v2 v3 v4 d ), ( v1 v2 v3 v4 v5 ); for m=5, n=6 (m < n) the rows are ( d u1 u1 u1 u1 u1 ), ( e d u2 u2 u2 u2 ), ( v1 e d u3 u3 u3 ), ( v1 v2 e d u4 u4 ), ( v1 v2 v3 e d u5 ). Here vi and ui are vectors which form H(i) and G(i), and d and e are the diagonal and off-diagonal elements of matrix B. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994. Sergey Bochkanov, ALGLIB project, translation from FORTRAN to pseudocode, 2007-2010. *************************************************************************/void rmatrixbd(ap::real_2d_array& a, int m, int n, ap::real_1d_array& tauq, ap::real_1d_array& taup);
rmatrixbdmultiplybyp
function/************************************************************************* Multiplication by matrix P which reduces matrix A to bidiagonal form. The algorithm allows pre- or post-multiply by P or P'. Input parameters: QP - matrices Q and P in compact form. Output of RMatrixBD subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUP - scalar factors which are used to form P. Output of RMatrixBD subroutine. Z - multiplied matrix. Array whose indexes range within [0..ZRows-1,0..ZColumns-1]. ZRows - number of rows in matrix Z. If FromTheRight=False, ZRows=N, otherwise ZRows can be arbitrary. ZColumns - number of columns in matrix Z. If FromTheRight=True, ZColumns=N, otherwise ZColumns can be arbitrary. FromTheRight - pre- or post-multiply. DoTranspose - multiply by P or P'. Output parameters: Z - product of Z and P. Array whose indexes range within [0..ZRows-1,0..ZColumns-1]. If ZRows=0 or ZColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/void rmatrixbdmultiplybyp(const ap::real_2d_array& qp, int m, int n, const ap::real_1d_array& taup, ap::real_2d_array& z, int zrows, int zcolumns, bool fromtheright, bool dotranspose);
rmatrixbdmultiplybyq
function/************************************************************************* Multiplication by matrix Q which reduces matrix A to bidiagonal form. The algorithm allows pre- or post-multiply by Q or Q'. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUQ - scalar factors which are used to form Q. Output of ToBidiagonal subroutine. Z - multiplied matrix. array[0..ZRows-1,0..ZColumns-1] ZRows - number of rows in matrix Z. If FromTheRight=False, ZRows=M, otherwise ZRows can be arbitrary. ZColumns - number of columns in matrix Z. If FromTheRight=True, ZColumns=M, otherwise ZColumns can be arbitrary. FromTheRight - pre- or post-multiply. DoTranspose - multiply by Q or Q'. Output parameters: Z - product of Z and Q. Array[0..ZRows-1,0..ZColumns-1] If ZRows=0 or ZColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/void rmatrixbdmultiplybyq(const ap::real_2d_array& qp, int m, int n, const ap::real_1d_array& tauq, ap::real_2d_array& z, int zrows, int zcolumns, bool fromtheright, bool dotranspose);
rmatrixbdunpackdiagonals
function/************************************************************************* Unpacking of the main and secondary diagonals of bidiagonal decomposition of matrix A. Input parameters: B - output of RMatrixBD subroutine. M - number of rows in matrix B. N - number of columns in matrix B. Output parameters: IsUpper - True, if the matrix is upper bidiagonal. otherwise IsUpper is False. D - the main diagonal. Array whose index ranges within [0..Min(M,N)-1]. E - the secondary diagonal (upper or lower, depending on the value of IsUpper). Array index ranges within [0..Min(M,N)-1], the last element is not used. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/void rmatrixbdunpackdiagonals(const ap::real_2d_array& b, int m, int n, bool& isupper, ap::real_1d_array& d, ap::real_1d_array& e);
rmatrixbdunpackpt
function/************************************************************************* Unpacking matrix P which reduces matrix A to bidiagonal form. The subroutine returns transposed matrix P. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUP - scalar factors which are used to form P. Output of ToBidiagonal subroutine. PTRows - required number of rows of matrix P^T. N >= PTRows >= 0. Output parameters: PT - first PTRows rows of matrix P^T Array[0..PTRows-1, 0..N-1] If PTRows=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/void rmatrixbdunpackpt(const ap::real_2d_array& qp, int m, int n, const ap::real_1d_array& taup, int ptrows, ap::real_2d_array& pt);
rmatrixbdunpackq
function/************************************************************************* Unpacking matrix Q which reduces a matrix to bidiagonal form. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUQ - scalar factors which are used to form Q. Output of ToBidiagonal subroutine. QColumns - required number of columns in matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array[0..M-1, 0..QColumns-1] If QColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/void rmatrixbdunpackq(const ap::real_2d_array& qp, int m, int n, const ap::real_1d_array& tauq, int qcolumns, ap::real_2d_array& q);
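The bidiagonal reduction is mostly a building block for other algorithms (e.g. SVD), but it can be called directly. The sketch below reduces a small rectangular matrix and reads back the two diagonals; only declarations from this unit are used, and the two-argument setlength() of ap::real_2d_array is assumed.

ap::real_2d_array a;
ap::real_1d_array tauq;
ap::real_1d_array taup;
ap::real_1d_array d;
ap::real_1d_array e;
bool isupper;
int i;
int j;

//
// 4x3 matrix, reduced in place to the compact Q*B*P form.
//
a.setlength(4, 3);
for(i = 0; i <= 3; i++)
    for(j = 0; j <= 2; j++)
        a(i,j) = i+2*j+1;

rmatrixbd(a, 4, 3, tauq, taup);
rmatrixbdunpackdiagonals(a, 4, 3, isupper, d, e);

//
// With M>=N the result is upper bidiagonal.
//
printf("upper bidiagonal: %s\n", isupper ? "true" : "false");
for(i = 0; i <= 2; i++)
    printf("d[%d] = %8.4lf\n", i, double(d(i)));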
rmatrixhessenberg
function/************************************************************************* Reduction of a square matrix to upper Hessenberg form: Q'*A*Q = H, where Q is an orthogonal matrix, H - Hessenberg matrix. Input parameters: A - matrix A with elements [0..N-1, 0..N-1] N - size of matrix A. Output parameters: A - matrices Q and H in compact form (see below). Tau - array of scalar factors which are used to form matrix Q. Array whose index ranges within [0..N-2] Matrix H is located on the main diagonal, on the lower secondary diagonal and above the main diagonal of matrix A. The elements which are used to form matrix Q are situated in array Tau and below the lower secondary diagonal of matrix A as follows: Matrix Q is represented as a product of elementary reflections Q = H(0)*H(1)*...*H(n-2), where each H(i) is given by H(i) = 1 - tau * v * (v^T) where tau is a scalar stored in Tau[I]; v is a real vector, so that v(0:i) = 0, v(i+1) = 1, v(i+2:n-1) stored in A(i+2:n-1,i). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1992 *************************************************************************/void rmatrixhessenberg(ap::real_2d_array& a, int n, ap::real_1d_array& tau);
rmatrixhessenbergunpackh
function/************************************************************************* Unpacking matrix H (the result of matrix A reduction to upper Hessenberg form) Input parameters: A - output of RMatrixHessenberg subroutine. N - size of matrix A. Output parameters: H - matrix H. Array whose indexes range within [0..N-1, 0..N-1]. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/void rmatrixhessenbergunpackh(const ap::real_2d_array& a, int n, ap::real_2d_array& h);
rmatrixhessenbergunpackq
function/************************************************************************* Unpacking matrix Q which reduces matrix A to upper Hessenberg form Input parameters: A - output of RMatrixHessenberg subroutine. N - size of matrix A. Tau - scalar factors which are used to form Q. Output of RMatrixHessenberg subroutine. Output parameters: Q - matrix Q. Array whose indexes range within [0..N-1, 0..N-1]. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/void rmatrixhessenbergunpackq(const ap::real_2d_array& a, int n, const ap::real_1d_array& tau, ap::real_2d_array& q);
rmatrixlq
function/************************************************************************* LQ decomposition of a rectangular matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices L and Q in compact form (see below) Tau - array of scalar factors which are used to form matrix Q. Array whose index ranges within [0..Min(M,N)-1]. Matrix A is represented as A = LQ, where Q is an orthogonal matrix of size MxM, L - lower triangular (or lower trapezoid) matrix of size M x N. The elements of matrix L are located on and below the main diagonal of matrix A. The elements which are located in Tau array and above the main diagonal of matrix A are used to form matrix Q as follows: Matrix Q is represented as a product of elementary reflections Q = H(k-1)*H(k-2)*...*H(1)*H(0), where k = min(m,n), and each H(i) is of the form H(i) = 1 - tau * v * (v^T) where tau is a scalar stored in Tau[I]; v - real vector, so that v(0:i-1)=0, v(i) = 1, v(i+1:n-1) stored in A(i,i+1:n-1). -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void rmatrixlq(ap::real_2d_array& a, int m, int n, ap::real_1d_array& tau);
rmatrixlqunpackl
function/************************************************************************* Unpacking of matrix L from the LQ decomposition of a matrix A Input parameters: A - matrices Q and L in compact form. Output of RMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: L - matrix L, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void rmatrixlqunpackl(const ap::real_2d_array& a, int m, int n, ap::real_2d_array& l);
rmatrixlqunpackq
function/************************************************************************* Partial unpacking of matrix Q from the LQ decomposition of a matrix A Input parameters: A - matrices L and Q in compact form. Output of RMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of the RMatrixLQ subroutine. QRows - required number of rows in matrix Q. N>=QRows>=0. Output parameters: Q - first QRows rows of matrix Q. Array whose indexes range within [0..QRows-1, 0..N-1]. If QRows=0, the array remains unchanged. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void rmatrixlqunpackq(const ap::real_2d_array& a, int m, int n, const ap::real_1d_array& tau, int qrows, ap::real_2d_array& q);
rmatrixqr
function/************************************************************************* QR decomposition of a rectangular matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and R in compact form (see below). Tau - array of scalar factors which are used to form matrix Q. Array whose index ranges within [0..Min(M,N)-1]. Matrix A is represented as A = QR, where Q is an orthogonal matrix of size MxM, R - upper triangular (or upper trapezoid) matrix of size M x N. The elements of matrix R are located on and above the main diagonal of matrix A. The elements which are located in Tau array and below the main diagonal of matrix A are used to form matrix Q as follows: Matrix Q is represented as a product of elementary reflections Q = H(0)*H(1)*...*H(k-1), where k = min(m,n), and each H(i) is in the form H(i) = 1 - tau * v * (v^T) where tau is a scalar stored in Tau[I]; v - real vector, so that v(0:i-1) = 0, v(i) = 1, v(i+1:m-1) stored in A(i+1:m-1,i). -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void rmatrixqr(ap::real_2d_array& a, int m, int n, ap::real_1d_array& tau);
rmatrixqrunpackq
function/************************************************************************* Partial unpacking of matrix Q from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of RMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of the RMatrixQR subroutine. QColumns - required number of columns of matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array whose indexes range within [0..M-1, 0..QColumns-1]. If QColumns=0, the array remains unchanged. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void rmatrixqrunpackq(const ap::real_2d_array& a, int m, int n, const ap::real_1d_array& tau, int qcolumns, ap::real_2d_array& q);
rmatrixqrunpackr
function/************************************************************************* Unpacking of matrix R from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of RMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: R - matrix R, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/void rmatrixqrunpackr(const ap::real_2d_array& a, int m, int n, ap::real_2d_array& r);
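The three QR routines above are designed to be chained: decompose in place, then unpack whichever factor is needed. A minimal sketch under the declarations in this unit (the two-argument setlength() of ap::real_2d_array is assumed):

ap::real_2d_array a;
ap::real_2d_array q;
ap::real_2d_array r;
ap::real_1d_array tau;
int i;
int j;
int t;

//
// 3x3 test matrix with Hilbert-like entries; it is overwritten
// by the compact QR form.
//
a.setlength(3, 3);
for(i = 0; i <= 2; i++)
    for(j = 0; j <= 2; j++)
        a(i,j) = 1.0/(i+j+1);

rmatrixqr(a, 3, 3, tau);
rmatrixqrunpackq(a, 3, 3, tau, 3, q);
rmatrixqrunpackr(a, 3, 3, r);

//
// Q*R should reproduce the original matrix up to rounding.
//
for(i = 0; i <= 2; i++)
{
    for(j = 0; j <= 2; j++)
    {
        double v = 0;
        for(t = 0; t <= 2; t++)
            v = v+q(i,t)*r(t,j);
        printf("%8.4lf ", v);
    }
    printf("\n");
}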
smatrixtd
function/************************************************************************* Reduction of a symmetric matrix which is given by its higher or lower triangular part to a tridiagonal matrix using orthogonal similarity transformation: Q'*A*Q=T. Input parameters: A - matrix to be transformed array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then matrix A is given by its upper triangle, and the lower triangle is not used and not modified by the algorithm, and vice versa if IsUpper = False. Output parameters: A - matrices T and Q in compact form (see below) Tau - array of factors which are forming matrices H(i) array with elements [0..N-2]. D - main diagonal of symmetric matrix T. array with elements [0..N-1]. E - secondary diagonal of symmetric matrix T. array with elements [0..N-2]. If IsUpper=True, the matrix Q is represented as a product of elementary reflectors Q = H(n-2) . . . H(1) H(0). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n-1) = 0, v(i) = 1, v(0:i-1) is stored on exit in A(0:i-1,i+1), and tau in TAU(i). If IsUpper=False, the matrix Q is represented as a product of elementary reflectors Q = H(0) H(1) . . . H(n-2). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(0:i) = 0, v(i+1) = 1, v(i+2:n-1) is stored on exit in A(i+2:n-1,i), and tau in TAU(i). The contents of A on exit are illustrated by the following example with n = 5: if UPLO = 'U', the rows of A are ( d e v1 v2 v3 ), ( d e v2 v3 ), ( d e v3 ), ( d e ), ( d ); if UPLO = 'L', the rows are ( d ), ( e d ), ( v0 e d ), ( v0 v1 e d ), ( v0 v1 v2 e d ); here d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1992 *************************************************************************/void smatrixtd(ap::real_2d_array& a, int n, bool isupper, ap::real_1d_array& tau, ap::real_1d_array& d, ap::real_1d_array& e);
smatrixtdunpackq
function/************************************************************************* Unpacking matrix Q which reduces symmetric matrix to a tridiagonal form. Input parameters: A - the result of a SMatrixTD subroutine N - size of matrix A. IsUpper - storage format (a parameter of SMatrixTD subroutine) Tau - the result of a SMatrixTD subroutine Output parameters: Q - transformation matrix. array with elements [0..N-1, 0..N-1]. -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/void smatrixtdunpackq(const ap::real_2d_array& a, const int& n, const bool& isupper, const ap::real_1d_array& tau, ap::real_2d_array& q);
pca
unitpcabuildbasis
function/************************************************************************* Principal components analysis Subroutine builds orthogonal basis where first axis corresponds to direction with maximum variance, second axis maximizes variance in subspace orthogonal to first axis and so on. It should be noted that, unlike LDA, PCA does not use class labels. INPUT PARAMETERS: X - dataset, array[0..NPoints-1,0..NVars-1]. matrix contains ONLY INDEPENDENT VARIABLES. NPoints - dataset size, NPoints>=0 NVars - number of independent variables, NVars>=1 OUTPUT PARAMETERS: Info - return code: * -4, if SVD subroutine hasn't converged * -1, if wrong parameters have been passed (NPoints<0, NVars<1) * 1, if task is solved S2 - array[0..NVars-1]. variance values corresponding to basis vectors. V - array[0..NVars-1,0..NVars-1] matrix, whose columns store basis vectors. -- ALGLIB -- Copyright 25.08.2008 by Bochkanov Sergey *************************************************************************/void pcabuildbasis(const ap::real_2d_array& x, int npoints, int nvars, int& info, ap::real_1d_array& s2, ap::real_2d_array& v);
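A short usage sketch for PCABuildBasis on a strongly anisotropic 2D cloud: the points lie almost exactly along the line y=x, so the first variance in S2 should dominate the second. Only the declaration above is used; the two-argument setlength() of ap::real_2d_array is assumed.

ap::real_2d_array x;
ap::real_2d_array v;
ap::real_1d_array s2;
int info;
int i;

//
// 100 points scattered along y=x with a small perpendicular spread.
//
x.setlength(100, 2);
for(i = 0; i <= 99; i++)
{
    double t = double(i)/99;
    x(i,0) = t + 0.01*((i%3)-1);
    x(i,1) = t - 0.01*((i%3)-1);
}

pcabuildbasis(x, 100, 2, info, s2, v);
if( info==1 )
    printf("variances: %0.4lf %0.4lf\n", double(s2(0)), double(s2(1)));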
poissondistr
unitinvpoissondistribution
function/************************************************************************* Inverse Poisson distribution Finds the Poisson variable x such that the integral from 0 to x of the Poisson density is equal to the given probability y. This is accomplished using the inverse gamma integral function and the relation m = igami( k+1, y ). ACCURACY: See inverse incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double invpoissondistribution(int k, double y);
poissoncdistribution
function/************************************************************************* Complemented Poisson distribution Returns the sum of the terms k+1 to infinity of the Poisson distribution: Sum( exp(-m) * m^j / j!, j = k+1..inf ). The terms are not summed directly; instead the incomplete gamma integral is employed, according to the formula y = pdtrc( k, m ) = igam( k+1, m ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double poissoncdistribution(int k, double m);
poissondistribution
function/************************************************************************* Poisson distribution Returns the sum of the first k+1 terms of the Poisson distribution: Sum( exp(-m) * m^j / j!, j = 0..k ). The terms are not summed directly; instead the incomplete gamma integral is employed, according to the relation y = pdtr( k, m ) = igamc( k+1, m ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double poissondistribution(int k, double m);
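Because PoissonDistribution() and PoissonCDistribution() split the same series at k, their sum is 1 for any valid arguments, which gives a cheap consistency check using only the declarations above:

int k = 3;
double m = 2.5;

//
// CDF plus complemented CDF must equal 1 up to rounding.
//
printf("P(X<=%d) = %0.10lf\n", k, poissondistribution(k, m));
printf("P(X>%d)  = %0.10lf\n", k, poissoncdistribution(k, m));
printf("sum      = %0.10lf\n", poissondistribution(k, m)+poissoncdistribution(k, m));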
polint
unitpolynomialfitreport
structure/************************************************************************* Polynomial fitting report: TaskRCond reciprocal of task's condition number RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error *************************************************************************/struct polynomialfitreport { double taskrcond; double rmserror; double avgerror; double avgrelerror; double maxerror; };
polynomialbuild
function/************************************************************************* Lagrange interpolant: generation of the model on the general grid. This function has O(N^2) complexity. INPUT PARAMETERS: X - abscissas, array[0..N-1] Y - function values, array[0..N-1] N - number of points, N>=1 OUTPUT PARAMETERS P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void polynomialbuild(const ap::real_1d_array& x, const ap::real_1d_array& y, int n, barycentricinterpolant& p);
Examples: polint_gen
polynomialbuildcheb1
function/************************************************************************* Lagrange interpolant on Chebyshev grid (first kind). This function has O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] Y - function values at the nodes, array[0..N-1], Y[I] = Y(0.5*(B+A) + 0.5*(B-A)*Cos(PI*(2*i+1)/(2*n))) N - number of points, N>=1 for N=1 a constant model is constructed. OUTPUT PARAMETERS P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 03.12.2009 by Bochkanov Sergey *************************************************************************/void polynomialbuildcheb1(double a, double b, const ap::real_1d_array& y, int n, barycentricinterpolant& p);
Examples: polint_cheb1
polynomialbuildcheb2
function/************************************************************************* Lagrange interpolant on Chebyshev grid (second kind). This function has O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] Y - function values at the nodes, array[0..N-1], Y[I] = Y(0.5*(B+A) + 0.5*(B-A)*Cos(PI*i/(n-1))) N - number of points, N>=1 for N=1 a constant model is constructed. OUTPUT PARAMETERS P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 03.12.2009 by Bochkanov Sergey *************************************************************************/void polynomialbuildcheb2(double a, double b, const ap::real_1d_array& y, int n, barycentricinterpolant& p);
Examples: polint_cheb2
polynomialbuildeqdist
function/************************************************************************* Lagrange interpolant: generation of the model on equidistant grid. This function has O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] Y - function values at the nodes, array[0..N-1] N - number of points, N>=1 for N=1 a constant model is constructed. OUTPUT PARAMETERS: P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 03.12.2009 by Bochkanov Sergey *************************************************************************/void polynomialbuildeqdist(double a, double b, const ap::real_1d_array& y, int n, barycentricinterpolant& p);
Examples: polint_eqdist
polynomialcalccheb1
function/************************************************************************* Fast polynomial interpolation function on Chebyshev points (first kind) with O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on Chebyshev grid (first kind), X[i] = 0.5*(B+A) + 0.5*(B-A)*Cos(PI*(2*i+1)/(2*n)) for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT: value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildCheb1()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/double polynomialcalccheb1(double a, double b, const ap::real_1d_array& f, int n, double t);
polynomialcalccheb2
function/************************************************************************* Fast polynomial interpolation function on Chebyshev points (second kind) with O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on Chebyshev grid (second kind), X[i] = 0.5*(B+A) + 0.5*(B-A)*Cos(PI*i/(n-1)) for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT: value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildCheb2()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/double polynomialcalccheb2(double a, double b, const ap::real_1d_array& f, int n, double t);
polynomialcalceqdist
function/************************************************************************* Fast equidistant polynomial interpolation function with O(N) complexity INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on equidistant grid, N>=1 for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT: value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildEqDist()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/double polynomialcalceqdist(double a, double b, const ap::real_1d_array& f, int n, double t);
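A minimal sketch of the fast equidistant interface (not one of the original examples; the header name polint.h is an assumption). It samples sin(x) on a small equidistant grid and evaluates the Lagrange interpolant directly, which is acceptable here because the data is small and well-behaved:

    #include <stdio.h>
    #include <math.h>
    #include "polint.h"   // assumed header name for this unit

    int main()
    {
        // sample sin(x) on a 5-point equidistant grid over [0,pi]
        int n = 5;
        ap::real_1d_array f;
        f.setlength(n);
        for(int i = 0; i <= n-1; i++)
            f(i) = sin(ap::pi()*i/(n-1));

        // fast evaluation of the Lagrange interpolant at t=1.0
        double t = 1.0;
        printf("P(1.0) = %0.4lf, sin(1.0) = %0.4lf\n",
               polynomialcalceqdist(0.0, ap::pi(), f, n, t), sin(t));
        return 0;
    }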
polynomialfit
function/************************************************************************* Least squares fitting by polynomial. This subroutine is a "lightweight" alternative to the more complex and feature-rich PolynomialFitWC(). See PolynomialFitWC() for more information about subroutine parameters (we don't duplicate them here because of length). -- ALGLIB PROJECT -- Copyright 12.10.2009 by Bochkanov Sergey *************************************************************************/void polynomialfit(const ap::real_1d_array& x, const ap::real_1d_array& y, int n, int m, int& info, barycentricinterpolant& p, polynomialfitreport& rep);
Examples: polint_fit
polynomialfitwc
function/************************************************************************* Weighted fitting by Chebyshev polynomial in barycentric form, with constraints on function values or first derivatives. A small regularizing term is used when solving constrained tasks (to improve stability). The task is linear, so a linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by the least squares solver. SEE ALSO: PolynomialFit() INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in the squared sum of approximation deviations from the given values is multiplied by the square of the corresponding weight. Fill it with 1's if you don't want to solve a weighted task. N - number of points, N>0. XC - points where polynomial values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that P(XC[i])=YC[i] * DC[i]=1 means that P'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions (= polynomial_degree + 1), M>=1 OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints -1 means other errors in parameters passed (N<=0, for example) P - interpolant in barycentric form. Rep - report, same format as in LSFitLinearW() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * even simple constraints can be inconsistent, see Wikipedia article on this subject: http://en.wikipedia.org/wiki/Birkhoff_interpolation * the greater M is (for a fixed set of constraints), the higher the chances that the constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in one special case, however, we can guarantee consistency. This case is: M>1 and constraints on the function values (NOT DERIVATIVES) Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 10.12.2009 by Bochkanov Sergey *************************************************************************/void polynomialfitwc(ap::real_1d_array x, ap::real_1d_array y, const ap::real_1d_array& w, int n, ap::real_1d_array xc, ap::real_1d_array yc, const ap::integer_1d_array& dc, int k, int m, int& info, barycentricinterpolant& p, polynomialfitreport& rep);
Examples: polint_fit
polint_cheb1 example:

    ap::real_1d_array y;
    int n;
    int i;
    double t;
    barycentricinterpolant p;
    double v;
    double dv;
    double d2v;
    double err;
    double maxerr;

    //
    // Demonstration
    //
    printf("POLYNOMIAL INTERPOLATION\n\n");
    printf("F(x)=sin(x), [0, pi]\n");
    printf("Second degree polynomial is used\n\n");

    //
    // Create polynomial interpolant
    //
    n = 3;
    y.setlength(n);
    for(i = 0; i <= n-1; i++)
    {
        y(i) = sin(0.5*ap::pi()*(1.0+cos(ap::pi()*(2*i+1)/(2*n))));
    }
    polynomialbuildcheb1(double(0), ap::pi(), y, n, p);

    //
    // Output results
    //
    barycentricdiff2(p, double(0), v, dv, d2v);
    printf(" P(x) F(x) \n");
    printf("function %6.3lf %6.3lf \n", double(barycentriccalc(p, double(0))), double(0));
    printf("d/dx(0) %6.3lf %6.3lf \n", double(dv), double(1));
    printf("d2/dx2(0) %6.3lf %6.3lf \n", double(d2v), double(0));
    printf("\n\n");
polint_cheb2 example:

    ap::real_1d_array y;
    int n;
    int i;
    double t;
    barycentricinterpolant p;
    double v;
    double dv;
    double d2v;
    double err;
    double maxerr;

    //
    // Demonstration
    //
    printf("POLYNOMIAL INTERPOLATION\n\n");
    printf("F(x)=sin(x), [0, pi]\n");
    printf("Second degree polynomial is used\n\n");

    //
    // Create polynomial interpolant
    //
    n = 3;
    y.setlength(n);
    for(i = 0; i <= n-1; i++)
    {
        y(i) = sin(0.5*ap::pi()*(1.0+cos(ap::pi()*i/(n-1))));
    }
    polynomialbuildcheb2(double(0), ap::pi(), y, n, p);

    //
    // Output results
    //
    barycentricdiff2(p, double(0), v, dv, d2v);
    printf(" P(x) F(x) \n");
    printf("function %6.3lf %6.3lf \n", double(barycentriccalc(p, double(0))), double(0));
    printf("d/dx(0) %6.3lf %6.3lf \n", double(dv), double(1));
    printf("d2/dx2(0) %6.3lf %6.3lf \n", double(d2v), double(0));
    printf("\n\n");
polint_eqdist example:

    ap::real_1d_array y;
    int n;
    int i;
    double t;
    barycentricinterpolant p;
    double v;
    double dv;
    double d2v;
    double err;
    double maxerr;

    //
    // Demonstration
    //
    printf("POLYNOMIAL INTERPOLATION\n\n");
    printf("F(x)=sin(x), [0, pi]\n");
    printf("Second degree polynomial is used\n\n");

    //
    // Create polynomial interpolant
    //
    n = 3;
    y.setlength(n);
    for(i = 0; i <= n-1; i++)
    {
        y(i) = sin(ap::pi()*i/(n-1));
    }
    polynomialbuildeqdist(double(0), ap::pi(), y, n, p);

    //
    // Output results
    //
    barycentricdiff2(p, double(0), v, dv, d2v);
    printf(" P(x) F(x) \n");
    printf("function %6.3lf %6.3lf \n", double(barycentriccalc(p, double(0))), double(0));
    printf("d/dx(0) %6.3lf %6.3lf \n", double(dv), double(1));
    printf("d2/dx2(0) %6.3lf %6.3lf \n", double(d2v), double(0));
    printf("\n\n");
polint_fit example:

    int m;
    int n;
    ap::real_1d_array x;
    ap::real_1d_array y;
    ap::real_1d_array w;
    ap::real_1d_array xc;
    ap::real_1d_array yc;
    ap::integer_1d_array dc;
    polynomialfitreport rep;
    int info;
    barycentricinterpolant p;
    int i;
    int j;
    double a;
    double b;
    double v;
    double dv;

    printf("\n\nFitting exp(2*x) at [-1,+1] by polynomial\n\n");
    printf("Fit type rms.err max.err p(0) dp(0)\n");

    //
    // Prepare points
    //
    m = 5;
    a = -1;
    b = +1;
    n = 1000;
    x.setlength(n);
    y.setlength(n);
    w.setlength(n);
    for(i = 0; i <= n-1; i++)
    {
        x(i) = a+(b-a)*i/(n-1);
        y(i) = exp(2*x(i));
        w(i) = 1.0;
    }

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5th degree polynomial
    // c) without constraints
    //
    polynomialfit(x, y, n, m, info, p, rep);
    barycentricdiff1(p, 0.0, v, dv);
    printf("Unconstrained %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv));

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5th degree polynomial
    // c) constrained: p(0)=1
    //
    xc.setlength(1);
    yc.setlength(1);
    dc.setlength(1);
    xc(0) = 0;
    yc(0) = 1;
    dc(0) = 0;
    polynomialfitwc(x, y, w, n, xc, yc, dc, 1, m, info, p, rep);
    barycentricdiff1(p, 0.0, v, dv);
    printf("Constrained, p(0)=1 %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv));

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5th degree polynomial
    // c) constrained: dp(0)=2
    //
    xc.setlength(1);
    yc.setlength(1);
    dc.setlength(1);
    xc(0) = 0;
    yc(0) = 2;
    dc(0) = 1;
    polynomialfitwc(x, y, w, n, xc, yc, dc, 1, m, info, p, rep);
    barycentricdiff1(p, 0.0, v, dv);
    printf("Constrained, dp(0)=2 %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv));

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5th degree polynomial
    // c) constrained: p(0)=1, dp(0)=2
    //
    xc.setlength(2);
    yc.setlength(2);
    dc.setlength(2);
    xc(0) = 0;
    yc(0) = 1;
    dc(0) = 0;
    xc(1) = 0;
    yc(1) = 2;
    dc(1) = 1;
    polynomialfitwc(x, y, w, n, xc, yc, dc, 2, m, info, p, rep);
    barycentricdiff1(p, 0.0, v, dv);
    printf("Constrained, both %7.4lf %7.4lf %7.4lf %7.4lf\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv));
    printf("\n\n");
polint_gen example:

    ap::real_1d_array x;
    ap::real_1d_array y;
    int n;
    int i;
    double t;
    barycentricinterpolant p;
    double v;
    double dv;
    double d2v;
    double err;
    double maxerr;

    //
    // Demonstration
    //
    printf("POLYNOMIAL INTERPOLATION\n\n");
    printf("F(x)=sin(x), [0, pi]\n");
    printf("Second degree polynomial is used\n\n");

    //
    // Create polynomial interpolant
    //
    n = 3;
    x.setlength(n);
    y.setlength(n);
    for(i = 0; i <= n-1; i++)
    {
        x(i) = ap::pi()*i/(n-1);
        y(i) = sin(x(i));
    }
    polynomialbuild(x, y, n, p);

    //
    // Output results
    //
    barycentricdiff2(p, double(0), v, dv, d2v);
    printf(" P(x) F(x) \n");
    printf("function %6.3lf %6.3lf \n", double(barycentriccalc(p, double(0))), double(0));
    printf("d/dx(0) %6.3lf %6.3lf \n", double(dv), double(1));
    printf("d2/dx2(0) %6.3lf %6.3lf \n", double(d2v), double(0));
    printf("\n\n");
psif
unitpsi
function/************************************************************************* Psi (digamma) function psi(x) = d/dx ln(Gamma(x)) is the logarithmic derivative of the gamma function. For integer x, psi(n) = -EUL + SUM(k=1..n-1, 1/k). This formula is used for 0 < n <= 10. If x is negative, it is transformed to a positive argument by the reflection formula psi(1-x) = psi(x) + pi*cot(pi*x). For general positive x, the argument is made greater than 10 using the recurrence psi(x+1) = psi(x) + 1/x. Then the following asymptotic expansion is applied: psi(x) = log(x) - 1/(2*x) - SUM(k=1..inf, B[2k]/(2*k*x^(2k))) where the B[2k] are Bernoulli numbers. ACCURACY: Relative error (except absolute when |psi| < 1): arithmetic domain # trials peak rms IEEE 0,30 30000 1.3e-15 1.4e-16 IEEE -30,0 40000 1.5e-15 2.2e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1992, 2000 by Stephen L. Moshier *************************************************************************/double psi(double x);
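A minimal sketch (not one of the original examples; the header name psif.h is an assumption). It evaluates psi at an integer argument and checks the recurrence quoted above:

    #include <stdio.h>
    #include "psif.h"   // assumed header name for this unit

    int main()
    {
        // psi(1) equals minus the Euler-Mascheroni constant, about -0.57722
        printf("psi(1) = %0.5lf\n", psi(1.0));

        // recurrence check: psi(x+1) - psi(x) should equal 1/x
        double x = 3.5;
        printf("psi(4.5)-psi(3.5) = %0.5lf, 1/3.5 = %0.5lf\n", psi(x+1)-psi(x), 1/x);
        return 0;
    }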
pspline
unitpspline2interpolant
structure/************************************************************************* Parametric spline interpolant: 2-dimensional curve. You should not try to access its members directly - use PSpline2XXXXXXXX() functions instead. *************************************************************************/struct pspline2interpolant { int n; bool periodic; ap::real_1d_array p; spline1dinterpolant x; spline1dinterpolant y; };
pspline3interpolant
structure/************************************************************************* Parametric spline interpolant: 3-dimensional curve. You should not try to access its members directly - use PSpline3XXXXXXXX() functions instead. *************************************************************************/struct pspline3interpolant { int n; bool periodic; ap::real_1d_array p; spline1dinterpolant x; spline1dinterpolant y; spline1dinterpolant z; };
pspline2arclength
function/************************************************************************* This function calculates arc length, i.e. length of curve between t=a and t=b. INPUT PARAMETERS: P - parametric spline interpolant A,B - parameter values corresponding to arc ends: * B>A will result in positive length returned * B<A will result in negative length returned RESULT: length of arc starting at T=A and ending at T=B. -- ALGLIB PROJECT -- Copyright 30.05.2010 by Bochkanov Sergey *************************************************************************/double pspline2arclength(const pspline2interpolant& p, double a, double b);
pspline2build
function/************************************************************************* This function builds non-periodic 2-dimensional parametric spline which starts at (X[0],Y[0]) and ends at (X[N-1],Y[N-1]). INPUT PARAMETERS: XY - points, array[0..N-1,0..1]. XY[I,0:1] corresponds to the Ith point. Order of points is important! N - points count, N>=5 for Akima splines, N>=2 for other types of splines. ST - spline type: * 0 Akima spline * 1 parabolically terminated Catmull-Rom spline (Tension=0) * 2 parabolically terminated cubic spline PT - parameterization type: * 0 uniform * 1 chord length * 2 centripetal OUTPUT PARAMETERS: P - parametric spline interpolant NOTES: * this function assumes that all consecutive points are distinct, i.e. (x0,y0)<>(x1,y1), (x1,y1)<>(x2,y2), (x2,y2)<>(x3,y3) and so on. However, non-consecutive points may coincide, i.e. we can have (x0,y0)=(x2,y2). -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline2build(ap::real_2d_array xy, int n, int st, int pt, pspline2interpolant& p);
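A minimal sketch (not one of the original examples; the header name pspline.h is an assumption). It builds a non-periodic 2D parametric spline through points on a quarter circle and evaluates it with PSpline2Calc(), documented below:

    #include <stdio.h>
    #include <math.h>
    #include "pspline.h"   // assumed header name for this unit

    int main()
    {
        // five points on the unit quarter-circle, ordered counter-clockwise
        int n = 5;
        ap::real_2d_array xy;
        xy.setlength(n, 2);
        for(int i = 0; i <= n-1; i++)
        {
            double a = 0.5*ap::pi()*i/(n-1);
            xy(i,0) = cos(a);
            xy(i,1) = sin(a);
        }

        // Catmull-Rom spline (ST=1) with chord-length parameterization (PT=1)
        pspline2interpolant p;
        pspline2build(xy, n, 1, 1, p);

        // value at the middle of the parameter range;
        // for this symmetric data it coincides with the middle node (cos(pi/4), sin(pi/4))
        double x, y;
        pspline2calc(p, 0.5, x, y);
        printf("P(0.5) = (%0.3lf, %0.3lf)\n", x, y);
        return 0;
    }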
pspline2buildperiodic
function/************************************************************************* This function builds periodic 2-dimensional parametric spline which starts at (X[0],Y[0]), goes through all points to (X[N-1],Y[N-1]) and then back to (X[0],Y[0]). INPUT PARAMETERS: XY - points, array[0..N-1,0..1]. XY[I,0:1] corresponds to the Ith point. XY[N-1,0:1] must be different from XY[0,0:1]. Order of points is important! N - points count, N>=3. ST - spline type: * 1 Catmull-Rom spline (Tension=0) with cyclic boundary conditions * 2 cubic spline with cyclic boundary conditions PT - parameterization type: * 0 uniform * 1 chord length * 2 centripetal OUTPUT PARAMETERS: P - parametric spline interpolant NOTES: * this function assumes that all consecutive points are distinct, i.e. (x0,y0)<>(x1,y1), (x1,y1)<>(x2,y2), (x2,y2)<>(x3,y3) and so on. However, non-consecutive points may coincide, i.e. we can have (x0,y0)=(x2,y2). * the last point of the sequence must NOT be equal to the first one; you shouldn't make the curve "explicitly periodic" by making them equal. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline2buildperiodic(ap::real_2d_array xy, int n, int st, int pt, pspline2interpolant& p);
pspline2calc
function/************************************************************************* This function calculates the value of the parametric spline for a given value of parameter T INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-position Y - Y-position -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline2calc(const pspline2interpolant& p, double t, double& x, double& y);
pspline2diff
function/************************************************************************* This function calculates derivative, i.e. it returns (dX/dT,dY/dT). INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - X-derivative Y - Y-value DY - Y-derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline2diff(const pspline2interpolant& p, double t, double& x, double& dx, double& y, double& dy);
pspline2diff2
function/************************************************************************* This function calculates first and second derivative with respect to T. INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - derivative D2X - second derivative Y - Y-value DY - derivative D2Y - second derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline2diff2(const pspline2interpolant& p, double t, double& x, double& dx, double& d2x, double& y, double& dy, double& d2y);
pspline2parametervalues
function/************************************************************************* This function returns vector of parameter values corresponding to points. I.e. for P created from (X[0],Y[0])...(X[N-1],Y[N-1]) and U=TValues(P) we have (X[0],Y[0]) = PSpline2Calc(P,U[0]), (X[1],Y[1]) = PSpline2Calc(P,U[1]), (X[2],Y[2]) = PSpline2Calc(P,U[2]), ... INPUT PARAMETERS: P - parametric spline interpolant OUTPUT PARAMETERS: N - array size T - array[0..N-1] NOTES: * for non-periodic splines U[0]=0, U[0]<U[1]<...<U[N-1], U[N-1]=1 * for periodic splines U[0]=0, U[0]<U[1]<...<U[N-1], U[N-1]<1 -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline2parametervalues(const pspline2interpolant& p, int& n, ap::real_1d_array& t);
pspline2tangent
function/************************************************************************* This function calculates tangent vector for a given value of parameter T INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-component of tangent vector (normalized) Y - Y-component of tangent vector (normalized) NOTE: X^2+Y^2 is either 1 (for non-zero tangent vector) or 0. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline2tangent(const pspline2interpolant& p, double t, double& x, double& y);
pspline3arclength
function/************************************************************************* This function calculates arc length, i.e. length of curve between t=a and t=b. INPUT PARAMETERS: P - parametric spline interpolant A,B - parameter values corresponding to arc ends: * B>A will result in positive length returned * B<A will result in negative length returned RESULT: length of arc starting at T=A and ending at T=B. -- ALGLIB PROJECT -- Copyright 30.05.2010 by Bochkanov Sergey *************************************************************************/double pspline3arclength(const pspline3interpolant& p, double a, double b);
pspline3build
function/************************************************************************* This function builds non-periodic 3-dimensional parametric spline which starts at (X[0],Y[0],Z[0]) and ends at (X[N-1],Y[N-1],Z[N-1]). Same as PSpline2Build() function, but for 3D, so we won't duplicate its description here. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline3build(ap::real_2d_array xy, int n, int st, int pt, pspline3interpolant& p);
pspline3buildperiodic
function/************************************************************************* This function builds periodic 3-dimensional parametric spline which starts at (X[0],Y[0],Z[0]), goes through all points to (X[N-1],Y[N-1],Z[N-1]) and then back to (X[0],Y[0],Z[0]). Same as PSpline2BuildPeriodic() function, but for 3D, so we won't duplicate its description here. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline3buildperiodic(ap::real_2d_array xy, int n, int st, int pt, pspline3interpolant& p);
pspline3calc
function/************************************************************************* This function calculates the value of the parametric spline for a given value of parameter T. INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-position Y - Y-position Z - Z-position -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline3calc(const pspline3interpolant& p, double t, double& x, double& y, double& z);
pspline3diff
function/************************************************************************* This function calculates derivative, i.e. it returns (dX/dT,dY/dT,dZ/dT). INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - X-derivative Y - Y-value DY - Y-derivative Z - Z-value DZ - Z-derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline3diff(const pspline3interpolant& p, double t, double& x, double& dx, double& y, double& dy, double& z, double& dz);
pspline3diff2
function/************************************************************************* This function calculates first and second derivative with respect to T. INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - derivative D2X - second derivative Y - Y-value DY - derivative D2Y - second derivative Z - Z-value DZ - derivative D2Z - second derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline3diff2(const pspline3interpolant& p, double t, double& x, double& dx, double& d2x, double& y, double& dy, double& d2y, double& z, double& dz, double& d2z);
pspline3parametervalues
function/************************************************************************* This function returns vector of parameter values corresponding to points. Same as PSpline2ParameterValues(), but for 3D. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline3parametervalues(const pspline3interpolant& p, int& n, ap::real_1d_array& t);
pspline3tangent
function/************************************************************************* This function calculates tangent vector for a given value of parameter T INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-component of tangent vector (normalized) Y - Y-component of tangent vector (normalized) Z - Z-component of tangent vector (normalized) NOTE: X^2+Y^2+Z^2 is either 1 (for non-zero tangent vector) or 0. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/void pspline3tangent(const pspline3interpolant& p, double t, double& x, double& y, double& z);
ratint
unitbarycentricfitreport
structure/************************************************************************* Barycentric fitting report: TaskRCond reciprocal of task's condition number DBest best value of the D parameter RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error *************************************************************************/struct barycentricfitreport { double taskrcond; int dbest; double rmserror; double avgerror; double avgrelerror; double maxerror; };
barycentricinterpolant
structure/************************************************************************* Barycentric interpolant. *************************************************************************/struct barycentricinterpolant { int n; double sy; ap::real_1d_array x; ap::real_1d_array y; ap::real_1d_array w; };
barycentricbuildfloaterhormann
function/************************************************************************* Rational interpolant without poles The subroutine constructs the rational interpolating function without real poles (see 'Barycentric rational interpolation with no poles and high rates of approximation', Michael S. Floater and Kai Hormann, for more information on this subject). Input parameters: X - interpolation nodes, array[0..N-1]. Y - function values, array[0..N-1]. N - number of nodes, N>0. D - order of the interpolation scheme, 0 <= D <= N-1. D<0 will cause an error. D>=N will be replaced with D=N-1. If you don't know what D to choose, use a small value, about 3-5. Output parameters: B - barycentric interpolant. Note: this algorithm always succeeds and calculates the weights with close to machine precision. -- ALGLIB PROJECT -- Copyright 17.06.2007 by Bochkanov Sergey *************************************************************************/void barycentricbuildfloaterhormann(const ap::real_1d_array& x, const ap::real_1d_array& y, int n, int d, barycentricinterpolant& b);
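A minimal sketch (not one of the original examples; the header name ratint.h is an assumption). It builds a pole-free Floater-Hormann interpolant of the Runge function and evaluates it with BarycentricCalc(), documented below:

    #include <stdio.h>
    #include <math.h>
    #include "ratint.h"   // assumed header name for this unit

    int main()
    {
        // equidistant samples of the Runge function 1/(1+25*x^2) on [-1,+1]
        int n = 11;
        ap::real_1d_array x, y;
        x.setlength(n);
        y.setlength(n);
        for(int i = 0; i <= n-1; i++)
        {
            x(i) = -1.0+2.0*i/(n-1);
            y(i) = 1.0/(1.0+25.0*x(i)*x(i));
        }

        // pole-free rational interpolant of order D=3
        barycentricinterpolant b;
        barycentricbuildfloaterhormann(x, y, n, 3, b);
        printf("R(0.05) = %0.4lf, f(0.05) = %0.4lf\n",
               barycentriccalc(b, 0.05), 1.0/(1.0+25.0*0.05*0.05));
        return 0;
    }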
barycentricbuildxyw
function/************************************************************************* Rational interpolant from X/Y/W arrays F(t) = SUM(i=0,n-1,w[i]*f[i]/(t-x[i])) / SUM(i=0,n-1,w[i]/(t-x[i])) INPUT PARAMETERS: X - interpolation nodes, array[0..N-1] F - function values, array[0..N-1] W - barycentric weights, array[0..N-1] N - nodes count, N>0 OUTPUT PARAMETERS: B - barycentric interpolant built from (X, Y, W) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricbuildxyw(const ap::real_1d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& w, int n, barycentricinterpolant& b);
barycentriccalc
function/************************************************************************* Rational interpolation using barycentric formula F(t) = SUM(i=0,n-1,w[i]*f[i]/(t-x[i])) / SUM(i=0,n-1,w[i]/(t-x[i])) Input parameters: B - barycentric interpolant built with one of model building subroutines. T - interpolation point Result: barycentric interpolant F(t) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/double barycentriccalc(const barycentricinterpolant& b, double t);
barycentriccopy
function/************************************************************************* Copying of the barycentric interpolant INPUT PARAMETERS: B - barycentric interpolant OUTPUT PARAMETERS: B2 - copy of B -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void barycentriccopy(const barycentricinterpolant& b, barycentricinterpolant& b2);
barycentricdiff1
function/************************************************************************* Differentiation of barycentric interpolant: first derivative. Algorithm used in this subroutine is very robust and should not fail unless provided with values too close to MaxRealNumber (usually MaxRealNumber/N or greater will overflow). INPUT PARAMETERS: B - barycentric interpolant built with one of model building subroutines. T - interpolation point OUTPUT PARAMETERS: F - barycentric interpolant at T DF - first derivative -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricdiff1(const barycentricinterpolant& b, double t, double& f, double& df);
barycentricdiff2
function/************************************************************************* Differentiation of barycentric interpolant: first/second derivatives. INPUT PARAMETERS: B - barycentric interpolant built with one of model building subroutines. T - interpolation point OUTPUT PARAMETERS: F - barycentric interpolant at T DF - first derivative D2F - second derivative NOTE: this algorithm may fail due to overflow/underflow if used on data whose values are close to MaxRealNumber or MinRealNumber. Use more robust BarycentricDiff1() subroutine in such cases. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricdiff2(const barycentricinterpolant& b, double t, double& f, double& df, double& d2f);
barycentricfitfloaterhormann
function/************************************************************************* Rational least squares fitting, without weights and constraints. See BarycentricFitFloaterHormannWC() for more information. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricfitfloaterhormann(const ap::real_1d_array& x, const ap::real_1d_array& y, int n, int m, int& info, barycentricinterpolant& b, barycentricfitreport& rep);
Examples: ratint_fit
barycentricfitfloaterhormannwc
function/************************************************************************* Weighted rational least squares fitting using Floater-Hormann rational functions with optimal D chosen from [0,9], with constraints and individual weights. An equidistant grid with M nodes on [min(x),max(x)] is used to build basis functions. Different values of D are tried, optimal D (least WEIGHTED root mean square error) is chosen. The task is linear, so a linear least squares solver is used. Complexity of this computational scheme is O(N*M^2) (mostly dominated by the least squares solver). SEE ALSO * BarycentricFitFloaterHormann(), "lightweight" fitting without individual weights and constraints. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in the squared sum of approximation deviations from the given values is multiplied by the square of the corresponding weight. Fill it with 1's if you don't want to solve a weighted task. N - number of points, N>0. XC - points where function values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions ( = number_of_nodes), M>=2. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints -1 means other errors in parameters passed (N<=0, for example) B - barycentric interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. Following fields are set: * DBest best value of the D parameter * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained barycentric interpolants: * excessive constraints can be inconsistent. Floater-Hormann basis functions aren't as flexible as splines (although they are very smooth). * the more evenly the constraints are spread across [min(x),max(x)], the higher the chances that they will be consistent * the greater M is (for a fixed set of constraints), the higher the chances that the constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in several special cases, however, we CAN guarantee consistency. * one of these cases is constraints on the function VALUES at the interval boundaries. Note that consistency of the constraints on the function DERIVATIVES is NOT guaranteed (in such cases you can use cubic splines, which are more flexible). * another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. 
-- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricfitfloaterhormannwc(const ap::real_1d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& w, int n, const ap::real_1d_array& xc, const ap::real_1d_array& yc, const ap::integer_1d_array& dc, int k, int m, int& info, barycentricinterpolant& b, barycentricfitreport& rep);
Examples: ratint_fit
barycentriclintransx
function/************************************************************************* This subroutine performs linear transformation of the argument. INPUT PARAMETERS: B - rational interpolant in barycentric form CA, CB - transformation coefficients: x = CA*t + CB OUTPUT PARAMETERS: B - transformed interpolant with X replaced by T -- ALGLIB PROJECT -- Copyright 19.08.2009 by Bochkanov Sergey *************************************************************************/void barycentriclintransx(barycentricinterpolant& b, double ca, double cb);
barycentriclintransy
function/************************************************************************* This subroutine performs linear transformation of the barycentric interpolant. INPUT PARAMETERS: B - rational interpolant in barycentric form CA, CB - transformation coefficients: B2(x) = CA*B(x) + CB OUTPUT PARAMETERS: B - transformed interpolant -- ALGLIB PROJECT -- Copyright 19.08.2009 by Bochkanov Sergey *************************************************************************/void barycentriclintransy(barycentricinterpolant& b, double ca, double cb);
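A minimal sketch of the two transformations (not one of the original examples; the header name ratint.h is an assumption). It builds a 3-node interpolant of sin(x) on [0,pi] via BarycentricBuildXYW() - the weights (1,-2,1) are the usual barycentric weights of a 3-point equidistant polynomial interpolant - and then remaps the argument to [0,1] and rescales the values:

    #include <stdio.h>
    #include <math.h>
    #include "ratint.h"   // assumed header name for this unit

    int main()
    {
        // nodes 0, pi/2, pi with barycentric weights (1,-2,1)
        ap::real_1d_array x, y, w;
        x.setlength(3);
        y.setlength(3);
        w.setlength(3);
        for(int i = 0; i <= 2; i++)
        {
            x(i) = ap::pi()*i/2;
            y(i) = sin(x(i));
        }
        w(0) = 1; w(1) = -2; w(2) = 1;
        barycentricinterpolant b;
        barycentricbuildxyw(x, y, w, 3, b);

        // remap the argument from [0,pi] to [0,1]: x = pi*t + 0,
        // so the new interpolant at t=0.5 equals the old one at pi/2, i.e. 1
        barycentriclintransx(b, ap::pi(), 0.0);
        printf("B(0.5)  = %0.3lf\n", barycentriccalc(b, 0.5));

        // scale function values: B2(t) = 2*B(t) + 0, so B2(0.5) should be 2
        barycentriclintransy(b, 2.0, 0.0);
        printf("B2(0.5) = %0.3lf\n", barycentriccalc(b, 0.5));
        return 0;
    }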
barycentricserialize
function/************************************************************************* Serialization of the barycentric interpolant INPUT PARAMETERS: B - barycentric interpolant OUTPUT PARAMETERS: RA - array of real numbers which contains interpolant, array[0..RLen-1] RLen - RA length -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricserialize(const barycentricinterpolant& b, ap::real_1d_array& ra, int& ralen);
barycentricunpack
function/************************************************************************* Extracts X/Y/W arrays from rational interpolant INPUT PARAMETERS: B - barycentric interpolant OUTPUT PARAMETERS: N - nodes count, N>0 X - interpolation nodes, array[0..N-1] F - function values, array[0..N-1] W - barycentric weights, array[0..N-1] -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricunpack(const barycentricinterpolant& b, int& n, ap::real_1d_array& x, ap::real_1d_array& y, ap::real_1d_array& w);
barycentricunserialize
function/************************************************************************* Unserialization of the barycentric interpolant INPUT PARAMETERS: RA - array of real numbers which contains interpolant, OUTPUT PARAMETERS: B - barycentric interpolant -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void barycentricunserialize(const ap::real_1d_array& ra, barycentricinterpolant& b);
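A minimal round-trip sketch for the two routines above (not one of the original examples; the header name ratint.h is an assumption). A 2-node linear interpolant is serialized into a real array and restored into a second interpolant, which should produce identical values:

    #include <stdio.h>
    #include "ratint.h"   // assumed header name for this unit

    int main()
    {
        // simple 2-node interpolant: the straight line through (0,2) and (1,3)
        ap::real_1d_array x, y, w;
        x.setlength(2); y.setlength(2); w.setlength(2);
        x(0) = 0; x(1) = 1;
        y(0) = 2; y(1) = 3;
        w(0) = 1; w(1) = -1;      // barycentric weights of the 2-point interpolant
        barycentricinterpolant b;
        barycentricbuildxyw(x, y, w, 2, b);

        // serialize to a real array, then restore into a new interpolant
        ap::real_1d_array ra;
        int ralen;
        barycentricserialize(b, ra, ralen);
        barycentricinterpolant b2;
        barycentricunserialize(ra, b2);

        // both should print 2.5 (the midpoint of the line)
        printf("B(0.5) = %0.3lf, restored B2(0.5) = %0.3lf\n",
               barycentriccalc(b, 0.5), barycentriccalc(b2, 0.5));
        return 0;
    }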
ratint_fit example:

    int m;
    int n;
    int d;
    ap::real_1d_array x;
    ap::real_1d_array y;
    ap::real_1d_array w;
    ap::real_1d_array xc;
    ap::real_1d_array yc;
    ap::integer_1d_array dc;
    barycentricfitreport rep;
    int info;
    barycentricinterpolant r;
    int i;
    int j;
    double a;
    double b;
    double v;
    double dv;

    printf("\n\nFitting exp(2*x) at [-1,+1] by:\n1. constrained/unconstrained Floater-Hormann functions\n");
    printf("\n");
    printf("Fit type rms.err max.err p(0) dp(0) DBest\n");

    //
    // Prepare points
    //
    m = 5;
    a = -1;
    b = +1;
    n = 10000;
    x.setlength(n);
    y.setlength(n);
    w.setlength(n);
    for(i = 0; i <= n-1; i++)
    {
        x(i) = a+(b-a)*i/(n-1);
        y(i) = exp(2*x(i));
        w(i) = 1.0;
    }

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5 Floater-Hormann functions
    // c) without constraints
    //
    barycentricfitfloaterhormann(x, y, n, m, info, r, rep);
    barycentricdiff1(r, 0.0, v, dv);
    printf("Unconstrained FH %7.4lf %7.4lf %7.4lf %7.4lf %0ld\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv), long(rep.dbest));

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5 Floater-Hormann functions
    // c) constrained: p(0)=1
    //
    xc.setlength(1);
    yc.setlength(1);
    dc.setlength(1);
    xc(0) = 0;
    yc(0) = 1;
    dc(0) = 0;
    barycentricfitfloaterhormannwc(x, y, w, n, xc, yc, dc, 1, m, info, r, rep);
    barycentricdiff1(r, 0.0, v, dv);
    printf("Constrained FH, p(0)=1 %7.4lf %7.4lf %7.4lf %7.4lf %0ld\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv), long(rep.dbest));

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5 Floater-Hormann functions
    // c) constrained: dp(0)=2
    //
    xc.setlength(1);
    yc.setlength(1);
    dc.setlength(1);
    xc(0) = 0;
    yc(0) = 2;
    dc(0) = 1;
    barycentricfitfloaterhormannwc(x, y, w, n, xc, yc, dc, 1, m, info, r, rep);
    barycentricdiff1(r, 0.0, v, dv);
    printf("Constrained FH, dp(0)=2 %7.4lf %7.4lf %7.4lf %7.4lf %0ld\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv), long(rep.dbest));

    //
    // Fitting:
    // a) f(x)=exp(2*x) at [-1,+1]
    // b) by 5 Floater-Hormann functions
    // c) constrained: p(0)=1, dp(0)=2
    //
    xc.setlength(2);
    yc.setlength(2);
    dc.setlength(2);
    xc(0) = 0;
    yc(0) = 1;
    dc(0) = 0;
    xc(1) = 0;
    yc(1) = 2;
    dc(1) = 1;
    barycentricfitfloaterhormannwc(x, y, w, n, xc, yc, dc, 2, m, info, r, rep);
    barycentricdiff1(r, 0.0, v, dv);
    printf("Constrained FH, both %7.4lf %7.4lf %7.4lf %7.4lf %0ld\n", double(rep.rmserror), double(rep.maxerror), double(v), double(dv), long(rep.dbest));
    printf("\n\n");
rcond
unitcmatrixlurcond1
function/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the CMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double cmatrixlurcond1(const ap::complex_2d_array& lua, int n);
Examples: rcond_1
cmatrixlurcondinf
function/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (infinity norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the CMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double cmatrixlurcondinf(const ap::complex_2d_array& lua, int n);
Examples: rcond_1
cmatrixrcond1
function/************************************************************************* Estimate of a matrix condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double cmatrixrcond1(ap::complex_2d_array a, int n);
Examples: rcond_1
cmatrixrcondinf
function/************************************************************************* Estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double cmatrixrcondinf(ap::complex_2d_array a, int n);
Examples: rcond_1
cmatrixtrrcond1
function/************************************************************************* Triangular matrix: estimate of a condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double cmatrixtrrcond1(const ap::complex_2d_array& a, int n, bool isupper, bool isunit);
cmatrixtrrcondinf
function/************************************************************************* Triangular matrix: estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double cmatrixtrrcondinf(const ap::complex_2d_array& a, int n, bool isupper, bool isunit);
hpdmatrixcholeskyrcond
function/************************************************************************* Condition number estimate of a Hermitian positive definite matrix given by Cholesky decomposition. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: CD - Cholesky decomposition of matrix A, output of SMatrixCholesky subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double hpdmatrixcholeskyrcond(const ap::complex_2d_array& a, int n, bool isupper);
Examples: rcond_1
hpdmatrixrcond
function/************************************************************************* Condition number estimate of a Hermitian positive definite matrix. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm of condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: A - Hermitian positive definite matrix which is given by its upper or lower triangle depending on the value of IsUpper. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)), if matrix A is positive definite, -1, if matrix A is not positive definite, and its condition number could not be found by this algorithm. NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double hpdmatrixrcond(ap::complex_2d_array a, int n, bool isupper);
Examples: rcond_1
rcondthreshold
function/************************************************************************* Threshold for rcond: matrices with condition number beyond this threshold are considered singular. Threshold must be far enough from underflow, at least Sqr(Threshold) must be greater than underflow. *************************************************************************/double rcondthreshold();
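A minimal sketch of how the threshold is typically combined with one of the estimators above (not one of the original examples; the header name rcond.h is an assumption, and the exact threshold value is implementation-defined):

    #include <stdio.h>
    #include "rcond.h"   // assumed header name for this unit

    int main()
    {
        // a 2x2 matrix which is nearly singular: its rows are almost proportional
        ap::real_2d_array a;
        a.setlength(2, 2);
        a(0,0) = 1.0;    a(0,1) = 2.0;
        a(1,0) = 1.0;    a(1,1) = 2.0+1.0E-12;

        // reciprocal condition number estimate in the 1-norm,
        // compared against the library-wide singularity threshold
        double r = rmatrixrcond1(a, 2);
        if( r<rcondthreshold() )
            printf("matrix is treated as singular, rcond = %0.2le\n", r);
        else
            printf("matrix is acceptably conditioned, rcond = %0.2le\n", r);
        return 0;
    }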
rmatrixlurcond1
function/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the RMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double rmatrixlurcond1(const ap::real_2d_array& lua, int n);
Examples: rcond_1
rmatrixlurcondinf
function/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (infinity norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the RMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double rmatrixlurcondinf(const ap::real_2d_array& lua, int n);
Examples: rcond_1
rmatrixrcond1
function/************************************************************************* Estimate of a matrix condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double rmatrixrcond1(ap::real_2d_array a, int n);
Examples: rcond_1
rmatrixrcondinf
function/************************************************************************* Estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double rmatrixrcondinf(ap::real_2d_array a, int n);
Examples: rcond_1
rmatrixtrrcond1
function/************************************************************************* Triangular matrix: estimate of a condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double rmatrixtrrcond1(const ap::real_2d_array& a, int n, bool isupper, bool isunit);
rmatrixtrrcondinf
function/************************************************************************* Triangular matrix: estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double rmatrixtrrcondinf(const ap::real_2d_array& a, int n, bool isupper, bool isunit);
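A minimal sketch for the two triangular estimators above (not one of the original examples; the header name rcond.h is an assumption). Only the upper triangle carries data; the small diagonal entry makes the matrix poorly conditioned:

    #include <stdio.h>
    #include "rcond.h"   // assumed header name for this unit

    int main()
    {
        // upper triangular matrix with one small diagonal entry
        int n = 3;
        ap::real_2d_array a;
        a.setlength(n, n);
        for(int i = 0; i <= n-1; i++)
            for(int j = 0; j <= n-1; j++)
                a(i,j) = 0.0;
        a(0,0) = 1.0;  a(0,1) = 2.0;  a(0,2) = 3.0;
                       a(1,1) = 1.0;  a(1,2) = 4.0;
                                      a(2,2) = 0.001;

        // reciprocal condition estimates in the 1-norm and infinity-norm
        printf("rcond1   = %0.2le\n", rmatrixtrrcond1(a, n, true, false));
        printf("rcondinf = %0.2le\n", rmatrixtrrcondinf(a, n, true, false));
        return 0;
    }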
spdmatrixcholeskyrcond
function/************************************************************************* Condition number estimate of a symmetric positive definite matrix given by Cholesky decomposition. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: CD - Cholesky decomposition of matrix A, output of SMatrixCholesky subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double spdmatrixcholeskyrcond(const ap::real_2d_array& a, int n, bool isupper);
Examples: rcond_1
spdmatrixrcond
function/************************************************************************* Condition number estimate of a symmetric positive definite matrix. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm of condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: A - symmetric positive definite matrix which is given by its upper or lower triangle depending on the value of IsUpper. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)), if matrix A is positive definite, -1, if matrix A is not positive definite, and its condition number could not be found by this algorithm. NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/double spdmatrixrcond(ap::real_2d_array a, int n, bool isupper);
Examples: rcond_1
int n;
int i;
int j;
double c1;
double x;
ap::real_2d_array a;

printf(" CONDITION NUMBERS\n");
printf("OF VANDERMONDE AND CHEBYSHEV INTERPOLATION MATRICES\n\n");
printf(" VANDERMONDE CHEBYSHEV\n");
printf(" N 1-norm 1-norm\n");
for(n = 2; n <= 14; n++)
{
    a.setlength(n, n);
    printf("%3ld", long(n));

    //
    // Vandermonde matrix
    //
    for(i = 0; i <= n-1; i++)
    {
        x = double(2*i)/double(n-1)-1;
        a(i,0) = 1;
        for(j = 1; j <= n-1; j++)
        {
            a(i,j) = a(i,j-1)*x;
        }
    }
    c1 = 1/rmatrixrcond1(a, n);
    printf(" %11.1lf", double(c1));

    //
    // Chebyshev interpolation matrix
    //
    for(i = 0; i <= n-1; i++)
    {
        x = double(2*i)/double(n-1)-1;
        a(i,0) = 1;
        if( n>=2 )
        {
            a(i,1) = x;
        }
        for(j = 2; j <= n-1; j++)
        {
            a(i,j) = 2*x*a(i,j-1)-a(i,j-2);
        }
    }
    c1 = 1/rmatrixrcond1(a, n);
    printf(" %11.1lf\n", double(c1));
}
schur
rmatrixschur
function/************************************************************************* Subroutine performing the Schur decomposition of a general matrix by using the QR algorithm with multiple shifts. The source matrix A is represented as S'*A*S = T, where S is an orthogonal matrix (Schur vectors), T - upper quasi-triangular matrix (with blocks of sizes 1x1 and 2x2 on the main diagonal). Input parameters: A - matrix to be decomposed. Array whose indexes range within [0..N-1, 0..N-1]. N - size of A, N>=0. Output parameters: A - contains matrix T. Array whose indexes range within [0..N-1, 0..N-1]. S - contains Schur vectors. Array whose indexes range within [0..N-1, 0..N-1]. Note 1: The block structure of matrix T can be easily recognized: since all the elements below the blocks are zeros, the elements a[i+1,i] which are equal to 0 show the block border. Note 2: The algorithm performance depends on the value of the internal parameter NS of the InternalSchurDecomposition subroutine which defines the number of shifts in the QR algorithm (similarly to the block width in block-matrix algorithms in linear algebra). If you require maximum performance on your machine, it is recommended to adjust this parameter manually. Result: True, if the algorithm has converged and parameters A and S contain the result. False, if the algorithm has not converged. Algorithm implemented on the basis of the DHSEQR subroutine (LAPACK 3.0 library). *************************************************************************/bool rmatrixschur(ap::real_2d_array& a, int n, ap::real_2d_array& s);
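A minimal sketch of a typical rmatrixschur() call; the 3x3 matrix below is arbitrary illustration data, and only the convergence flag and the subdiagonal of T are inspected.

ap::real_2d_array a;
ap::real_2d_array s;
int n;
int i;
int j;

// small non-symmetric matrix (illustration data only)
n = 3;
a.setlength(n, n);
for(i = 0; i <= n-1; i++)
{
    for(j = 0; j <= n-1; j++)
    {
        a(i,j) = double(i+2*j+1);
    }
}

// on success A is overwritten by the quasi-triangular factor T,
// S receives the Schur vectors
if( rmatrixschur(a, n, s) )
{
    // subdiagonal zeroes in T mark the borders of the 1x1/2x2 blocks
    for(i = 0; i <= n-2; i++)
    {
        printf("T[%ld,%ld] = %8.4lf\n", long(i+1), long(i), double(a(i+1,i)));
    }
}
else
{
    printf("QR algorithm did not converge\n");
}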
sdet
smatrixdet
smatrixdet
function/************************************************************************* Determinant calculation of the symmetric matrix Input parameters: A - matrix. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - if IsUpper = True, then symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used by subroutine. Similarly, if IsUpper = False, then A is given by its lower triangle. Result: determinant of matrix A. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/double smatrixdet(ap::real_2d_array a, int n, bool isupper);
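A minimal sketch of a smatrixdet() call on a 2x2 symmetric matrix given by its upper triangle (illustration data only).

ap::real_2d_array a;
double d;
int n;

// 2x2 symmetric matrix, only the upper triangle is specified
n = 2;
a.setlength(n, n);
a(0,0) = 2;
a(0,1) = 1;
a(1,1) = 3;

// determinant should be 2*3-1*1 = 5
d = smatrixdet(a, n, true);
printf("det(A) = %0.3lf\n", double(d));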
smatrixldltdet
function/************************************************************************* Determinant calculation of the matrix given by LDLT decomposition. Input parameters: A - LDLT-decomposition of the matrix, output of subroutine SMatrixLDLT. Pivots - table of permutations which were made during LDLT decomposition, output of subroutine SMatrixLDLT. N - size of matrix A. IsUpper - matrix storage format. The value is equal to the input parameter of subroutine SMatrixLDLT. Result: matrix determinant. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/double smatrixldltdet(const ap::real_2d_array& a, const ap::integer_1d_array& pivots, int n, bool isupper);
sinverse
smatrixinverse
function/************************************************************************* Inversion of a symmetric indefinite matrix Given a lower or upper triangle of matrix A, the algorithm generates matrix A^-1 and saves the lower or upper triangle depending on the input. Input parameters: A - matrix to be inverted (upper or lower triangle). Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then the upper triangle of matrix A is given, otherwise the lower triangle is given. Output parameters: A - inverse of matrix A. Array with elements [0..N-1, 0..N-1]. If IsUpper = True, then A contains the upper triangle of matrix A^-1, and the elements below the main diagonal are not used nor changed. The same applies if IsUpper = False. Result: True, if the matrix is not singular. False, if the matrix is singular and could not be inverted. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University March 31, 1993 *************************************************************************/bool smatrixinverse(ap::real_2d_array& a, int n, bool isupper);
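A minimal sketch of in-place inversion with smatrixinverse(); the 2x2 indefinite matrix below is arbitrary illustration data.

ap::real_2d_array a;
int n;

// symmetric indefinite 2x2 matrix, upper triangle only (illustration data)
n = 2;
a.setlength(n, n);
a(0,0) = 1;
a(0,1) = 2;
a(1,1) = 1;   // eigenvalues are 3 and -1: indefinite but nonsingular

if( smatrixinverse(a, n, true) )
{
    // A now holds the upper triangle of A^-1
    printf("inv(A)[0,0] = %6.3lf\n", double(a(0,0)));
    printf("inv(A)[0,1] = %6.3lf\n", double(a(0,1)));
    printf("inv(A)[1,1] = %6.3lf\n", double(a(1,1)));
}
else
{
    printf("matrix is singular\n");
}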
smatrixldltinverse
function/************************************************************************* Inversion of a symmetric indefinite matrix The algorithm gets an LDLT-decomposition as an input, generates matrix A^-1 and saves the lower or upper triangle of an inverse matrix depending on the input (U*D*U' or L*D*L'). Input parameters: A - LDLT-decomposition of the matrix, Output of subroutine SMatrixLDLT. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then the symmetric matrix is given as decomposition A = U*D*U' and this decomposition is stored in the upper triangle of matrix A and on the main diagonal, and the lower triangle of matrix A is not used. Pivots - a table of permutations, output of subroutine SMatrixLDLT. Output parameters: A - inverse of the matrix, whose LDLT-decomposition was stored in matrix A as a subroutine input. Array with elements [0..N-1, 0..N-1]. If IsUpper = True, then A contains the upper triangle of matrix A^-1, and the elements below the main diagonal are not used nor changed. The same applies if IsUpper = False. Result: True, if the matrix is not singular. False, if the matrix is singular and could not be inverted. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University March 31, 1993 *************************************************************************/bool smatrixldltinverse(ap::real_2d_array& a, const ap::integer_1d_array& pivots, int n, bool isupper);
spdgevd
smatrixgevd
function/************************************************************************* Algorithm for solving the following generalized symmetric positive-definite eigenproblem: A*x = lambda*B*x (1) or A*B*x = lambda*x (2) or B*A*x = lambda*x (3). where A is a symmetric matrix, B - symmetric positive-definite matrix. The problem is solved by reducing it to an ordinary symmetric eigenvalue problem. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrices A and B. IsUpperA - storage format of matrix A. B - symmetric positive-definite matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. IsUpperB - storage format of matrix B. ZNeeded - if ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. ProblemType - if ProblemType is equal to: * 1, the following problem is solved: A*x = lambda*B*x; * 2, the following problem is solved: A*B*x = lambda*x; * 3, the following problem is solved: B*A*x = lambda*x. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in matrix columns. It should be noted that the eigenvectors in such problems do not form an orthogonal system. Result: True, if the problem was solved successfully. False, if the error occurred during the Cholesky decomposition of matrix B (the matrix isn’t positive-definite) or during the work of the iterative algorithm for solving the symmetric eigenproblem. See also the GeneralizedSymmetricDefiniteEVDReduce subroutine. -- ALGLIB -- Copyright 1.28.2006 by Bochkanov Sergey *************************************************************************/bool smatrixgevd(ap::real_2d_array a, int n, bool isuppera, const ap::real_2d_array& b, bool isupperb, int zneeded, int problemtype, ap::real_1d_array& d, ap::real_2d_array& z);
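A minimal sketch of solving problem type 1 (A*x = lambda*B*x) with smatrixgevd(); the 2x2 matrices below are arbitrary illustration data.

ap::real_2d_array a;
ap::real_2d_array b;
ap::real_1d_array d;
ap::real_2d_array z;
int n;
int i;

// A is symmetric, B is symmetric positive-definite (illustration data)
n = 2;
a.setlength(n, n);
b.setlength(n, n);
a(0,0) = 2;  a(0,1) = 1;  a(1,1) = 2;    // upper triangle of A
b(0,0) = 4;  b(0,1) = 0;  b(1,1) = 1;    // upper triangle of B

// solve A*x = lambda*B*x, eigenvectors requested (ZNeeded=1)
if( smatrixgevd(a, n, true, b, true, 1, 1, d, z) )
{
    for(i = 0; i <= n-1; i++)
    {
        printf("lambda[%ld] = %8.4lf\n", long(i), double(d(i)));
    }
}
else
{
    printf("B is not positive definite or the eigensolver failed\n");
}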
smatrixgevdreduce
function/************************************************************************* Algorithm for reduction of the following generalized symmetric positive- definite eigenvalue problem: A*x = lambda*B*x (1) or A*B*x = lambda*x (2) or B*A*x = lambda*x (3) to the symmetric eigenvalues problem C*y = lambda*y (eigenvalues of this and the given problems are the same, and the eigenvectors of the given problem could be obtained by multiplying the obtained eigenvectors by the transformation matrix x = R*y). Here A is a symmetric matrix, B - symmetric positive-definite matrix. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrices A and B. IsUpperA - storage format of matrix A. B - symmetric positive-definite matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. IsUpperB - storage format of matrix B. ProblemType - if ProblemType is equal to: * 1, the following problem is solved: A*x = lambda*B*x; * 2, the following problem is solved: A*B*x = lambda*x; * 3, the following problem is solved: B*A*x = lambda*x. Output parameters: A - symmetric matrix which is given by its upper or lower triangle depending on IsUpperA. Contains matrix C. Array whose indexes range within [0..N-1, 0..N-1]. R - upper triangular or low triangular transformation matrix which is used to obtain the eigenvectors of a given problem as the product of eigenvectors of C (from the right) and matrix R (from the left). If the matrix is upper triangular, the elements below the main diagonal are equal to 0 (and vice versa). Thus, we can perform the multiplication without taking into account the internal structure (which is an easier though less effective way). Array whose indexes range within [0..N-1, 0..N-1]. IsUpperR - type of matrix R (upper or lower triangular). Result: True, if the problem was reduced successfully. False, if the error occurred during the Cholesky decomposition of matrix B (the matrix is not positive-definite). -- ALGLIB -- Copyright 1.28.2006 by Bochkanov Sergey *************************************************************************/bool smatrixgevdreduce(ap::real_2d_array& a, int n, bool isuppera, const ap::real_2d_array& b, bool isupperb, int problemtype, ap::real_2d_array& r, bool& isupperr);
spline1d
spline1dfitreport
structure/************************************************************************* Spline fitting report: TaskRCond reciprocal of task's condition number RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error *************************************************************************/struct spline1dfitreport { double taskrcond; double rmserror; double avgerror; double avgrelerror; double maxerror; };
spline1dinterpolant
structure/************************************************************************* 1-dimensional spline interpolant *************************************************************************/struct spline1dinterpolant { bool periodic; int n; int k; ap::real_1d_array x; ap::real_1d_array c; };
spline1dbuildakima
function/************************************************************************* This subroutine builds Akima spline interpolant INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] N - points count, N>=5 OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dbuildakima(ap::real_1d_array x, ap::real_1d_array y, int n, spline1dinterpolant& c);
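A minimal sketch in the style of the spline1d demos: Akima interpolation of sin(x) on [0, pi] with the minimum allowed number of nodes (N=5).

ap::real_1d_array x;
ap::real_1d_array y;
spline1dinterpolant s;
int n;
int i;
double t;

// Akima interpolation of F(x)=sin(x) on [0, pi], 5 nodes
n = 5;
x.setlength(n);
y.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = ap::pi()*i/(n-1);
    y(i) = sin(x(i));
}
spline1dbuildakima(x, y, n, s);

t = 0.5*ap::pi();
printf("S(pi/2) = %6.3lf, error = %6.3lf\n",
    double(spline1dcalc(s, t)),
    double(fabs(spline1dcalc(s, t)-sin(t))));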
spline1dbuildcatmullrom
function/************************************************************************* This subroutine builds Catmull-Rom spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1]. Y - function values, array[0..N-1]. N - points count, N>=2 BoundType - boundary condition type: * -1 for periodic boundary condition * 0 for parabolically terminated spline Tension - tension parameter: * tension=0 corresponds to classic Catmull-Rom spline * 0<tension<1 corresponds to more general form - cardinal spline OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dbuildcatmullrom(ap::real_1d_array x, ap::real_1d_array y, int n, int boundtype, double tension, spline1dinterpolant& c);
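A minimal sketch of a periodic Catmull-Rom spline (BoundType=-1, Tension=0) built for cos(x) on [0, 2*pi]; the node count is arbitrary.

ap::real_1d_array x;
ap::real_1d_array y;
spline1dinterpolant s;
int n;
int i;

// periodic Catmull-Rom spline for F(x)=cos(x) on [0, 2*pi]
// (Y[first]=Y[last] holds here, but the subroutine would enforce it anyway)
n = 9;
x.setlength(n);
y.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = 2*ap::pi()*i/(n-1);
    y(i) = cos(x(i));
}
spline1dbuildcatmullrom(x, y, n, -1, 0.0, s);

printf("S(pi) = %6.3lf (true function value is -1)\n", double(spline1dcalc(s, ap::pi())));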
spline1dbuildcubic
function/************************************************************************* This subroutine builds cubic spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1]. Y - function values, array[0..N-1]. N - points count, N>=2 BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dbuildcubic(ap::real_1d_array x, ap::real_1d_array y, int n, int boundltype, double boundl, int boundrtype, double boundr, spline1dinterpolant& c);
Examples: spline1d_calc spline1d_cubic
spline1dbuildhermite
function/************************************************************************* This subroutine builds Hermite spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] D - derivatives, array[0..N-1] N - points count, N>=2 OUTPUT PARAMETERS: C - spline interpolant. ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dbuildhermite(ap::real_1d_array x, ap::real_1d_array y, ap::real_1d_array d, int n, spline1dinterpolant& c);
Examples: spline1d_hermite
spline1dbuildlinear
function/************************************************************************* This subroutine builds linear spline interpolant INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] N - points count, N>=2 OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dbuildlinear(ap::real_1d_array x, ap::real_1d_array y, int n, spline1dinterpolant& c);
Examples: spline1d_linear
spline1dcalc
function/************************************************************************* This subroutine calculates the value of the spline at the given point X. INPUT PARAMETERS: C - spline interpolant X - point Result: S(x) -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/double spline1dcalc(const spline1dinterpolant& c, double x);
Examples: spline1d_calc spline1d_cubic spline1d_hermite spline1d_linear
spline1dcopy
function/************************************************************************* This subroutine makes the copy of the spline. INPUT PARAMETERS: C - spline interpolant. Result: CC - spline copy -- ALGLIB PROJECT -- Copyright 29.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dcopy(const spline1dinterpolant& c, spline1dinterpolant& cc);
spline1ddiff
function/************************************************************************* This subroutine differentiates the spline. INPUT PARAMETERS: C - spline interpolant. X - point Result: S - S(x) DS - S'(x) D2S - S''(x) -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/void spline1ddiff(const spline1dinterpolant& c, double x, double& s, double& ds, double& d2s);
Examples: spline1d_calc
spline1dfitcubic
function/************************************************************************* Least squares fitting by cubic spline. This subroutine is "lightweight" alternative for more complex and feature- rich Spline1DFitCubicWC(). See Spline1DFitCubicWC() for more information about subroutine parameters (we don't duplicate it here because of length) -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void spline1dfitcubic(const ap::real_1d_array& x, const ap::real_1d_array& y, int n, int m, int& info, spline1dinterpolant& s, spline1dfitreport& rep);
Examples: spline1d_fit
spline1dfitcubicwc
function/************************************************************************* Weighted fitting by cubic spline, with constraints on function values or derivatives. Equidistant grid with M-2 nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are cubic splines with continuous second derivatives and non-fixed first derivatives at interval ends. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO Spline1DFitHermiteWC() - fitting by Hermite splines (more flexible, less smooth) Spline1DFitCubic() - "lightweight" fitting by cubic splines, without invididual weights and constraints INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions ( = number_of_nodes+2), M>=4. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occured: -4 means inconvergence of internal SVD -3 means inconsistent constraints -1 means another errors in parameters passed (N<=0, for example) S - spline interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroitine doesn't calculate task's condition number for K<>0. ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. From the other side, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. Splines are piecewise cubic functions, and it is easy to create an example, where large number of constraints concentrated in small area will result in inconsistency. Just because spline is not flexible enough to satisfy all of them. And same constraints spread across the [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater is M (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in the several special cases, however, we CAN guarantee consistency. * one of this cases is constraints on the function values AND/OR its derivatives at the interval boundaries. 
* another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void spline1dfitcubicwc(const ap::real_1d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& w, int n, const ap::real_1d_array& xc, const ap::real_1d_array& yc, const ap::integer_1d_array& dc, int k, int m, int& info, spline1dinterpolant& s, spline1dfitreport& rep);
Examples: spline1d_fitc
spline1dfithermite
function/************************************************************************* Least squares fitting by Hermite spline. This subroutine is "lightweight" alternative for more complex and feature- rich Spline1DFitHermiteWC(). See Spline1DFitHermiteWC() description for more information about subroutine parameters (we don't duplicate it here because of length). -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void spline1dfithermite(const ap::real_1d_array& x, const ap::real_1d_array& y, int n, int m, int& info, spline1dinterpolant& s, spline1dfitreport& rep);
Examples: spline1d_fit
spline1dfithermitewc
function/************************************************************************* Weighted fitting by Hermite spline, with constraints on function values or first derivatives. Equidistant grid with M nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are Hermite splines. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO Spline1DFitCubicWC() - fitting by Cubic splines (less flexible, more smooth) Spline1DFitHermite() - "lightweight" Hermite fitting, without invididual weights and constraints INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions (= 2 * number of nodes), M>=4, M IS EVEN! OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occured: -4 means inconvergence of internal SVD -3 means inconsistent constraints -2 means odd M was passed (which is not supported) -1 means another errors in parameters passed (N<=0, for example) S - spline interpolant. Rep - report, same format as in LSFitLinearW() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroitine doesn't calculate task's condition number for K<>0. IMPORTANT: this subroitine supports only even M's ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. From the other side, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. Splines are piecewise cubic functions, and it is easy to create an example, where large number of constraints concentrated in small area will result in inconsistency. Just because spline is not flexible enough to satisfy all of them. And same constraints spread across the [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater is M (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in the several special cases, however, we can guarantee consistency. * one of this cases is M>=4 and constraints on the function value (AND/OR its derivative) at the interval boundaries. 
* another special case is M>=4 and ONE constraint on the function value (OR, BUT NOT AND, derivative) anywhere in [min(x),max(x)] Our final recommendation is to use constraints WHEN AND ONLY when you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void spline1dfithermitewc(const ap::real_1d_array& x, const ap::real_1d_array& y, const ap::real_1d_array& w, int n, const ap::real_1d_array& xc, const ap::real_1d_array& yc, const ap::integer_1d_array& dc, int k, int m, int& info, spline1dinterpolant& s, spline1dfitreport& rep);
Examples: spline1d_fitc
spline1dintegrate
function/************************************************************************* This subroutine integrates the spline. INPUT PARAMETERS: C - spline interpolant. X - right bound of the integration interval [a, x], here 'a' denotes min(x[]) Result: integral(S(t)dt,a,x) -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/double spline1dintegrate(const spline1dinterpolant& c, double x);
Examples: spline1d_calc
spline1dlintransx
function/************************************************************************* This subroutine performs linear transformation of the spline argument. INPUT PARAMETERS: C - spline interpolant. A, B- transformation coefficients: x = A*t + B Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dlintransx(spline1dinterpolant& c, double a, double b);
spline1dlintransy
function/************************************************************************* This subroutine performs linear transformation of the spline. INPUT PARAMETERS: C - spline interpolant. A, B- transformation coefficients: S2(x) = A*S(x) + B Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dlintransy(spline1dinterpolant& c, double a, double b);
spline1dunpack
function/************************************************************************* This subroutine unpacks the spline into the coefficients table. INPUT PARAMETERS: C - spline interpolant. Result: N - points count. Tbl - coefficients table, unpacked format, array[0..N-2, 0..5]. For I = 0...N-2: Tbl[I,0] = X[i] Tbl[I,1] = X[i+1] Tbl[I,2] = C0 Tbl[I,3] = C1 Tbl[I,4] = C2 Tbl[I,5] = C3 On [x[i], x[i+1]] the spline equals: S(x) = C0 + C1*t + C2*t^2 + C3*t^3, t = x-x[i] -- ALGLIB PROJECT -- Copyright 29.06.2007 by Bochkanov Sergey *************************************************************************/void spline1dunpack(const spline1dinterpolant& c, int& n, ap::real_2d_array& tbl);
ap::real_1d_array x;
ap::real_1d_array y;
int n;
int i;
double t;
spline1dinterpolant s;
double v;
double dv;
double d2v;
double err;
double maxerr;

//
// Demonstration of Spline1DCalc(), Spline1DDiff(), Spline1DIntegrate()
//
printf("DEMONSTRATION OF Spline1DCalc(), Spline1DDiff(), Spline1DIntegrate()\n\n");
printf("F(x)=sin(x), [0, pi]\n");
printf("Natural cubic spline with 3 nodes is used\n\n");

//
// Create spline
//
n = 3;
x.setlength(n);
y.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = ap::pi()*i/(n-1);
    y(i) = sin(x(i));
}
spline1dbuildcubic(x, y, n, 2, 0.0, 2, 0.0, s);

//
// Output results
//
spline1ddiff(s, double(0), v, dv, d2v);
printf(" S(x) F(x) \n");
printf("function %6.3lf %6.3lf \n", double(spline1dcalc(s, double(0))), double(0));
printf("d/dx(0) %6.3lf %6.3lf \n", double(dv), double(1));
printf("d2/dx2(0) %6.3lf %6.3lf \n", double(d2v), double(0));
printf("integral(0,pi) %6.3lf %6.3lf \n", double(spline1dintegrate(s, ap::pi())), double(2));
printf("\n\n");
ap::real_1d_array x;
ap::real_1d_array y;
int n;
int i;
double t;
spline1dinterpolant s;
double err;
double maxerr;

//
// Interpolation by natural Cubic spline.
//
printf("INTERPOLATION BY NATURAL CUBIC SPLINE\n\n");
printf("F(x)=sin(x), [0, pi], 3 nodes\n\n");
printf(" x F(x) S(x) Error\n");

//
// Create spline
//
n = 3;
x.setlength(n);
y.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = ap::pi()*i/(n-1);
    y(i) = sin(x(i));
}
spline1dbuildcubic(x, y, n, 1, double(+1), 1, double(-1), s);

//
// Output results
//
t = 0;
maxerr = 0;
while(ap::fp_less(t,0.999999*ap::pi()))
{
    err = fabs(spline1dcalc(s, t)-sin(t));
    maxerr = ap::maxreal(err, maxerr);
    printf("%6.3lf %6.3lf %6.3lf %6.3lf\n", double(t), double(sin(t)), double(spline1dcalc(s, t)), double(err));
    t = ap::minreal(ap::pi(), t+0.25);
}
err = fabs(spline1dcalc(s, ap::pi())-sin(ap::pi()));
maxerr = ap::maxreal(err, maxerr);
printf("%6.3lf %6.3lf %6.3lf %6.3lf\n\n", double(ap::pi()), double(sin(ap::pi())), double(spline1dcalc(s, ap::pi())), double(err));
printf("max|error| = %0.3lf\n", double(maxerr));
printf("Try other demos (spline1d_linear, spline1d_hermite) and compare errors...\n\n\n");
ap::real_1d_array x;
ap::real_1d_array y;
int n;
int i;
int info;
spline1dinterpolant s;
double t;
spline1dfitreport rep;

//
// Fitting by unconstrained natural cubic spline
//
printf("FITTING BY UNCONSTRAINED NATURAL CUBIC SPLINE\n\n");
printf("F(x)=sin(x) function being fitted\n");
printf("[0, pi] interval\n");
printf("M=4 number of basis functions to use\n");
printf("N=100 number of points to fit\n");

//
// Create and fit
//
n = 100;
x.setlength(n);
y.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = ap::pi()*i/(n-1);
    y(i) = sin(x(i));
}
spline1dfitcubic(x, y, n, 4, info, s, rep);

//
// Output results
//
if( info>0 )
{
    printf("\nOK, we have finished\n\n");
    printf(" x F(x) S(x) Error\n");
    t = 0;
    while(ap::fp_less(t,0.999999*ap::pi()))
    {
        printf("%6.3lf %6.3lf %6.3lf %6.3lf\n", double(t), double(sin(t)), double(spline1dcalc(s, t)), double(fabs(spline1dcalc(s, t)-sin(t))));
        t = ap::minreal(ap::pi(), t+0.25);
    }
    printf("%6.3lf %6.3lf %6.3lf %6.3lf\n\n", double(t), double(sin(t)), double(spline1dcalc(s, t)), double(fabs(spline1dcalc(s, t)-sin(t))));
    printf("rms error is %6.3lf\n", double(rep.rmserror));
    printf("max error is %6.3lf\n", double(rep.maxerror));
}
else
{
    printf("\nSomething wrong, Info=%0ld", long(info));
}
ap::real_1d_array x;
ap::real_1d_array y;
ap::real_1d_array w;
ap::real_1d_array xc;
ap::real_1d_array yc;
ap::integer_1d_array dc;
int n;
int i;
int info;
spline1dinterpolant s;
double t;
spline1dfitreport rep;

//
// Fitting by constrained Hermite spline
//
printf("FITTING BY CONSTRAINED HERMITE SPLINE\n\n");
printf("F(x)=sin(x) function being fitted\n");
printf("[0, pi] interval\n");
printf("M=6 number of basis functions to use\n");
printf("S(0)=0 first constraint\n");
printf("S(pi)=0 second constraint\n");
printf("N=100 number of points to fit\n");

//
// Create and fit:
// * X  contains points
// * Y  contains values
// * W  contains weights
// * XC contains constraints locations
// * YC contains constraints values
// * DC contains derivative indexes (0 = constrained function value)
//
n = 100;
x.setlength(n);
y.setlength(n);
w.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = ap::pi()*i/(n-1);
    y(i) = sin(x(i));
    w(i) = 1;
}
xc.setlength(2);
yc.setlength(2);
dc.setlength(2);
xc(0) = 0;
yc(0) = 0;
dc(0) = 0;
xc(1) = ap::pi();
yc(1) = 0;
dc(1) = 0;
spline1dfithermitewc(x, y, w, n, xc, yc, dc, 2, 6, info, s, rep);

//
// Output results
//
if( info>0 )
{
    printf("\nOK, we have finished\n\n");
    printf(" x F(x) S(x) Error\n");
    t = 0;
    while(ap::fp_less(t,0.999999*ap::pi()))
    {
        printf("%6.3lf %6.3lf %6.3lf %6.3lf\n", double(t), double(sin(t)), double(spline1dcalc(s, t)), double(fabs(spline1dcalc(s, t)-sin(t))));
        t = ap::minreal(ap::pi(), t+0.25);
    }
    printf("%6.3lf %6.3lf %6.3lf %6.3lf\n\n", double(t), double(sin(t)), double(spline1dcalc(s, t)), double(fabs(spline1dcalc(s, t)-sin(t))));
    printf("rms error is %6.3lf\n", double(rep.rmserror));
    printf("max error is %6.3lf\n", double(rep.maxerror));
    printf("S(0) = S(pi) = 0 (exactly)\n\n");
}
else
{
    printf("\nSomething wrong, Info=%0ld", long(info));
}
ap::real_1d_array x;
ap::real_1d_array y;
ap::real_1d_array d;
int n;
int i;
double t;
spline1dinterpolant s;
double err;
double maxerr;

//
// Interpolation by Hermite spline.
//
printf("INTERPOLATION BY HERMITE SPLINE\n\n");
printf("F(x)=sin(x), [0, pi], 3 nodes\n\n");
printf(" x F(x) S(x) Error\n");

//
// Create spline
//
n = 3;
x.setlength(n);
y.setlength(n);
d.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = ap::pi()*i/(n-1);
    y(i) = sin(x(i));
    d(i) = cos(x(i));
}
spline1dbuildhermite(x, y, d, n, s);

//
// Output results
//
t = 0;
maxerr = 0;
while(ap::fp_less(t,0.999999*ap::pi()))
{
    err = fabs(spline1dcalc(s, t)-sin(t));
    maxerr = ap::maxreal(err, maxerr);
    printf("%6.3lf %6.3lf %6.3lf %6.3lf\n", double(t), double(sin(t)), double(spline1dcalc(s, t)), double(err));
    t = ap::minreal(ap::pi(), t+0.25);
}
err = fabs(spline1dcalc(s, ap::pi())-sin(ap::pi()));
maxerr = ap::maxreal(err, maxerr);
printf("%6.3lf %6.3lf %6.3lf %6.3lf\n\n", double(ap::pi()), double(sin(ap::pi())), double(spline1dcalc(s, ap::pi())), double(err));
printf("max|error| = %0.3lf\n", double(maxerr));
printf("Try other demos (spline1d_linear, spline1d_cubic) and compare errors...\n\n\n");
ap::real_1d_array x;
ap::real_1d_array y;
int n;
int i;
double t;
spline1dinterpolant s;
double err;
double maxerr;

//
// Interpolation by linear spline.
//
printf("INTERPOLATION BY LINEAR SPLINE\n\n");
printf("F(x)=sin(x), [0, pi], 3 nodes\n\n");
printf(" x F(x) S(x) Error\n");

//
// Create spline
//
n = 3;
x.setlength(n);
y.setlength(n);
for(i = 0; i <= n-1; i++)
{
    x(i) = ap::pi()*i/(n-1);
    y(i) = sin(x(i));
}
spline1dbuildlinear(x, y, n, s);

//
// Output results
//
t = 0;
maxerr = 0;
while(ap::fp_less(t,0.999999*ap::pi()))
{
    err = fabs(spline1dcalc(s, t)-sin(t));
    maxerr = ap::maxreal(err, maxerr);
    printf("%6.3lf %6.3lf %6.3lf %6.3lf\n", double(t), double(sin(t)), double(spline1dcalc(s, t)), double(err));
    t = ap::minreal(ap::pi(), t+0.25);
}
err = fabs(spline1dcalc(s, ap::pi())-sin(ap::pi()));
maxerr = ap::maxreal(err, maxerr);
printf("%6.3lf %6.3lf %6.3lf %6.3lf\n\n", double(ap::pi()), double(sin(ap::pi())), double(spline1dcalc(s, ap::pi())), double(err));
printf("max|error| = %0.3lf\n", double(maxerr));
printf("Try other demos (spline1d_hermite, spline1d_cubic) and compare errors...\n\n\n");
spline2d
spline2dinterpolant
structure/************************************************************************* 2-dimensional spline interpolant *************************************************************************/struct spline2dinterpolant { int k; ap::real_1d_array c; };
spline2dbuildbicubic
function/************************************************************************* This subroutine builds bicubic spline coefficients table. Input parameters: X - spline abscissas, array[0..N-1] Y - spline ordinates, array[0..M-1] F - function values, array[0..M-1,0..N-1] M,N - grid size, M>=2, N>=2 Output parameters: C - spline interpolant -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/void spline2dbuildbicubic(ap::real_1d_array x, ap::real_1d_array y, ap::real_2d_array f, int m, int n, spline2dinterpolant& c);
spline2dbuildbilinear
function/************************************************************************* This subroutine builds bilinear spline coefficients table. Input parameters: X - spline abscissas, array[0..N-1] Y - spline ordinates, array[0..M-1] F - function values, array[0..M-1,0..N-1] M,N - grid size, M>=2, N>=2 Output parameters: C - spline interpolant -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/void spline2dbuildbilinear(ap::real_1d_array x, ap::real_1d_array y, ap::real_2d_array f, int m, int n, spline2dinterpolant& c);
spline2dcalc
function/************************************************************************* This subroutine calculates the value of the bilinear or bicubic spline at the given point X. Input parameters: C - coefficients table. Built by BuildBilinearSpline or BuildBicubicSpline. X, Y- point Result: S(x,y) -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/double spline2dcalc(const spline2dinterpolant& c, double x, double y);
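A minimal sketch combining spline2dbuildbilinear() and spline2dcalc(); the 3x3 grid of values below is illustration data (a bilinear spline reproduces the affine function x+2*y exactly).

ap::real_1d_array x;
ap::real_1d_array y;
ap::real_2d_array f;
spline2dinterpolant c;
int m;
int n;
int i;
int j;

// bilinear spline for F(x,y)=x+2*y on a 3x3 grid
m = 3;                      // number of Y (ordinate) nodes
n = 3;                      // number of X (abscissa) nodes
x.setlength(n);
y.setlength(m);
f.setlength(m, n);
for(i = 0; i <= m-1; i++)
{
    for(j = 0; j <= n-1; j++)
    {
        x(j) = double(j);
        y(i) = double(i);
        f(i,j) = x(j)+2*y(i);
    }
}
spline2dbuildbilinear(x, y, f, m, n, c);

printf("S(0.5,0.5) = %6.3lf (exact value is 1.5)\n", double(spline2dcalc(c, 0.5, 0.5)));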
spline2dcopy
function/************************************************************************* This subroutine makes the copy of the spline model. Input parameters: C - spline interpolant Output parameters: CC - spline copy -- ALGLIB PROJECT -- Copyright 29.06.2007 by Bochkanov Sergey *************************************************************************/void spline2dcopy(const spline2dinterpolant& c, spline2dinterpolant& cc);
spline2ddiff
function/************************************************************************* This subroutine calculates the value of the bilinear or bicubic spline at the given point X and its derivatives. Input parameters: C - spline interpolant. X, Y- point Output parameters: F - S(x,y) FX - dS(x,y)/dX FY - dS(x,y)/dY FXY - d2S(x,y)/dXdY -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/void spline2ddiff(const spline2dinterpolant& c, double x, double y, double& f, double& fx, double& fy, double& fxy);
spline2dlintransf
function/************************************************************************* This subroutine performs linear transformation of the spline. Input parameters: C - spline interpolant. A, B- transformation coefficients: S2(x,y) = A*S(x,y) + B Output parameters: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/void spline2dlintransf(spline2dinterpolant& c, double a, double b);
spline2dlintransxy
function/************************************************************************* This subroutine performs linear transformation of the spline argument. Input parameters: C - spline interpolant AX, BX - transformation coefficients: x = A*t + B AY, BY - transformation coefficients: y = A*u + B Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/void spline2dlintransxy(spline2dinterpolant& c, double ax, double bx, double ay, double by);
spline2dresamplebicubic
function/************************************************************************* Bicubic spline resampling Input parameters: A - function values at the old grid, array[0..OldHeight-1, 0..OldWidth-1] OldHeight - old grid height, OldHeight>1 OldWidth - old grid width, OldWidth>1 NewHeight - new grid height, NewHeight>1 NewWidth - new grid width, NewWidth>1 Output parameters: B - function values at the new grid, array[0..NewHeight-1, 0..NewWidth-1] -- ALGLIB routine -- 15 May, 2007 Copyright by Bochkanov Sergey *************************************************************************/void spline2dresamplebicubic(const ap::real_2d_array& a, int oldheight, int oldwidth, ap::real_2d_array& b, int newheight, int newwidth);
spline2dresamplebilinear
function/************************************************************************* Bilinear spline resampling Input parameters: A - function values at the old grid, array[0..OldHeight-1, 0..OldWidth-1] OldHeight - old grid height, OldHeight>1 OldWidth - old grid width, OldWidth>1 NewHeight - new grid height, NewHeight>1 NewWidth - new grid width, NewWidth>1 Output parameters: B - function values at the new grid, array[0..NewHeight-1, 0..NewWidth-1] -- ALGLIB routine -- 09.07.2007 Copyright by Bochkanov Sergey *************************************************************************/void spline2dresamplebilinear(const ap::real_2d_array& a, int oldheight, int oldwidth, ap::real_2d_array& b, int newheight, int newwidth);
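A minimal sketch of spline2dresamplebilinear(), upsampling a 2x2 grid of illustration values to a 3x3 grid; the output array is the routine's output parameter.

ap::real_2d_array a;
ap::real_2d_array b;
int i;
int j;

// resample a 2x2 grid of values onto a 3x3 grid by bilinear interpolation
a.setlength(2, 2);
a(0,0) = 0;  a(0,1) = 1;
a(1,0) = 2;  a(1,1) = 3;

spline2dresamplebilinear(a, 2, 2, b, 3, 3);

for(i = 0; i <= 2; i++)
{
    for(j = 0; j <= 2; j++)
    {
        printf(" %6.3lf", double(b(i,j)));
    }
    printf("\n");
}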
spline2dserialize
function/************************************************************************* Serialization of the spline interpolant INPUT PARAMETERS: C - spline interpolant OUTPUT PARAMETERS: RA - array of real numbers which contains interpolant, array[0..RLen-1] RLen - RA length -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void spline2dserialize(const spline2dinterpolant& c, ap::real_1d_array& ra, int& ralen);
spline2dunpack
function/************************************************************************* This subroutine unpacks two-dimensional spline into the coefficients table Input parameters: C - spline interpolant. Result: M, N - grid size (x-axis and y-axis) Tbl - coefficients table, unpacked format, [0..(N-1)*(M-1)-1, 0..19]. For I = 0...M-2, J=0..N-2: K = I*(N-1)+J Tbl[K,0] = X[j] Tbl[K,1] = X[j+1] Tbl[K,2] = Y[i] Tbl[K,3] = Y[i+1] Tbl[K,4] = C00 Tbl[K,5] = C01 Tbl[K,6] = C02 Tbl[K,7] = C03 Tbl[K,8] = C10 Tbl[K,9] = C11 ... Tbl[K,19] = C33 On each grid square the spline equals: S(x,y) = SUM(c[i,j]*(t^i)*(u^j), i=0..3, j=0..3), t = x-x[j], u = y-y[i] -- ALGLIB PROJECT -- Copyright 29.06.2007 by Bochkanov Sergey *************************************************************************/void spline2dunpack(const spline2dinterpolant& c, int& m, int& n, ap::real_2d_array& tbl);
spline2dunserialize
function/************************************************************************* Unserialization of the spline interpolant INPUT PARAMETERS: RA - array of real numbers which contains interpolant OUTPUT PARAMETERS: C - spline interpolant -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void spline2dunserialize(const ap::real_1d_array& ra, spline2dinterpolant& c);
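A minimal serialize/unserialize round-trip sketch: a small bilinear spline (illustration data) is packed into a real array and restored, after which both interpolants should return the same values.

ap::real_1d_array x;
ap::real_1d_array y;
ap::real_2d_array f;
ap::real_1d_array ra;
spline2dinterpolant c;
spline2dinterpolant c2;
int ralen;
int i;
int j;

// build a small bilinear spline (illustration data)
x.setlength(2);
y.setlength(2);
f.setlength(2, 2);
for(i = 0; i <= 1; i++)
{
    for(j = 0; j <= 1; j++)
    {
        x(j) = double(j);
        y(i) = double(i);
        f(i,j) = double(i+j);
    }
}
spline2dbuildbilinear(x, y, f, 2, 2, c);

// serialize into a real array, then restore into a second interpolant
spline2dserialize(c, ra, ralen);
spline2dunserialize(ra, c2);

// original and restored interpolants should agree
printf("S(0.3,0.7)  = %8.5lf\n", double(spline2dcalc(c, 0.3, 0.7)));
printf("S2(0.3,0.7) = %8.5lf\n", double(spline2dcalc(c2, 0.3, 0.7)));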
srcond
smatrixldltrcond
function/************************************************************************* Condition number estimate of a matrix given by LDLT-decomposition The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: L - LDLT-decomposition of matrix A given by the upper or lower triangle depending on IsUpper. Output of SMatrixLDLT subroutine. Pivots - table of permutations which were made during LDLT-decomposition, Output of SMatrixLDLT subroutine. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)) *************************************************************************/double smatrixldltrcond(const ap::real_2d_array& l, const ap::integer_1d_array& pivots, int n, bool isupper);
smatrixrcond
function/************************************************************************* Condition number estimate of a symmetric matrix The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: A - symmetric definite matrix which is given by its upper or lower triangle depending on IsUpper. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)) *************************************************************************/double smatrixrcond(const ap::real_2d_array& a, int n, bool isupper);
ssolve
smatrixldltsolve
function/************************************************************************* Solving a system of linear equations with a system matrix given by its LDLT decomposition The algorithm solves systems with a square matrix only. Input parameters: A - LDLT decomposition of the matrix (the result of the SMatrixLDLT subroutine). Pivots - row permutation table (the result of the SMatrixLDLT subroutine). B - right side of a system. Array whose index ranges within [0..N-1]. N - size of matrix A. IsUpper - points to the triangle of matrix A in which the LDLT decomposition is stored. If IsUpper=True, the decomposition has the form of U*D*U', matrix U is stored in the upper triangle of matrix A (in that case, the lower triangle isn't used and isn't changed by the subroutine). Similarly, if IsUpper=False, the decomposition has the form of L*D*L' and the lower triangle stores matrix L. Output parameters: X - solution of a system. Array whose index ranges within [0..N-1]. Result: True, if the matrix is not singular. X contains the solution. False, if the matrix is singular (the determinant of matrix D is equal to 0). In this case, X doesn't contain a solution. *************************************************************************/bool smatrixldltsolve(const ap::real_2d_array& a, const ap::integer_1d_array& pivots, ap::real_1d_array b, int n, bool isupper, ap::real_1d_array& x);
smatrixsolve
function/************************************************************************* Solving a system of linear equations with a symmetric system matrix Input parameters: A - system matrix (upper or lower triangle). Array whose indexes range within [0..N-1, 0..N-1]. B - right side of a system. Array whose index ranges within [0..N-1]. N - size of matrix A. IsUpper - If IsUpper = True, A contains the upper triangle, otherwise A contains the lower triangle. Output parameters: X - solution of a system. Array whose index ranges within [0..N-1]. Result: True, if the matrix is not singular. X contains the solution. False, if the matrix is singular (the determinant of the matrix is equal to 0). In this case, X doesn't contain a solution. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/bool smatrixsolve(ap::real_2d_array a, const ap::real_1d_array& b, int n, bool isupper, ap::real_1d_array& x);
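A minimal sketch of smatrixsolve() for a 2x2 symmetric system given by its upper triangle; the data below is illustration data with a known exact solution.

ap::real_2d_array a;
ap::real_1d_array b;
ap::real_1d_array x;
int n;

// symmetric 2x2 system, upper triangle of A is used
n = 2;
a.setlength(n, n);
b.setlength(n);
a(0,0) = 3;
a(0,1) = 1;
a(1,1) = 2;
b(0) = 5;
b(1) = 3;

// exact solution of [[3,1],[1,2]]*x = [5,3] is x = [1.4, 0.8]
if( smatrixsolve(a, b, n, true, x) )
{
    printf("x = [%6.3lf, %6.3lf]\n", double(x(0)), double(x(1)));
}
else
{
    printf("matrix is singular\n");
}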
stest
onesamplesigntest
function/************************************************************************* Sign test This test checks three hypotheses about the median of the given sample. The following tests are performed: * two-tailed test (null hypothesis - the median is equal to the given value) * left-tailed test (null hypothesis - the median is greater than or equal to the given value) * right-tailed test (null hypothesis - the median is less than or equal to the given value) Requirements: * the scale of measurement should be ordinal, interval or ratio (i.e. the test could not be applied to nominal variables). The test is non-parametric and doesn't require distribution X to be normal Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of the sample. Median - assumed median value. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. While calculating p-values high-precision binomial distribution approximation is used, so significance levels have about 15 exact digits. -- ALGLIB -- Copyright 08.09.2006 by Bochkanov Sergey *************************************************************************/void onesamplesigntest(const ap::real_1d_array& x, int n, double median, double& bothtails, double& lefttail, double& righttail);
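A minimal sketch of onesamplesigntest(); the seven sample values below are arbitrary illustration data, and the returned p-values are compared against a significance level chosen by the caller.

ap::real_1d_array x;
double bothtails;
double lefttail;
double righttail;
int n;

// small artificial sample; is its median equal to 0?
n = 7;
x.setlength(n);
x(0) = -0.5;  x(1) = 0.2;  x(2) = 0.7;  x(3) = 1.1;
x(4) = 1.6;   x(5) = 2.2;  x(6) = 3.0;

onesamplesigntest(x, n, 0.0, bothtails, lefttail, righttail);

// compare the p-values with the chosen significance level, e.g. 0.05
printf("two-tailed   p = %6.4lf\n", double(bothtails));
printf("left-tailed  p = %6.4lf\n", double(lefttail));
printf("right-tailed p = %6.4lf\n", double(righttail));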
studenttdistr
invstudenttdistribution
function/************************************************************************* Functional inverse of Student's t distribution Given probability p, finds the argument t such that stdtr(k,t) is equal to p. ACCURACY: Tested at random 1 <= k <= 100. The "domain" refers to p. Relative error (IEEE arithmetic): domain 0.001..0.999, 25000 trials, peak 5.7e-15, rms 8.0e-16; domain 10^-6..0.001, 25000 trials, peak 2.0e-12, rms 2.9e-14. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double invstudenttdistribution(int k, double p);
studenttdistribution
function/************************************************************************* Student's t distribution Computes the integral from minus infinity to t of the Student t distribution with integer k > 0 degrees of freedom: stdtr(k,t) = Gamma((k+1)/2) / ( sqrt(k*pi) * Gamma(k/2) ) * integral( (1 + x^2/k)^(-(k+1)/2) dx, x = -inf..t ) Relation to incomplete beta integral: 1 - stdtr(k,t) = 0.5 * incbet( k/2, 1/2, z ) where z = k/(k + t**2). For t < -2, this is the method of computation. For higher t, a direct method is derived from integration by parts. Since the function is symmetric about t=0, the area under the right tail of the density is found by calling the function with -t instead of t. ACCURACY: Tested at random 1 <= k <= 25. The "domain" refers to t. Relative error (IEEE arithmetic): domain -100..-2, 50000 trials, peak 5.9e-15, rms 1.4e-15; domain -2..100, 500000 trials, peak 2.7e-15, rms 4.9e-17. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double studenttdistribution(int k, double t);
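A minimal round-trip sketch: studenttdistribution() evaluates the CDF and invstudenttdistribution() should map the resulting probability back to the original argument; the values of k and t below are arbitrary.

int k;
double t;
double p;

// CDF and its functional inverse should round-trip
k = 5;                                    // degrees of freedom
t = 1.25;
p = studenttdistribution(k, t);           // P(T <= t)
printf("stdtr(%ld, %0.2lf) = %8.6lf\n", long(k), double(t), double(p));
printf("inverse gives t    = %8.6lf\n", double(invstudenttdistribution(k, p)));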
studentttests
studentttest1
function/************************************************************************* One-sample t-test This test checks three hypotheses about the mean of the given sample. The following tests are performed: * two-tailed test (null hypothesis - the mean is equal to the given value) * left-tailed test (null hypothesis - the mean is greater than or equal to the given value) * right-tailed test (null hypothesis - the mean is less than or equal to the given value). The test is based on the assumption that a given sample has a normal distribution and an unknown dispersion. If the distribution sharply differs from normal, the test will work incorrectly. Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of sample. Mean - assumed value of the mean. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 08.09.2006 by Bochkanov Sergey *************************************************************************/void studentttest1(const ap::real_1d_array& x, int n, double mean, double& bothtails, double& lefttail, double& righttail);
studentttest2
function/************************************************************************* Two-sample pooled test This test checks three hypotheses about the mean of the given samples. The following tests are performed: * two-tailed test (null hypothesis - the means are equal) * left-tailed test (null hypothesis - the mean of the first sample is greater than or equal to the mean of the second sample) * right-tailed test (null hypothesis - the mean of the first sample is less than or equal to the mean of the second sample). Test is based on the following assumptions: * given samples have normal distributions * dispersions are equal * samples are independent. Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of sample. Y - sample 2. Array whose index goes from 0 to M-1. M - size of sample. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 18.09.2006 by Bochkanov Sergey *************************************************************************/void studentttest2(const ap::real_1d_array& x, int n, const ap::real_1d_array& y, int m, double& bothtails, double& lefttail, double& righttail);
unequalvariancettest
function/************************************************************************* Two-sample unpooled test This test checks three hypotheses about the mean of the given samples. The following tests are performed: * two-tailed test (null hypothesis - the means are equal) * left-tailed test (null hypothesis - the mean of the first sample is greater than or equal to the mean of the second sample) * right-tailed test (null hypothesis - the mean of the first sample is less than or equal to the mean of the second sample). Test is based on the following assumptions: * given samples have normal distributions * samples are independent. Dispersion equality is not required Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of the sample. Y - sample 2. Array whose index goes from 0 to M-1. M - size of the sample. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 18.09.2006 by Bochkanov Sergey *************************************************************************/void unequalvariancettest(const ap::real_1d_array& x, int n, const ap::real_1d_array& y, int m, double& bothtails, double& lefttail, double& righttail);
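A minimal sketch contrasting the pooled and unpooled two-sample tests on two small artificial samples (illustration data only).

ap::real_1d_array x;
ap::real_1d_array y;
double bothtails;
double lefttail;
double righttail;
int i;

// two artificial samples with slightly different means
x.setlength(5);
y.setlength(6);
for(i = 0; i <= 4; i++)
{
    x(i) = 0.1*i;             // sample 1: 0.0 .. 0.4
}
for(i = 0; i <= 5; i++)
{
    y(i) = 0.3+0.1*i;         // sample 2: 0.3 .. 0.8
}

// the pooled test assumes equal dispersions ...
studentttest2(x, 5, y, 6, bothtails, lefttail, righttail);
printf("pooled test,   two-tailed p = %6.4lf\n", double(bothtails));

// ... the unpooled test does not
unequalvariancettest(x, 5, y, 6, bothtails, lefttail, righttail);
printf("unpooled test, two-tailed p = %6.4lf\n", double(bothtails));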
svd unit

rmatrixsvd function
/*************************************************************************
Singular value decomposition of a rectangular matrix.

The algorithm calculates the singular value decomposition of a matrix of
size MxN: A = U * S * V^T

The algorithm finds the singular values and, optionally, matrices U and
V^T. The algorithm can find both first min(M,N) columns of matrix U and
rows of matrix V^T (singular vectors), and matrices U and V^T wholly (of
sizes MxM and NxN respectively).

Take into account that the subroutine does not return matrix V but V^T.

Input parameters:
    A       -   matrix to be decomposed.
                Array whose indexes range within [0..M-1, 0..N-1].
    M       -   number of rows in matrix A.
    N       -   number of columns in matrix A.
    UNeeded -   0, 1 or 2. See the description of the parameter U.
    VTNeeded -  0, 1 or 2. See the description of the parameter VT.
    AdditionalMemory -
                If the parameter:
                 * equals 0, the algorithm doesn't use additional memory
                   (lower requirements, lower performance).
                 * equals 1, the algorithm uses additional memory of size
                   min(M,N)*min(M,N) of real numbers. It often speeds up
                   the algorithm.
                 * equals 2, the algorithm uses additional memory of size
                   M*min(M,N) of real numbers. It allows one to obtain
                   maximum performance.
                The recommended value of the parameter is 2.

Output parameters:
    W       -   contains singular values in descending order.
    U       -   if UNeeded=0, U isn't changed, the left singular vectors
                are not calculated.
                if UNeeded=1, U contains left singular vectors (first
                min(M,N) columns of matrix U). Array whose indexes range
                within [0..M-1, 0..Min(M,N)-1].
                if UNeeded=2, U contains matrix U wholly. Array whose
                indexes range within [0..M-1, 0..M-1].
    VT      -   if VTNeeded=0, VT isn't changed, the right singular vectors
                are not calculated.
                if VTNeeded=1, VT contains right singular vectors (first
                min(M,N) rows of matrix V^T). Array whose indexes range
                within [0..min(M,N)-1, 0..N-1].
                if VTNeeded=2, VT contains matrix V^T wholly. Array whose
                indexes range within [0..N-1, 0..N-1].

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
bool rmatrixsvd(ap::real_2d_array a,
     int m,
     int n,
     int uneeded,
     int vtneeded,
     int additionalmemory,
     ap::real_1d_array& w,
     ap::real_2d_array& u,
     ap::real_2d_array& vt);
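The sketch below decomposes a small rectangular matrix with full U and V^T requested. It is an illustrative example only: the matrix entries are made up, and the header name svd.h plus the setbounds/operator() interface of the ap:: array classes are assumed from ALGLIB 2.x conventions.

    #include <cstdio>
    #include "svd.h"   // header name assumed from the svd unit listed above

    int main()
    {
        // Decompose a 3x2 matrix A = U*S*V^T with UNeeded=VTNeeded=2 and
        // AdditionalMemory=2 (the recommended setting).
        const int m = 3, n = 2;
        ap::real_2d_array a;
        a.setbounds(0, m-1, 0, n-1);
        a(0,0)=1; a(0,1)=2;
        a(1,0)=3; a(1,1)=4;
        a(2,0)=5; a(2,1)=6;

        ap::real_1d_array w;
        ap::real_2d_array u, vt;
        if( !rmatrixsvd(a, m, n, 2, 2, 2, w, u, vt) )
        {
            printf("SVD did not converge\n");
            return 1;
        }

        // Singular values are returned in descending order, min(M,N) of them.
        for(int i = 0; i < n; i++)
            printf("s[%d] = %.6f\n", i, w(i));
        return 0;
    }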
trfac unit

cmatrixlu function
/*************************************************************************
LU decomposition of a general complex matrix with row pivoting

A is represented as A = P*L*U, where:
* L is lower unitriangular matrix
* U is upper triangular matrix
* P = P0*P1*...*PK, K=min(M,N)-1, Pi - permutation matrix for I and Pivots[I]

This is a cache-oblivious implementation of LU decomposition. It is
optimized for square matrices. As for rectangular matrices:
* best case  - M>>N
* worst case - N>>M, small M, large N, matrix does not fit in CPU cache

INPUT PARAMETERS:
    A       -   array[0..M-1, 0..N-1].
    M       -   number of rows in matrix A.
    N       -   number of columns in matrix A.

OUTPUT PARAMETERS:
    A       -   matrices L and U in compact form:
                * L is stored under main diagonal
                * U is stored on and above main diagonal
    Pivots  -   permutation matrix in compact form.
                array[0..Min(M-1,N-1)].

  -- ALGLIB routine --
     10.01.2010
     Bochkanov Sergey
*************************************************************************/
void cmatrixlu(ap::complex_2d_array& a, int m, int n, ap::integer_1d_array& pivots);
hpdmatrixcholesky function
/*************************************************************************
Cache-oblivious Cholesky decomposition

The algorithm computes Cholesky decomposition of a Hermitian positive-
definite matrix. The result of an algorithm is a representation of A as
A=U'*U or A=L*L' (here X' denotes conj(X^T)).

INPUT PARAMETERS:
    A       -   upper or lower triangle of a factorized matrix.
                array with elements [0..N-1, 0..N-1].
    N       -   size of matrix A.
    IsUpper -   if IsUpper=True, then A contains an upper triangle of a
                symmetric matrix, otherwise A contains a lower one.

OUTPUT PARAMETERS:
    A       -   the result of factorization. If IsUpper=True, then the
                upper triangle contains matrix U, so that A = U'*U, and the
                elements below the main diagonal are not modified.
                Similarly, if IsUpper = False.

RESULT:
    If the matrix is positive-definite, the function returns True.
    Otherwise, the function returns False. In this case the contents of A
    are not determined.

  -- ALGLIB routine --
     15.12.2009
     Bochkanov Sergey
*************************************************************************/
bool hpdmatrixcholesky(ap::complex_2d_array& a, int n, bool isupper);
rmatrixlu function
/*************************************************************************
LU decomposition of a general real matrix with row pivoting

A is represented as A = P*L*U, where:
* L is lower unitriangular matrix
* U is upper triangular matrix
* P = P0*P1*...*PK, K=min(M,N)-1, Pi - permutation matrix for I and Pivots[I]

This is a cache-oblivious implementation of LU decomposition. It is
optimized for square matrices. As for rectangular matrices:
* best case  - M>>N
* worst case - N>>M, small M, large N, matrix does not fit in CPU cache

INPUT PARAMETERS:
    A       -   array[0..M-1, 0..N-1].
    M       -   number of rows in matrix A.
    N       -   number of columns in matrix A.

OUTPUT PARAMETERS:
    A       -   matrices L and U in compact form:
                * L is stored under main diagonal
                * U is stored on and above main diagonal
    Pivots  -   permutation matrix in compact form.
                array[0..Min(M-1,N-1)].

  -- ALGLIB routine --
     10.01.2010
     Bochkanov Sergey
*************************************************************************/
void rmatrixlu(ap::real_2d_array& a, int m, int n, ap::integer_1d_array& pivots);
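A short sketch of the real-valued variant follows (the complex variant cmatrixlu is called the same way with ap::complex_2d_array). The matrix entries are made up; trfac.h and the ap:: array interface are assumed from ALGLIB 2.x conventions.

    #include <cstdio>
    #include "trfac.h"   // header name assumed from the trfac unit listed above

    int main()
    {
        // In-place LU factorization with row pivoting of a 3x3 matrix.
        const int n = 3;
        ap::real_2d_array a;
        a.setbounds(0, n-1, 0, n-1);
        a(0,0)=2; a(0,1)=1; a(0,2)=1;
        a(1,0)=4; a(1,1)=3; a(1,2)=3;
        a(2,0)=8; a(2,1)=7; a(2,2)=9;

        ap::integer_1d_array pivots;
        rmatrixlu(a, n, n, pivots);   // on exit A holds L below and U on/above the diagonal

        // Print the diagonal of U and the pivot indices.
        for(int i = 0; i < n; i++)
            printf("U[%d][%d] = %8.4f   pivot[%d] = %d\n", i, i, a(i,i), i, pivots(i));
        return 0;
    }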
spdmatrixcholesky function
/*************************************************************************
Cache-oblivious Cholesky decomposition

The algorithm computes Cholesky decomposition of a symmetric positive-
definite matrix. The result of an algorithm is a representation of A as
A=U^T*U or A=L*L^T

INPUT PARAMETERS:
    A       -   upper or lower triangle of a factorized matrix.
                array with elements [0..N-1, 0..N-1].
    N       -   size of matrix A.
    IsUpper -   if IsUpper=True, then A contains an upper triangle of a
                symmetric matrix, otherwise A contains a lower one.

OUTPUT PARAMETERS:
    A       -   the result of factorization. If IsUpper=True, then the
                upper triangle contains matrix U, so that A = U^T*U, and the
                elements below the main diagonal are not modified.
                Similarly, if IsUpper = False.

RESULT:
    If the matrix is positive-definite, the function returns True.
    Otherwise, the function returns False. In this case the contents of A
    are not determined.

  -- ALGLIB routine --
     15.12.2009
     Bochkanov Sergey
*************************************************************************/
bool spdmatrixcholesky(ap::real_2d_array& a, int n, bool isupper);
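The sketch below factorizes a small symmetric positive-definite matrix stored as its upper triangle (the Hermitian variant hpdmatrixcholesky is analogous with complex data). The matrix is made up; trfac.h and the ap:: array interface are assumed from ALGLIB 2.x conventions.

    #include <cstdio>
    #include "trfac.h"   // header name assumed from the trfac unit listed above

    int main()
    {
        // Cholesky factorization A = U^T*U of a 2x2 SPD matrix, upper triangle supplied.
        const int n = 2;
        ap::real_2d_array a;
        a.setbounds(0, n-1, 0, n-1);
        a(0,0)=4; a(0,1)=2;   // only the upper triangle is referenced when IsUpper=true
        a(1,1)=3;

        if( !spdmatrixcholesky(a, n, true) )
        {
            printf("matrix is not positive definite\n");
            return 1;
        }

        // For this matrix U = [[2, 1], [0, sqrt(2)]]; the strictly lower part is untouched.
        printf("U(0,0)=%.4f  U(0,1)=%.4f  U(1,1)=%.4f\n", a(0,0), a(0,1), a(1,1));
        return 0;
    }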
trigintegrals unit

hyperbolicsinecosineintegrals function
/*************************************************************************
Hyperbolic sine and cosine integrals

Approximates the integrals

    Chi(x) = eul + ln(x) + integral from 0 to x of (cosh(t) - 1)/t dt,

    Shi(x) = integral from 0 to x of sinh(t)/t dt

where eul = 0.57721566490153286061 is Euler's constant.

The integrals are evaluated by power series for x < 8 and by Chebyshev
expansions for x between 8 and 88. For large x, both functions approach
exp(x)/2x. Arguments greater than 88 in magnitude return MAXNUM.

ACCURACY:

Test interval 0 to 88.
                     Relative error:
arithmetic   function  # trials      peak         rms
   IEEE         Shi      30000       6.9e-16     1.6e-16

       Absolute error, except relative when |Chi| > 1:
   IEEE         Chi      30000       8.4e-16     1.4e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
void hyperbolicsinecosineintegrals(double x, double& shi, double& chi);
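A minimal evaluation sketch, kept inside the supported range |x| <= 88; the header name trigintegrals.h is assumed from the unit name above.

    #include <cstdio>
    #include "trigintegrals.h"   // assumed unit header providing hyperbolicsinecosineintegrals()

    int main()
    {
        // Tabulate Shi(x) and Chi(x) at a few points within the supported range.
        double xs[] = {0.5, 1.0, 2.0, 4.0};
        for(int i = 0; i < 4; i++)
        {
            double shi, chi;
            hyperbolicsinecosineintegrals(xs[i], shi, chi);
            printf("x = %4.1f   Shi = %12.8f   Chi = %12.8f\n", xs[i], shi, chi);
        }
        return 0;
    }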
sinecosineintegrals function
/*************************************************************************
Sine and cosine integrals

Evaluates the integrals

    Ci(x) = eul + ln(x) + integral from 0 to x of (cos(t) - 1)/t dt,

    Si(x) = integral from 0 to x of sin(t)/t dt

where eul = 0.57721566490153286061 is Euler's constant. The integrals are
approximated by rational functions. For x > 8 auxiliary functions f(x) and
g(x) are employed such that

    Ci(x) = f(x) sin(x) - g(x) cos(x)
    Si(x) = pi/2 - f(x) cos(x) - g(x) sin(x)

ACCURACY:

Test interval = [0,50].
Absolute error, except relative when > 1:
arithmetic   function   # trials      peak         rms
   IEEE        Si        30000       4.4e-16     7.3e-17
   IEEE        Ci        30000       6.9e-16     5.1e-17

Cephes Math Library Release 2.1:  January, 1989
Copyright 1984, 1987, 1989 by Stephen L. Moshier
*************************************************************************/
void sinecosineintegrals(double x, double& si, double& ci);
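The calling pattern mirrors the hyperbolic case. A short sketch (trigintegrals.h assumed as above):

    #include <cstdio>
    #include "trigintegrals.h"   // assumed unit header providing sinecosineintegrals()

    int main()
    {
        // Tabulate Si(x) and Ci(x); Si(x) approaches pi/2 as x grows.
        for(double x = 1.0; x <= 16.0; x *= 2.0)
        {
            double si, ci;
            sinecosineintegrals(x, si, ci);
            printf("x = %5.1f   Si = %12.8f   Ci = %12.8f\n", x, si, ci);
        }
        return 0;
    }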
variancetests unit

ftest function
/*************************************************************************
Two-sample F-test

This test checks three hypotheses about dispersions of the given samples.
The following tests are performed:
    * two-tailed test (null hypothesis - the dispersions are equal)
    * left-tailed test (null hypothesis - the dispersion of the first
      sample is greater than or equal to the dispersion of the second
      sample)
    * right-tailed test (null hypothesis - the dispersion of the first
      sample is less than or equal to the dispersion of the second sample)

The test is based on the following assumptions:
    * the given samples have normal distributions
    * the samples are independent.

Input parameters:
    X   -   sample 1. Array whose index goes from 0 to N-1.
    N   -   sample size.
    Y   -   sample 2. Array whose index goes from 0 to M-1.
    M   -   sample size.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

  -- ALGLIB --
     Copyright 19.09.2006 by Bochkanov Sergey
*************************************************************************/
void ftest(const ap::real_1d_array& x,
     int n,
     const ap::real_1d_array& y,
     int m,
     double& bothtails,
     double& lefttail,
     double& righttail);
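A usage sketch for the F-test with made-up data; variancetests.h and the ap::real_1d_array interface are assumed from ALGLIB 2.x conventions.

    #include <cstdio>
    #include "variancetests.h"   // assumed unit header providing ftest()

    int main()
    {
        // Compare the dispersions of two independent samples; H0: dispersions are equal.
        double xs[] = {10.2, 9.8, 10.5, 10.1, 9.9, 10.3};
        double ys[] = {10.9, 8.7, 11.6, 9.1, 12.0, 8.5};
        const int n = 6, m = 6;

        ap::real_1d_array x, y;
        x.setbounds(0, n-1);
        y.setbounds(0, m-1);
        for(int i = 0; i < n; i++) x(i) = xs[i];
        for(int i = 0; i < m; i++) y(i) = ys[i];

        double bothtails, lefttail, righttail;
        ftest(x, n, y, m, bothtails, lefttail, righttail);

        printf("two-tailed p = %.4f\n", bothtails);   // reject H0 if below the significance level
        return 0;
    }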
onesamplevariancetest function
/*************************************************************************
One-sample chi-square test

This test checks three hypotheses about the dispersion of the given sample.
The following tests are performed:
    * two-tailed test (null hypothesis - the dispersion equals the given
      number)
    * left-tailed test (null hypothesis - the dispersion is greater than
      or equal to the given number)
    * right-tailed test (null hypothesis - dispersion is less than or
      equal to the given number).

Test is based on the following assumptions:
    * the given sample has a normal distribution.

Input parameters:
    X           -   sample 1. Array whose index goes from 0 to N-1.
    N           -   size of the sample.
    Variance    -   dispersion value to compare with.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

  -- ALGLIB --
     Copyright 19.09.2006 by Bochkanov Sergey
*************************************************************************/
void onesamplevariancetest(const ap::real_1d_array& x,
     int n,
     double variance,
     double& bothtails,
     double& lefttail,
     double& righttail);
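The one-sample variance test is called the same way, with the hypothesized dispersion passed as a scalar. A brief sketch under the same assumptions as above (made-up data, variancetests.h):

    #include <cstdio>
    #include "variancetests.h"   // assumed unit header providing onesamplevariancetest()

    int main()
    {
        // Test H0: dispersion = 1.0 for a single normally distributed sample.
        double data[] = {5.3, 4.1, 6.2, 5.8, 3.9, 5.1, 4.7, 6.0};
        const int n = 8;

        ap::real_1d_array x;
        x.setbounds(0, n-1);
        for(int i = 0; i < n; i++) x(i) = data[i];

        double bothtails, lefttail, righttail;
        onesamplevariancetest(x, n, 1.0, bothtails, lefttail, righttail);

        printf("two-tailed p = %.4f\n", bothtails);
        return 0;
    }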
wsr unit

wilcoxonsignedranktest function
/*************************************************************************
Wilcoxon signed-rank test

This test checks three hypotheses about the median of the given sample.
The following tests are performed:
    * two-tailed test (null hypothesis - the median is equal to the given
      value)
    * left-tailed test (null hypothesis - the median is greater than or
      equal to the given value)
    * right-tailed test (null hypothesis - the median is less than or
      equal to the given value)

Requirements:
    * the scale of measurement should be ordinal, interval or ratio (i.e.
      the test could not be applied to nominal variables).
    * the distribution should be continuous and symmetric relative to its
      median.
    * number of distinct values in the X array should be greater than 4

The test is non-parametric and doesn't require distribution X to be normal.

Input parameters:
    X       -   sample. Array whose index goes from 0 to N-1.
    N       -   size of the sample.
    Median  -   assumed median value.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

To calculate p-values, a special approximation is used. This method lets
us calculate p-values with two decimal places in the interval [0.0001, 1].

"Two decimal places" does not sound very impressive, but in practice the
relative error of less than 1% is enough to make a decision.

There is no approximation outside the [0.0001, 1] interval. Therefore, if
the significance level lies outside this interval, the test returns 0.0001.

  -- ALGLIB --
     Copyright 08.09.2006 by Bochkanov Sergey
*************************************************************************/
void wilcoxonsignedranktest(ap::real_1d_array x,
     int n,
     double e,
     double& bothtails,
     double& lefttail,
     double& righttail);
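A minimal usage sketch for the signed-rank test, with a made-up sample containing more than 4 distinct values; wsr.h and the ap::real_1d_array interface are assumed from ALGLIB 2.x conventions.

    #include <cstdio>
    #include "wsr.h"   // assumed unit header providing wilcoxonsignedranktest()

    int main()
    {
        // Test H0: median = 10 for a small sample with more than 4 distinct values.
        double data[] = {8.2, 9.7, 10.4, 11.1, 12.3, 9.1, 10.9, 13.0};
        const int n = 8;

        ap::real_1d_array x;
        x.setbounds(0, n-1);
        for(int i = 0; i < n; i++)
            x(i) = data[i];

        double bothtails, lefttail, righttail;
        wilcoxonsignedranktest(x, n, 10.0, bothtails, lefttail, righttail);

        printf("two-tailed p = %.4f\n", bothtails);   // reject H0 if below the significance level
        return 0;
    }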