major changes in docs, GKL iterator and svdsolve based thereon
Jutho committed Aug 31, 2018
1 parent 0ff261b commit 7e61fa0
Showing 29 changed files with 935 additions and 239 deletions.
2 changes: 0 additions & 2 deletions .gitignore
@@ -2,5 +2,3 @@
*.jl.*.cov
*.jl.mem
.DS_Store
docs/build/
docs/site/
2 changes: 2 additions & 0 deletions docs/.gitignore
@@ -0,0 +1,2 @@
build/
site/
28 changes: 22 additions & 6 deletions docs/make.jl
@@ -1,13 +1,29 @@
using Documenter, KrylovKit
using Documenter
using KrylovKit

makedocs(modules=[KrylovKit],
format=:html,
sitename="KrylovKit.jl",
pages = [
"Home" => "index.md",
"Linear systems and least square problems" => "linear.md",
"Eigenvalues and singular values" => "eigsvd.md",
"Matrix functions" => "matfun.md",
"Available algorithms" => "algorithms.md",
"Implementation" => "implementation.md"
"Manual" => [
"man/linear.md",
"man/eig.md",
"man/svd.md",
"man/matfun.md",
"man/algorithms.md",
"man/implementation.md"
]
# "Linear systems" => "linear.md",
# "Eigenvalues and singular values" => "eigsvd.md",
# "Matrix functions" => "matfun.md",
# "Available algorithms" => "algorithms.md",
# "Implementation" => "implementation.md"
])

# Documenter can also automatically deploy documentation to gh-pages.
# See "Hosting Documentation" and deploydocs() in the Documenter manual
# for more information.
#=deploydocs(
repo = "<repository url>"
)=#
27 changes: 27 additions & 0 deletions docs/mkdocs.yml
@@ -0,0 +1,27 @@
# See the mkdocs user guide for more information on these settings.
# http://www.mkdocs.org/user-guide/configuration/

site_name: KrylovKit.jl
#repo_url: https://github.com/USER_NAME/PACKAGE_NAME.jl
#site_description: Description...
#site_author: USER_NAME

theme: readthedocs

extra_css:
- assets/Documenter.css

extra_javascript:
- https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_HTML
- assets/mathjaxhelper.js

markdown_extensions:
- extra
- tables
- fenced_code
- mdx_math

docs_dir: 'build'

pages:
- Home: index.md
5 changes: 0 additions & 5 deletions docs/src/algorithms.md

This file was deleted.

15 changes: 0 additions & 15 deletions docs/src/eigsvd.md

This file was deleted.

5 changes: 0 additions & 5 deletions docs/src/implementation.md

This file was deleted.

55 changes: 29 additions & 26 deletions docs/src/index.md
@@ -1,16 +1,29 @@
# KrylovKit.jl Documention
# KrylovKit.jl

`KrylovKit.jl` is a Julia package that collects a number of Krylov-based algorithms for linear
problems, singular value and eigenvalue problems and the application of functions of linear
maps or operators to vectors.
A Julia package collecting a number of Krylov-based algorithms for linear problems, singular
value and eigenvalue problems and the application of functions of linear maps or operators
to vectors.

## Contens
## Package features
`KrylovKit.jl` accepts general functions or callable objects as linear maps, and general Julia
objects with vector-like behavior (see below) as vectors.

The high level interface of KrylovKit is provided by the following functions:
* [`linsolve`](@ref): solve linear systems
* [`eigsolve`](@ref): find a few eigenvalues and corresponding eigenvectors
* [`svdsolve`](@ref): find a few singular values and corresponding left and right singular vectors
* [`exponentiate`](@ref): apply the exponential of a linear map to a vector
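
As a minimal sketch of the common call pattern (the matrix, vector and time step are arbitrary illustrative values, and the `exponentiate` argument order `(A, t, x)` is an assumption here):
```julia
using KrylovKit

A = [2.0 1.0; 1.0 3.0]      # a small real symmetric matrix
b = [1.0, 0.0]

x, info = linsolve(A, b)             # iteratively solve A*x = b
y, info = exponentiate(A, 0.1, b)    # y ≈ exp(0.1*A)*b
```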

## Manual outline

```@contents
Pages = ["man/linear.md","man/eigsvd.md","man/matfun.md","man/algorithms.md","man/implementation.md"]
```

## Defining features
There are a number of packages with Krylov-based or other iterative methods, such as
## Comparison to other packages
This section could also be titled "Why did I create KrylovKit.jl?"

There are already a fair number of packages with Krylov-based or other iterative methods, such as
* [`IterativeSolvers.jl`](https://github.com/JuliaMath/IterativeSolvers.jl): part of the
[`JuliaMath`](https://github.com/JuliaMath) organisation, solves linear systems and least
square problems, eigenvalue and singular value problems
@@ -21,21 +34,14 @@ There are a number of packages with Krylov-based or other iterative methods, suc
* [`KrylovMethods.jl`](https://github.com/lruthotto/KrylovMethods.jl): specific for sparse matrices
* [`Expokit.jl`](https://github.com/acroy/Expokit.jl): application of the matrix exponential to a vector

`KrylovKit.jl` distinguishes itself from the previous packages the following two ways
`KrylovKit.jl` distinguishes itself from the previous packages in the following two ways

1. `KrylovKit` accepts general functions `f` to represent the linear map or operator that defines
1. `KrylovKit` accepts general functions to represent the linear map or operator that defines
the problem, without having to wrap them in a [`LinearMap`](https://github.com/Jutho/LinearMaps.jl)
or [`LinearOperator`](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl) type.
Of course, subtypes of `AbstractMatrix` are also supported. The linear map is always the
first argument in `linsolve(f,...)`, `eigsolve(f,...)`, `svdsolve(f,...)`, `exponentiate(f,...)`
so that Julia's `do` block construction can be used, e.g.
```julia
linsolve(...) do x
# some linear operation on x
end
```
If the first argument `f isa AbstractMatrix`, matrix vector multiplication is used, otherwise
`f` is called as a function.
Of course, subtypes of `AbstractMatrix` are also supported. If the linear map (always the first
argument) is a subtype of `AbstractMatrix`, matrix-vector multiplication is used; otherwise it
is applied as a function call.

2. `KrylovKit` does not assume that the vectors involved in the problem are actual subtypes of
`AbstractVector`. Any Julia object that behaves as a vector (in the way defined below) is
@@ -68,13 +74,10 @@ There are a number of packages with Krylov-based or other iterative methods, suc
certain type of preconditioners and solving generalized eigenvalue problems with a positive
definite matrix in the right hand side.


## Current functionality

The following algorithms are currently implemented
* `orthogonalization`: Classical & Modified Gram Schmidt, possibly with a second round or
an adaptive number of rounds of reorthogonalization.
* `linsolve`: [`GMRES`](@ref) without preconditioning
* `linsolve`: [`GMRES`](@ref)
* `eigsolve`: a Krylov-Schur algorithm (i.e. with thick restarts) for extremal eigenvalues of
normal (i.e. not generalized) eigenvalue problems, corresponding to [`Lanczos`](@ref) for
real symmetric or complex hermitian linear maps, and to [`Arnoldi`](@ref) for general linear maps.
@@ -85,18 +88,18 @@ The following algorithms are currently implemented

## Future functionality?

Below is a wish list / to-do list for the future. Any help is welcomed and appreciated.
Here follows a wish list / to-do list for the future. Any help is welcomed and appreciated.

* More algorithms, including biorthogonal methods:
- for `linsolve`: CG, MINRES, BiCG, BiCGStab, ...
- for `eigsolve`: BiLanczos, Jacobi-Davidson (?), subspace iteration (?), ...
- for `svdsolve`: Golub-Kahan-Lanczos
- for `exponentiate`: Arnoldi (currently only Lanczos supported)
* Generalized eigenvalue problems: LOPCG, EIGFP and trace minimization
* Generalized eigenvalue problems: Rayleigh quotient / trace minimization, LOPCG, EIGFP
* Least square problems
* Nonlinear eigenvalue problems
* Preconditioners
* Harmonic ritz values
* Refined Ritz vectors, harmonic Ritz values and vectors
* Support both in-place / mutating and out-of-place functions as linear maps
* Reuse memory for storing vectors when restarting algorithms
* Improved efficiency for the specific case where `x` is `Vector` (i.e. BLAS level 2 operations)
4 changes: 0 additions & 4 deletions docs/src/linear.md

This file was deleted.

27 changes: 27 additions & 0 deletions docs/src/man/algorithms.md
@@ -0,0 +1,27 @@
# Available algorithms

## Orthogonalization algorithms
```@docs
ClassicalGramSchmidt
ModifiedGramSchmidt
ClassicalGramSchmidt2
ModifiedGramSchmidt2
ClassicalGramSchmidtIR
ModifiedGramSchmidtIR
```

## General Krylov algorithms
```@docs
Lanczos
Arnoldi
```

## Specific algorithms for linear systems
```@docs
GMRES
```

## Default values
```@docs
KrylovDefaults
```
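
As a sketch of how such algorithm objects might be constructed and passed to the expert interface (the keyword names `orth`, `krylovdim`, `maxiter` and `tol` are assumed to match the docstrings above; the numerical values are arbitrary):
```julia
using KrylovKit, LinearAlgebra

A = Symmetric(randn(100, 100))
x₀ = randn(100)

# a Lanczos algorithm with full two-pass reorthogonalization and a larger Krylov subspace
alg = Lanczos(orth = ModifiedGramSchmidt2(), krylovdim = 40, maxiter = 10, tol = 1e-12)

# the algorithm object is passed as the final argument of the expert interface
vals, vecs, info = eigsolve(A, x₀, 3, :SR, alg)   # three smallest eigenvalues
```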
28 changes: 28 additions & 0 deletions docs/src/man/eig.md
@@ -0,0 +1,28 @@
# Finding eigenvalues and eigenvectors
Finding a selection of eigenvalues and corresponding (right) eigenvectors of a linear map can
be accomplished with the `eigsolve` routine:
```@docs
eigsolve
```
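
For instance, a minimal sketch of calling `eigsolve` with a plain function as the linear map (the diagonal map, starting vector and keyword values below are made up for illustration):
```julia
using KrylovKit

N = 100
d = collect(1.0:N)     # the map below acts as a diagonal matrix with entries 1, 2, ..., N

# two largest-magnitude eigenvalues of the symmetric map x -> d .* x
vals, vecs, info = eigsolve(x -> d .* x, rand(N), 2, :LM; issymmetric = true)

info.converged >= 2 || @warn "not all requested eigenvalues converged"
vals[1]                # ≈ 100.0
```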

For a general matrix, eigenvalues and eigenvectors will always be returned with complex values
for reasons of type stability. However, if the linear map and initial guess are real, most of
the computation is actually performed using real arithmetic, as in fact the first step is to
compute an approximate partial Schur factorization. If one is not interested in the eigenvectors,
one can also just compute this partial Schur factorization using `schursolve`.
```@docs
schursolve
```
Note that, for symmetric or hermitian linear maps, the eigenvalue and Schur factorization are
equivalent, and one can only use `eigsolve`.

Another possible use case for `schursolve` is when the linear map is known to have a unique
eigenvalue of, e.g., largest magnitude. Then, if the linear map is real-valued, that largest
magnitude eigenvalue and its corresponding eigenvector are also real-valued. `eigsolve`
will nonetheless return complex-valued eigenvectors for reasons of type stability. However,
as the first Schur vector will coincide with the first eigenvector, one can instead use
```julia
T, vecs, vals, info = schursolve(A, x₀, 1, :LM, Arnoldi(...))
```
and use `vecs[1]` as the real-valued eigenvector (after checking `info.converged`) corresponding
to the largest magnitude eigenvalue of `A`.
39 changes: 39 additions & 0 deletions docs/src/man/implementation.md
@@ -0,0 +1,39 @@
# Details of the implementation

## Krylov factorizations
The central ingredient in a Krylov based algorithm is a Krylov factorization or decomposition
of a linear map. Such partial factorizations are represented as a `KrylovFactorization`, of
which `LanczosFactorization` and `ArnoldiFactorization` are two concrete implementations:
```@docs
KrylovKit.KrylovFactorization
```
A `KrylovFactorization` can be destructured into its defining components using iteration, but
these can also be accessed using the following functions:
```@docs
basis
rayleighquotient
residual
normres
rayleighextension
```

## Krylov iterators
Given a linear map ``A`` and a starting vector ``x₀``, a Krylov factorization is obtained by sequentially
building a Krylov subspace ``{x₀, A x₀, A² x₀, ...}``. Rather than using this set of vectors
as a basis, an orthonormal basis is generated by a process known as Lanczos or Arnoldi iteration
(for symmetric/hermitian and for general matrices, respectively). These processes are represented
as iterators in Julia:
```@docs
KrylovKit.KrylovIterator
```
The following functions can be used to manipulate a `KrylovFactorization` obtained from such a
`KrylovIterator`:

```@docs
expand!
shrink!
initialize!
initialize
```
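
A rough sketch of how such an iterator might be driven by hand (assuming the concrete iterator for symmetric/hermitian maps is called `LanczosIterator`, and that stopping on `normres` is a sensible criterion; the high-level routines add restarting and convergence logic on top of this):
```julia
using KrylovKit, LinearAlgebra

# build a Lanczos factorization of A starting from v₀, one expansion step at a time
function build_lanczos(A, v₀; maxdim = 30, tol = 1e-12)
    iter = LanczosIterator(A, v₀)
    fact = initialize(iter)          # Krylov factorization of dimension 1
    for _ in 2:maxdim
        normres(fact) <= tol && break
        fact = expand!(iter, fact)   # grow the factorization by one basis vector
    end
    return fact
end

fact = build_lanczos(Symmetric(randn(50, 50)), randn(50))
V = basis(fact)               # orthonormal Krylov basis
T = rayleighquotient(fact)    # projected (tridiagonal) matrix in that basis
β = normres(fact)             # norm of the current residual
```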

## Orthogonalization
32 changes: 32 additions & 0 deletions docs/src/man/introduction.md
@@ -0,0 +1,32 @@
## Introduction
The high level interface of KrylovKit is provided by the following functions:
* [`linsolve`](@ref): solve linear systems
* [`eigsolve`](@ref): find a few eigenvalues and corresponding eigenvectors
* [`svdsolve`](@ref): find a few singular values and corresponding left and right singular vectors
* [`exponentiate`](@ref): apply the exponential of a linear map to a vector

They all follow the standard format
```julia
results..., info = problemsolver(A, args...; kwargs...)
```
where `problemsolver` is one of the functions above. Here, `A` is the linear map in the problem,
which could be an instance of `AbstractMatrix`, or any function or callable object that encodes
the action of the linear map on a vector. In particular, one can write the linear map using
Julia's `do` block syntax as
```julia
results..., info = problemsolver(args...; kwargs...) do x
# implement linear map on x
end
```
Furthermore, `args` is a set of additional arguments to specify the problem. The keyword arguments
`kwargs` contain information about the linear map (`issymmetric`, `ishermitian`, `isposdef`) and
about the solution strategy (`tol`, `krylovdim`, `maxiter`). A suitable algorithm for the problem
is then chosen. The return value contains one or more entries that define the solution, and a final
entry `info` of type [`ConvergeInfo`](@ref) that encodes information about the solution, i.e. whether it
has converged, the residual(s) and the norm thereof, and the number of operations used.

There is also an expert interface where the user specifies the algorithm that should be used
explicitly, i.e.
```julia
results..., info = problemsolver(A, args..., algorithm)
```
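
A hedged sketch contrasting the two interfaces for a linear problem (the matrix and the numerical values are arbitrary; `GMRES` is assumed to accept the same `tol`, `krylovdim` and `maxiter` settings as keyword arguments):
```julia
using KrylovKit, LinearAlgebra

A = 2I + 0.1 * randn(100, 100)      # a well-conditioned nonsymmetric matrix
b = randn(100)

# keyword-based interface: a suitable algorithm is chosen automatically
x, info = linsolve(A, b; tol = 1e-10, krylovdim = 30, maxiter = 50)
info.converged == 1 || @warn "linear problem did not converge"

# expert interface: the algorithm is specified explicitly as the last argument
x, info = linsolve(A, b, GMRES(tol = 1e-10, krylovdim = 30, maxiter = 50))
```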
14 changes: 14 additions & 0 deletions docs/src/man/linear.md
@@ -0,0 +1,14 @@
# Solving linear systems

Linear systems are of the form `A*x=b` where `A` should be a linear map that has the same type
of output as input, i.e. the solution `x` should be of the same type as the right hand side `b`.
They can be solved using the function `linsolve`:

```@docs
linsolve
```

Currently supported algorithms are
```@docs
GMRES
```
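
A small usage sketch, with the linear map written as a function so that the solution automatically has the same type as the right hand side (the diagonal map and tolerance are arbitrary illustrative choices):
```julia
using KrylovKit

b = rand(500)

# a simple diagonal linear map given as a function: (A*x)[i] = (1 + i/100) * x[i]
x, info = linsolve(b; tol = 1e-12) do v
    v .+ ((1:500) ./ 100) .* v
end

info.converged == 1     # 1 if the requested tolerance was reached
```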
File renamed without changes.
4 changes: 4 additions & 0 deletions docs/src/man/svd.md
@@ -0,0 +1,4 @@
# Finding singular values and singular vectors
```@docs
svdsolve
```
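
A minimal usage sketch, assuming the call pattern `svdsolve(A, x₀, howmany, which)` analogous to `eigsolve`, a return order `(vals, lvecs, rvecs, info)`, and that `:LR` selects the largest singular values:
```julia
using KrylovKit

A = randn(150, 150)
x₀ = randn(150)

# three largest singular values and the corresponding left/right singular vectors
vals, lvecs, rvecs, info = svdsolve(A, x₀, 3, :LR)

info.converged >= 3     # true once all three requested triples have converged
```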