licenses (sequence, lengths 1-3) | version (string, 677 classes) | tree_hash (string, length 40) | path (string, 1 class) | type (string, 2 classes) | size (string, lengths 2-8) | text (string, lengths 25-67.1M) | package_name (string, lengths 2-41) | repo (string, lengths 33-86)
---|---|---|---|---|---|---|---|---
["MIT"] | 0.1.0 | 81fba898c4e80b8549f5348b74d08197a04afc85 | docs | 1037 | ## Element Property Databases
These are curated element database files from various materials property repositories. The original files can be found here:
https://github.com/anthony-wang/BestPractices/tree/master/notebooks/CBFV/cbfv/element_properties
taken from commit:
https://github.com/anthony-wang/BestPractices/commit/e297c67b4f9a1854b9b5c02ef8a85510f7e587fd
## Description
- `jarvis.csv` : [Joint Automated Repository for Various Integrated Simulations (JARVIS), provided by the U.S. National Institute of Standards and Technology.](https://jarvis.nist.gov/)
- `magpie.csv` : [Materials Agnostic Platform for Informatics and Exploration](https://bitbucket.org/wolverton/magpie/src/master/)
- `mat2vec.csv` : [Word embeddings capture latent knowledge from materials science](https://github.com/materialsintelligence/mat2vec)
- `oliynyk.csv` : Database from A. Oliynyk.
- `onehot.csv`: Simple one hot encoding scheme, i.e., diagonal elemental matrix.
- `random_200.csv`: 200 random elemental properties (presumably; the original source is not documented) | CBFV | https://github.com/JuliaMatSci/CBFV.jl.git |
["MIT"] | 0.1.0 | 81fba898c4e80b8549f5348b74d08197a04afc85 | docs | 134 | ```@meta
CurrentModule = CBFV
```
# CBFV
```@index
```
```@autodocs
Modules = [CBFV]
Private = false
Order = [:function,:type]
```
| CBFV | https://github.com/JuliaMatSci/CBFV.jl.git |
["MIT"] | 0.1.0 | 81fba898c4e80b8549f5348b74d08197a04afc85 | docs | 1095 | # Examples
The example below uses the default `oliynyk` element database:
```@example
using DataFrames
using CBFV
d = DataFrame(:formula=>["Tc1V1","Cu1Dy1","Cd3N2"],:target=>[248.539,66.8444,91.5034])
generatefeatures(d)
```
Now try the same data with the `jarvis` database:
```@example
using DataFrames #hide
using CBFV #hide
d = DataFrame(:formula=>["Tc1V1","Cu1Dy1","Cd3N2"],:target=>[248.539,66.8444,91.5034]) #hide
generatefeatures(d,elementdata="jarvis")
```
Another example, where the input columns are renamed to the required `formula` and `target` names:
```@example
using DataFrames
using CBFV
data = DataFrame("name"=>["Rb2Te","CdCl2","LaN"],"bandgap_eV"=>[1.88,3.51,1.12])
rename!(data,Dict("name"=>"formula","bandgap_eV"=>"target"))
features = generatefeatures(data)
```
Here is an example with an existing feature combined with the generated features:
```@example
using DataFrames
using CBFV
data = DataFrame(:formula=>["B2O3","Be1I2","Be1F3Li1"],
:temperature=>[1400.00,1200.0,1100.00],
:heat_capacity=>[89.115,134.306,192.464])
rename!(data,Dict(:heat_capacity=>:target))
features = generatefeatures(data,combine=true)
``` | CBFV | https://github.com/JuliaMatSci/CBFV.jl.git |
["MIT"] | 0.1.0 | 81fba898c4e80b8549f5348b74d08197a04afc85 | docs | 4943 | # CBFV.jl : A simple composition-based feature vectorization utility in Julia
[](https://juliamatsci.github.io/CBFV.jl/stable) [](https://juliamatsci.github.io/CBFV.jl/dev) [](https://github.com/JuliaMatSci/CBFV.jl/actions) [](https://travis-ci.com/JuliaMatSci/CBFV.jl) [](https://codecov.io/gh/JuliaMatSci/CBFV.jl)
This is a Julia rewrite of the [python tool](https://github.com/kaaiian/CBFV) to create a composition-based feature vector representation for machine learning with materials science data. The ideas and methodology are discussed in the recent article:
>Wang, Anthony Yu-Tung; Murdock, Ryan J.; Kauwe, Steven K.; Oliynyk, Anton O.; Gurlo, Aleksander; Brgoch, Jakoah; Persson, Kristin A.; Sparks, Taylor D., [Machine Learning for Materials Scientists: An Introductory Guide toward Best Practices](https://doi.org/10.1021/acs.chemmater.0c01907), *Chemistry of Materials* **2020**, *32 (12)*: 4954–4965. DOI: [10.1021/acs.chemmater.0c01907](https://doi.org/10.1021/acs.chemmater.0c01907).
and the original python source code(s) can be found here:
- [https://github.com/anthony-wang/BestPractices/tree/master/notebooks/CBFV](https://github.com/anthony-wang/BestPractices/tree/master/notebooks/CBFV)
- [https://github.com/kaaiian/CBFV](https://github.com/kaaiian/CBFV)
## Example Use
The input data set should have at least two columns with the header/names `formula` and `target`.
```@example
using DataFrames
using CBFV
data = DataFrame("name"=>["Rb2Te","CdCl2","LaN"],"bandgap_eV"=>[1.88,3.51,1.12])
rename!(data,Dict("name"=>"formula","bandgap_eV"=>"target"))
features = generatefeatures(data)
```
Note that you will most likely still want to post-process the generated feature data using some transformation to scale it. The [StatsBase.jl](https://juliastats.org/StatsBase.jl/stable/transformations/) package provides some basic features for this, although the input needs to be an `AbstractMatrix{<:Real}` rather than a `DataFrame`. This can be achieved using `generatefeatures(data, returndataframe=false)`.
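For instance, here is a minimal sketch of z-score scaling the generated features with StatsBase (that `returndataframe = false` yields a plain numeric matrix is an assumption here, not a documented guarantee):
```julia
using DataFrames
using CBFV
using StatsBase

data = DataFrame("formula" => ["Rb2Te", "CdCl2", "LaN"], "target" => [1.88, 3.51, 1.12])
# Assumption: with returndataframe = false the features come back as a numeric
# matrix with one row per formula, suitable for StatsBase transformations.
X = generatefeatures(data, returndataframe = false)
X_scaled = standardize(ZScoreTransform, Float64.(X), dims = 1) # scale each feature column
```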
## Supported Featurization Schemes
As with the original CBFV python package, the following element databases are available:
- `oliynyk` (default): Database from A. Oliynyk.
- `magpie`: [Materials Agnostic Platform for Informatics and Exploration](https://bitbucket.org/wolverton/magpie/src/master/)
- `mat2vec`: [Word embeddings capture latent knowledge from materials science](https://github.com/materialsintelligence/mat2vec)
- `jarvis`: [Joint Automated Repository for Various Integrated Simulations provided by U.S. National Institutes of Standards and Technologies.](https://jarvis.nist.gov/)
- `onehot`: Simple one hot encoding scheme, i.e., diagonal elemental matrix.
- `random_200`: 200 random elemental properties (presumably; source not documented).
However, `CBFV.jl` also lets you provide your own element database to featurize with. Note that the current implementation reads the saved `.csv` files in [`databases`](@ref), which is prone to potential issues (e.g., out-of-date files). To alleviate this, the implementation will be changed to use `Pkg.Artifacts` with an `Artifacts.toml` file, enabling the data files to be fetched from a server if they are not already present locally.
### Julia Dependencies
This is a relatively small package so there aren't a lot of dependencies. The required packages are:
- CSV
- DataFrames
- ProgressBars
## Citations
Please cite the following if you use this package in your work:
```bibtex
@misc{CBFV.jl,
author = {Bringuier, Stefan},
year = {2021},
title = {CBFV.jl - A simple composition based feature vectorization Julia utility},
url = {https://github.com/JuliaMatSci/CBFV.jl},
}
```
Please also consider citing the original python implementation and the tutorial paper.
```bibtex
@misc{CBFV,
author = {Kauwe, Steven and Wang, Anthony Yu-Tung and Falkowski, Andrew},
title = {CBFV: Composition-based feature vectors},
url = {https://github.com/kaaiian/CBFV}
}
```
```bibtex
@article{Wang2020bestpractices,
author = {Wang, Anthony Yu-Tung and Murdock, Ryan J. and Kauwe, Steven K. and Oliynyk, Anton O. and Gurlo, Aleksander and Brgoch, Jakoah and Persson, Kristin A. and Sparks, Taylor D.},
year = {2020},
title = {Machine Learning for Materials Scientists: An Introductory Guide toward Best Practices},
url = {https://doi.org/10.1021/acs.chemmater.0c01907},
pages = {4954--4965},
volume = {32},
number = {12},
issn = {0897-4756},
journal = {Chemistry of Materials},
doi = {10.1021/acs.chemmater.0c01907}
}
``` | CBFV | https://github.com/JuliaMatSci/CBFV.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 808 | using Documenter, DataInterpolations
cp("./docs/Manifest.toml", "./docs/src/assets/Manifest.toml", force = true)
cp("./docs/Project.toml", "./docs/src/assets/Project.toml", force = true)
ENV["GKSwstype"] = "100" # headless GR workstation type so plots render in CI doc builds
makedocs(modules = [DataInterpolations],
sitename = "DataInterpolations.jl",
clean = true,
doctest = false,
linkcheck = true,
format = Documenter.HTML(assets = ["assets/favicon.ico"],
canonical = "https://docs.sciml.ai/DataInterpolations/stable/"),
pages = ["index.md", "Methods" => "methods.md",
"Interface" => "interface.md", "Using with Symbolics/ModelingToolkit" => "symbolics.md",
"Manual" => "manual.md", "Inverting Integrals" => "inverting_integrals.md"])
deploydocs(repo = "github.com/SciML/DataInterpolations.jl"; push_preview = true)
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 4153 | module DataInterpolationsChainRulesCoreExt
if isdefined(Base, :get_extension)
using DataInterpolations: _interpolate, derivative, AbstractInterpolation,
LinearInterpolation, QuadraticInterpolation,
LagrangeInterpolation, AkimaInterpolation,
BSplineInterpolation, BSplineApprox, get_idx, get_parameters,
_quad_interp_indices, munge_data
using ChainRulesCore
else
using ..DataInterpolations: _interpolate, derivative, AbstractInterpolation,
LinearInterpolation, QuadraticInterpolation,
LagrangeInterpolation, AkimaInterpolation,
BSplineInterpolation, BSplineApprox, get_parameters,
_quad_interp_indices, munge_data
using ..ChainRulesCore
end
function ChainRulesCore.rrule(::typeof(munge_data), u, t)
u_out, t_out = munge_data(u, t)
# Modifications of the data by munge_data are not currently supported
@assert (u == u_out && t == t_out)
munge_data_pullback = Δ -> (NoTangent(), Δ[1], Δ[2])
(u_out, t_out), munge_data_pullback
end
function ChainRulesCore.rrule(
::Type{LinearInterpolation}, u, t, I, p, extrapolate, cache_parameters)
A = LinearInterpolation(u, t, I, p, extrapolate, cache_parameters)
function LinearInterpolation_pullback(ΔA)
df = NoTangent()
du = ΔA.u
dt = NoTangent()
dI = NoTangent()
dp = NoTangent()
dextrapolate = NoTangent()
dcache_parameters = NoTangent()
df, du, dt, dI, dp, dextrapolate, dcache_parameters
end
A, LinearInterpolation_pullback
end
function ChainRulesCore.rrule(
::Type{QuadraticInterpolation}, u, t, I, p, mode, extrapolate, cache_parameters)
A = QuadraticInterpolation(u, t, I, p, mode, extrapolate, cache_parameters)
function QuadraticInterpolation_pullback(ΔA)
df = NoTangent()
du = ΔA.u
dt = NoTangent()
dI = NoTangent()
dp = NoTangent()
dmode = NoTangent()
dextrapolate = NoTangent()
dcache_parameters = NoTangent()
df, du, dt, dI, dp, dmode, dextrapolate, dcache_parameters
end
A, QuadraticInterpolation_pullback
end
function u_tangent(A::LinearInterpolation, t, Δ)
out = zero.(A.u)
idx = get_idx(A, t, A.iguesser)
t_factor = (t - A.t[idx]) / (A.t[idx + 1] - A.t[idx])
if eltype(out) <: Number
out[idx] = Δ * (one(eltype(out)) - t_factor)
out[idx + 1] = Δ * t_factor
else
@. out[idx] = Δ * (true - t_factor)
@. out[idx + 1] = Δ * t_factor
end
out
end
function u_tangent(A::QuadraticInterpolation, t, Δ)
out = zero.(A.u)
i₀, i₁, i₂ = _quad_interp_indices(A, t, A.iguesser)
t₀ = A.t[i₀]
t₁ = A.t[i₁]
t₂ = A.t[i₂]
Δt₀ = t₁ - t₀
Δt₁ = t₂ - t₁
Δt₂ = t₂ - t₀
if eltype(out) <: Number
out[i₀] = Δ * (t - A.t[i₁]) * (t - A.t[i₂]) / (Δt₀ * Δt₂)
out[i₁] = -Δ * (t - A.t[i₀]) * (t - A.t[i₂]) / (Δt₀ * Δt₁)
out[i₂] = Δ * (t - A.t[i₀]) * (t - A.t[i₁]) / (Δt₂ * Δt₁)
else
@. out[i₀] = Δ * (t - A.t[i₁]) * (t - A.t[i₂]) / (Δt₀ * Δt₂)
@. out[i₁] = -Δ * (t - A.t[i₀]) * (t - A.t[i₂]) / (Δt₀ * Δt₁)
@. out[i₂] = Δ * (t - A.t[i₀]) * (t - A.t[i₁]) / (Δt₂ * Δt₁)
end
out
end
function u_tangent(A, t, Δ)
NoTangent()
end
function ChainRulesCore.rrule(::typeof(_interpolate),
A::Union{
LinearInterpolation,
QuadraticInterpolation,
LagrangeInterpolation,
AkimaInterpolation,
BSplineInterpolation,
BSplineApprox
},
t::Number)
deriv = derivative(A, t)
function interpolate_pullback(Δ)
(NoTangent(), Tangent{typeof(A)}(; u = u_tangent(A, t, Δ)), sum(deriv .* Δ))
end
return _interpolate(A, t), interpolate_pullback
end
function ChainRulesCore.frule((_, _, Δt), ::typeof(_interpolate), A::AbstractInterpolation,
t::Number)
return _interpolate(A, t), derivative(A, t) * Δt
end
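# Usage sketch (assumption: an AD engine such as Zygote consumes these
# ChainRulesCore rules; Zygote itself is not a dependency of this extension):
#   using DataInterpolations, Zygote
#   A = LinearInterpolation([1.0, 2.0, 4.0], [0.0, 1.0, 2.0])
#   Zygote.gradient(t -> A(t), 1.5)   # exercises the rrule for _interpolate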
end # module
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 2356 | module DataInterpolationsOptimExt
using DataInterpolations
import DataInterpolations: munge_data,
Curvefit, CurvefitCache, _interpolate, get_show, derivative,
ExtrapolationError,
integral, IntegralNotFoundError, DerivativeNotFoundError
isdefined(Base, :get_extension) ? (using Optim, ForwardDiff) :
(using ..Optim, ..ForwardDiff)
### Curvefit
function Curvefit(u,
t,
model,
p0,
alg,
box = false,
lb = nothing,
ub = nothing;
extrapolate = false)
u, t = munge_data(u, t)
errfun(t, u, p) = sum(abs2.(u .- model(t, p)))
if box == false
mfit = optimize(p -> errfun(t, u, p), p0, alg)
else
if lb === nothing || ub === nothing
error("both lower and upper bounds (`lb`, `ub`) must be provided when `box = true`")
end
od = OnceDifferentiable(p -> errfun(t, u, p), p0, autodiff = :finite)
mfit = optimize(od, lb, ub, p0, Fminbox(alg))
end
pmin = Optim.minimizer(mfit)
CurvefitCache(u, t, model, p0, ub, lb, alg, pmin, extrapolate)
end
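# Usage sketch (assumption: illustrative model and data, with Optim loaded):
#   model(t, p) = @. p[1] * exp(-p[2] * t)
#   A = Curvefit(u, t, model, [1.0, 1.0], LBFGS())
#   A(0.5)               # evaluate the fitted model at t = 0.5
#   derivative(A, 0.5)   # first derivative via ForwardDiff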
# Curvefit
function _interpolate(A::CurvefitCache{<:AbstractVector{<:Number}},
t::Union{AbstractVector{<:Number}, Number})
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) &&
throw(ExtrapolationError())
A.m(t, A.pmin)
end
function _interpolate(A::CurvefitCache{<:AbstractVector{<:Number}},
t::Union{AbstractVector{<:Number}, Number},
i)
_interpolate(A, t), i
end
function derivative(A::CurvefitCache{<:AbstractVector{<:Number}},
t::Union{AbstractVector{<:Number}, Number}, order = 1)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
order > 2 && throw(DerivativeNotFoundError())
order == 1 && return ForwardDiff.derivative(x -> A.m(x, A.pmin), t)
return ForwardDiff.derivative(t -> ForwardDiff.derivative(x -> A.m(x, A.pmin), t), t)
end
function get_show(A::CurvefitCache)
return "Curvefit" *
" with $(length(A.t)) points, using $(nameof(typeof(A.alg)))\n"
end
function integral(A::CurvefitCache{<:AbstractVector{<:Number}}, t::Number)
throw(IntegralNotFoundError())
end
function integral(A::CurvefitCache{<:AbstractVector{<:Number}}, t1::Number, t2::Number)
throw(IntegralNotFoundError())
end
end # module
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 11684 | module DataInterpolationsRegularizationToolsExt
using DataInterpolations
import DataInterpolations: munge_data,
_interpolate, RegularizationSmooth, get_show, derivative,
integral
using LinearAlgebra
isdefined(Base, :get_extension) ? (import RegularizationTools as RT) :
(import ..RegularizationTools as RT)
# TODO:
# x midpoint rule
# x scattered/interpolation
# x GCV
# x L-curve
# - bounds on λ
# - initial guess for λ? will require mods to RegularizationTools
# - scaled λ? will need to work out equivalency with λ² formulation, and resolve with
# derivative rather than difference matrix
# - optimize λ via standard deviation?
# - relative weights?
# - arbitrary weighting -- implemented but not yet tested
# - midpoint rule with scattered?
# x add argument types for `RegularizationSmooth` constructor methods (why isn't this done
# for the other interpolators?)
# - make use of `munge_data` features (allow for matrix rather than vector u & t arguments?)
# - validate data and t̂
# x unit tests
const LA = LinearAlgebra
"""
# Arguments
- `u::Vector`: dependent data.
- `t::Vector`: independent data.
# Optional Arguments
- `t̂::Vector`: t-values to use for the smooth curve (useful when data has missing values or
is "scattered"); if not provided, then `t̂ = t`; must be monotonically
increasing.
- `wls::{Vector,Symbol}`: weights to use with the least-squares fitting term; if set to
`:midpoint`, then midpoint-rule integration weights are used for
_both_ `wls` and `wr`.
- `wr::Vector`: weights to use with the roughness term.
- `d::Int = 2`: derivative used to calculate roughness; e.g., when `d = 2`, the 2nd
derivative (i.e. the curvature) of the data is used to calculate roughness.
# Keyword Arguments
- `λ::{Number,Tuple} = 1.0`: regularization parameter; larger values result in a smoother
curve; the provided value is used directly when `alg = :fixed`;
otherwise it is used as an initial guess for the optimization
method, or as bounds if a 2-tuple is provided (TBD)
- `alg::Symbol = :gcv_svd`: algorithm for determining an optimal value for λ; the provided λ
value is used directly if `alg = :fixed`; otherwise `alg = [:gcv_svd, :gcv_tr, :L_curve]` is passed to the
RegularizationTools solver.
- `extrapolate::Bool` = false: flag to allow extrapolating outside the range of the time points provided.
## Example Constructors
Smoothing using all arguments
```julia
A = RegularizationSmooth(u, t, t̂, wls, wr, d; λ = 1.0, alg = :gcv_svd)
```
"""
function RegularizationSmooth(u::AbstractVector, t::AbstractVector, t̂::AbstractVector,
wls::AbstractVector, wr::AbstractVector, d::Int = 2;
λ::Real = 1.0, alg::Symbol = :gcv_svd, extrapolate::Bool = false)
u, t = munge_data(u, t)
M = _mapping_matrix(t̂, t)
Wls½ = LA.diagm(sqrt.(wls))
Wr½ = LA.diagm(sqrt.(wr))
û, λ, Aitp = _reg_smooth_solve(u, t̂, d, M, Wls½, Wr½, λ, alg, extrapolate)
RegularizationSmooth(u, û, t, t̂, wls, wr, d, λ, alg, Aitp, extrapolate)
end
"""
Direct smoothing, no `t̂` or weights
```julia
A = RegularizationSmooth(u, t, d; λ = 1.0, alg = :gcv_svd, extrapolate = false)
```
"""
function RegularizationSmooth(u::AbstractVector, t::AbstractVector, d::Int = 2;
λ::Real = 1.0,
alg::Symbol = :gcv_svd, extrapolate::Bool = false)
u, t = munge_data(u, t)
t̂ = t
N = length(t)
M = Array{Float64}(LA.I, N, N)
Wls½ = Array{Float64}(LA.I, N, N)
Wr½ = Array{Float64}(LA.I, N - d, N - d)
û, λ, Aitp = _reg_smooth_solve(u, t̂, d, M, Wls½, Wr½, λ, alg, extrapolate)
RegularizationSmooth(u,
û,
t,
t̂,
LA.diag(Wls½),
LA.diag(Wr½),
d,
λ,
alg,
Aitp,
extrapolate)
end
"""
`t̂` provided, no weights
```julia
A = RegularizationSmooth(u, t, t̂, d; λ = 1.0, alg = :gcv_svd, extrapolate = false)
```
"""
function RegularizationSmooth(u::AbstractVector, t::AbstractVector, t̂::AbstractVector,
d::Int = 2; λ::Real = 1.0, alg::Symbol = :gcv_svd,
extrapolate::Bool = false)
u, t = munge_data(u, t)
N, N̂ = length(t), length(t̂)
M = _mapping_matrix(t̂, t)
Wls½ = Array{Float64}(LA.I, N, N)
Wr½ = Array{Float64}(LA.I, N̂ - d, N̂ - d)
û, λ, Aitp = _reg_smooth_solve(u, t̂, d, M, Wls½, Wr½, λ, alg, extrapolate)
RegularizationSmooth(u,
û,
t,
t̂,
LA.diag(Wls½),
LA.diag(Wr½),
d,
λ,
alg,
Aitp,
extrapolate)
end
"""
`t̂` and `wls` provided
```julia
A = RegularizationSmooth(u, t, t̂, wls, d; λ = 1.0, alg = :gcv_svd, extrapolate = false)
```
"""
function RegularizationSmooth(u::AbstractVector, t::AbstractVector, t̂::AbstractVector,
wls::AbstractVector, d::Int = 2; λ::Real = 1.0,
alg::Symbol = :gcv_svd, extrapolate::Bool = false)
u, t = munge_data(u, t)
N, N̂ = length(t), length(t̂)
M = _mapping_matrix(t̂, t)
Wls½ = LA.diagm(sqrt.(wls))
Wr½ = Array{Float64}(LA.I, N̂ - d, N̂ - d)
û, λ, Aitp = _reg_smooth_solve(u, t̂, d, M, Wls½, Wr½, λ, alg, extrapolate)
RegularizationSmooth(u,
û,
t,
t̂,
wls,
LA.diag(Wr½),
d,
λ,
alg,
Aitp,
extrapolate)
end
"""
`wls` provided, no `t̂`
```julia
A = RegularizationSmooth(
u, t, nothing, wls, d; λ = 1.0, alg = :gcv_svd, extrapolate = false)
```
"""
function RegularizationSmooth(u::AbstractVector, t::AbstractVector, t̂::Nothing,
wls::AbstractVector, d::Int = 2; λ::Real = 1.0,
alg::Symbol = :gcv_svd, extrapolate::Bool = false)
u, t = munge_data(u, t)
t̂ = t
N = length(t)
M = Array{Float64}(LA.I, N, N)
Wls½ = LA.diagm(sqrt.(wls))
Wr½ = Array{Float64}(LA.I, N - d, N - d)
û, λ, Aitp = _reg_smooth_solve(u, t̂, d, M, Wls½, Wr½, λ, alg, extrapolate)
RegularizationSmooth(u,
û,
t,
t̂,
wls,
LA.diag(Wr½),
d,
λ,
alg,
Aitp,
extrapolate)
end
"""
`wls` and `wr` provided, no `t̂`
```julia
A = RegularizationSmooth(
u, t, nothing, wls, wr, d; λ = 1.0, alg = :gcv_svd, extrapolate = false)
```
"""
function RegularizationSmooth(u::AbstractVector, t::AbstractVector, t̂::Nothing,
wls::AbstractVector, wr::AbstractVector, d::Int = 2;
λ::Real = 1.0, alg::Symbol = :gcv_svd, extrapolate::Bool = false)
u, t = munge_data(u, t)
t̂ = t
N = length(t)
M = Array{Float64}(LA.I, N, N)
Wls½ = LA.diagm(sqrt.(wls))
Wr½ = LA.diagm(sqrt.(wr))
û, λ, Aitp = _reg_smooth_solve(u, t̂, d, M, Wls½, Wr½, λ, alg, extrapolate)
RegularizationSmooth(u,
û,
t,
t̂,
wls,
LA.diag(Wr½),
d,
λ,
alg,
Aitp,
extrapolate)
end
"""
Keyword provided for `wls`, no `t̂`
```julia
A = RegularizationSmooth(
u, t, nothing, :midpoint, d; λ = 1.0, alg = :gcv_svd, extrapolate = false)
```
"""
function RegularizationSmooth(u::AbstractVector, t::AbstractVector, t̂::Nothing,
wls::Symbol, d::Int = 2; λ::Real = 1.0, alg::Symbol = :gcv_svd,
extrapolate::Bool = false)
u, t = munge_data(u, t)
t̂ = t
N = length(t)
M = Array{Float64}(LA.I, N, N)
wls, wr = _weighting_by_kw(t, d, wls)
Wls½ = LA.diagm(sqrt.(wls))
Wr½ = LA.diagm(sqrt.(wr))
û, λ, Aitp = _reg_smooth_solve(u, t̂, d, M, Wls½, Wr½, λ, alg, extrapolate)
RegularizationSmooth(u,
û,
t,
t̂,
LA.diag(Wls½),
LA.diag(Wr½),
d,
λ,
alg,
Aitp,
extrapolate)
end
# """ t̂ provided and keyword for wls _TBD_ """
# function RegularizationSmooth(u::AbstractVector, t::AbstractVector, t̂::AbstractVector,
# wls::Symbol, d::Int=2; λ::Real=1.0, alg::Symbol=:gcv_svd)
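# Usage sketch of the constructors above (assumption: synthetic noisy data):
#   t = collect(range(0.0, 4π, length = 50))
#   u = sin.(t) .+ 0.1 .* randn(50)
#   A = RegularizationSmooth(u, t; λ = 1.0, alg = :gcv_svd)
#   A(2.0)   # evaluate the smoothed interpolant at t = 2.0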
"""
Solve for the smoothed dependent variables and create spline interpolator
"""
function _reg_smooth_solve(
u::AbstractVector, t̂::AbstractVector, d::Int, M::AbstractMatrix,
Wls½::AbstractMatrix, Wr½::AbstractMatrix, λ::Real, alg::Symbol, extrapolate::Bool)
λ = float(λ) # `float` expected by RT
D = _derivative_matrix(t̂, d)
Ψ = RT.setupRegularizationProblem(Wls½ * M, Wr½ * D)
Wls½u = Wls½ * u
if alg == :fixed
b̄ = RT.to_standard_form(Ψ, Wls½u) # via b̄
ū = RT.solve(Ψ, b̄, λ)
û = RT.to_general_form(Ψ, Wls½u, ū)
else
# the provided λ (a scalar) is used as an initial guess; using bounds for Brent()
# method is TBD, JJS 12/21/21
result = RT.solve(Ψ, Wls½u; alg = alg, method = RT.NelderMead(), λ₀ = λ)
û = result.x
λ = result.λ
end
Aitp = CubicSpline(û, t̂; extrapolate)
# It seems logical to use B-Spline of order d+1, but I am unsure if theory supports the
# extra computational cost, JJS 12/25/21
#Aitp = BSplineInterpolation(û,t̂,d+1,:ArcLen,:Average)
return û, λ, Aitp
end
"""
Order d derivative matrix for the provided t vector
"""
function _derivative_matrix(t::AbstractVector, d::Int)
N = length(t)
if d == 0
return Array{Float64}(LA.I, (N, N))
end
dt = t[(d + 1):end] - t[1:(end - d)]
V = LA.diagm(1 ./ dt)
Ddm1 = diff(_derivative_matrix(t, d - 1), dims = 1)
D = d * V * Ddm1
return D
end
"""
Linear interpolation mapping matrix, which maps `û` to `u`.
"""
function _mapping_matrix(t̂::AbstractVector, t::AbstractVector)
N = length(t)
N̂ = length(t̂)
# map the scattered points to the appropriate index of the smoothed points
idx = searchsortedlast.(Ref(t̂), t)
# allow for "extrapolation"; i.e., for t̂ extrema that are interior to t
idx[idx .== 0] .+= 1
idx[idx .== N̂] .+= -1
# create the linear interpolation matrix
m2 = @. (t - t̂[idx]) / (t̂[idx + 1] - t̂[idx])
M = zeros(eltype(t), (N, N̂))
for i in 1:N
M[i, idx[i]] = 1 - m2[i]
M[i, idx[i] + 1] = m2[i]
end
return M
end
"""
Common-use weighting, currently only `:midpoint` for midpoint-rule integration
"""
function _weighting_by_kw(t::AbstractVector, d::Int, wls::Symbol)
# `:midpoint` only for now, but plan to add functionality for `:relative` weighting
N = length(t)
if wls == :midpoint
bmp = zeros(N)
bmp[1] = -t[1] + t[2]
for i in 2:(N - 1)
bmp[i] = -t[i - 1] + t[i + 1]
end
bmp[N] = -t[N - 1] + t[N]
# divide by 2 doesn't matter in the minimize step, but keeping for correctness if
# used as a template elsewhere
bmp = bmp / 2
start = floor(Int, d / 2) + 1
final = iseven(d) ? N - (start - 1) : N - start
b̃mp = bmp[start:final]
return bmp, b̃mp
else
throw(ArgumentError("unknown weighting keyword `$(wls)`; use `:midpoint`"))
end
end
function _interpolate(A::RegularizationSmooth{
<:AbstractVector{<:Number},
},
t::Number)
_interpolate(A.Aitp, t)
end
function derivative(A::RegularizationSmooth{
<:AbstractVector{<:Number},
},
t::Number, order = 1)
derivative(A.Aitp, t, order)
end
function get_show(A::RegularizationSmooth)
return "RegularizationSmooth" *
" with $(length(A.t)) points, with regularization coefficient $(A.λ)\n"
end
function integral(A::RegularizationSmooth{<:AbstractVector{<:Number}}, t::Number)
integral(A.Aitp, t)
end
function integral(A::RegularizationSmooth{<:AbstractVector{<:Number}},
t1::Number,
t2::Number)
integral(A.Aitp, t1, t2)
end
end # module
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 1094 | module DataInterpolationsSymbolicsExt
if isdefined(Base, :get_extension)
using DataInterpolations: AbstractInterpolation
import DataInterpolations: derivative
using Symbolics
using Symbolics: Num, unwrap, SymbolicUtils
else
using ..DataInterpolations: AbstractInterpolation
import ..DataInterpolations: derivative
using ..Symbolics
using ..Symbolics: Num, unwrap, SymbolicUtils
end
@register_symbolic (interp::AbstractInterpolation)(t)
Base.nameof(interp::AbstractInterpolation) = :Interpolation
function derivative(interp::AbstractInterpolation, t::Num, order = 1)
Symbolics.wrap(SymbolicUtils.term(derivative, interp, unwrap(t), order))
end
SymbolicUtils.promote_symtype(::typeof(derivative), _...) = Real
function Symbolics.derivative(::typeof(derivative), args::NTuple{3, Any}, ::Val{2})
Symbolics.unwrap(derivative(args[1], Symbolics.wrap(args[2]), args[3] + 1))
end
function Symbolics.derivative(interp::AbstractInterpolation, args::NTuple{1, Any}, ::Val{1})
Symbolics.unwrap(derivative(interp, Symbolics.wrap(args[1])))
end
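# Usage sketch (assumption: illustrative data, with Symbolics loaded):
#   using DataInterpolations, Symbolics
#   @variables τ
#   A = LinearInterpolation([1.0, 4.0, 9.0], [1.0, 2.0, 3.0])
#   ex = A(τ)                                # symbolic call, via @register_symbolic
#   expand_derivatives(Differential(τ)(ex))  # symbolic first derivative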
end # module
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 5230 | module DataInterpolations
### Interface Functionality
abstract type AbstractInterpolation{T} end
using LinearAlgebra, RecipesBase
using PrettyTables
using ForwardDiff
import FindFirstFunctions: searchsortedfirstcorrelated, searchsortedlastcorrelated,
Guesser
include("parameter_caches.jl")
include("interpolation_caches.jl")
include("interpolation_utils.jl")
include("interpolation_methods.jl")
include("plot_rec.jl")
include("derivatives.jl")
include("integrals.jl")
include("integral_inverses.jl")
include("online.jl")
include("show.jl")
(interp::AbstractInterpolation)(t::Number) = _interpolate(interp, t)
function (interp::AbstractInterpolation)(t::AbstractVector)
u = get_u(interp.u, t)
interp(u, t)
end
function get_u(u::AbstractVector, t)
return similar(t, promote_type(eltype(u), eltype(t)))
end
function get_u(u::AbstractVector{<:AbstractVector}, t)
type = promote_type(eltype(eltype(u)), eltype(t))
return [zeros(type, length(first(u))) for _ in eachindex(t)]
end
function get_u(u::AbstractMatrix, t)
type = promote_type(eltype(u), eltype(t))
return zeros(type, (size(u, 1), length(t)))
end
function (interp::AbstractInterpolation)(u::AbstractMatrix, t::AbstractVector)
@inbounds for i in eachindex(t)
u[:, i] = interp(t[i])
end
u
end
function (interp::AbstractInterpolation)(u::AbstractVector, t::AbstractVector)
@inbounds for i in eachindex(u, t)
u[i] = interp(t[i])
end
u
end
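# Usage sketch of the call overloads above (assumption: illustrative data):
#   A = LinearInterpolation([0.0, 2.0, 3.0], [0.0, 1.0, 2.0])
#   A(0.5)           # scalar evaluation
#   A([0.25, 1.5])   # allocates and fills a vector of interpolated values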
const EXTRAPOLATION_ERROR = "Cannot extrapolate because the `extrapolate` keyword was set to `false`"
struct ExtrapolationError <: Exception end
function Base.showerror(io::IO, e::ExtrapolationError)
print(io, EXTRAPOLATION_ERROR)
end
const INTEGRAL_NOT_FOUND_ERROR = "Cannot integrate this interpolation analytically. Please use numerical integration methods."
struct IntegralNotFoundError <: Exception end
function Base.showerror(io::IO, e::IntegralNotFoundError)
print(io, INTEGRAL_NOT_FOUND_ERROR)
end
const DERIVATIVE_NOT_FOUND_ERROR = "Derivatives greater than second order are not supported."
struct DerivativeNotFoundError <: Exception end
function Base.showerror(io::IO, e::DerivativeNotFoundError)
print(io, DERIVATIVE_NOT_FOUND_ERROR)
end
const INTEGRAL_INVERSE_NOT_FOUND_ERROR = "Cannot invert the integral analytically. Please use numerical methods."
struct IntegralInverseNotFoundError <: Exception end
function Base.showerror(io::IO, e::IntegralInverseNotFoundError)
print(io, INTEGRAL_INVERSE_NOT_FOUND_ERROR)
end
const INTEGRAL_NOT_INVERTIBLE_ERROR = "The interpolation is not positive everywhere, so its integral is not invertible."
struct IntegralNotInvertibleError <: Exception end
function Base.showerror(io::IO, e::IntegralNotInvertibleError)
print(io, INTEGRAL_NOT_INVERTIBLE_ERROR)
end
export LinearInterpolation, QuadraticInterpolation, LagrangeInterpolation,
AkimaInterpolation, ConstantInterpolation, QuadraticSpline, CubicSpline,
BSplineInterpolation, BSplineApprox, CubicHermiteSpline, PCHIPInterpolation,
QuinticHermiteSpline, LinearInterpolationIntInv, ConstantInterpolationIntInv
# added for RegularizationSmooth, JJS 11/27/21
### Regularization data smoothing and interpolation
struct RegularizationSmooth{uType, tType, T, T2, ITP <: AbstractInterpolation{T}} <:
AbstractInterpolation{T}
u::uType
û::uType
t::tType
t̂::tType
wls::uType
wr::uType
d::Int # derivative degree used to calculate the roughness
λ::T2 # regularization parameter
alg::Symbol # how to determine λ: `:fixed`, `:gcv_svd`, `:gcv_tr`, `:L_curve`
Aitp::ITP
extrapolate::Bool
function RegularizationSmooth(u,
û,
t,
t̂,
wls,
wr,
d,
λ,
alg,
Aitp,
extrapolate)
new{typeof(u), typeof(t), eltype(u), typeof(λ), typeof(Aitp)}(
u,
û,
t,
t̂,
wls,
wr,
d,
λ,
alg,
Aitp,
extrapolate)
end
end
export RegularizationSmooth
# CurveFit
struct CurvefitCache{
uType,
tType,
mType,
p0Type,
ubType,
lbType,
algType,
pminType,
T
} <: AbstractInterpolation{T}
u::uType
t::tType
m::mType # model type
p0::p0Type # initial params
ub::ubType # upper bound of params
lb::lbType # lower bound of params
alg::algType # alg to optimize cost function
pmin::pminType # optimized params
extrapolate::Bool
function CurvefitCache(u, t, m, p0, ub, lb, alg, pmin, extrapolate)
new{typeof(u), typeof(t), typeof(m),
typeof(p0), typeof(ub), typeof(lb),
typeof(alg), typeof(pmin), eltype(u)}(u,
t,
m,
p0,
ub,
lb,
alg,
pmin,
extrapolate)
end
end
# Define an empty function, so that it can be extended via `DataInterpolationsOptimExt`
function Curvefit()
error("CurveFit requires loading Optim and ForwardDiff, e.g. `using Optim, ForwardDiff`")
end
export Curvefit
end # module
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 6729 | function derivative(A, t, order = 1)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
iguess = A.iguesser
return if order == 1
_derivative(A, t, iguess)
elseif order == 2
ForwardDiff.derivative(t -> begin
_derivative(A, t, iguess)
end, t)
else
throw(DerivativeNotFoundError())
end
end
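# Usage sketch (assumption: A is any interpolation built elsewhere):
#   derivative(A, 1.5)      # first derivative at t = 1.5
#   derivative(A, 1.5, 2)   # second derivative, computed via ForwardDiff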
function _derivative(A::LinearInterpolation, t::Number, iguess)
idx = get_idx(A, t, iguess; idx_shift = -1, ub_shift = -1, side = :first)
slope = get_parameters(A, idx)
slope
end
function _derivative(A::QuadraticInterpolation, t::Number, iguess)
i₀, i₁, i₂ = _quad_interp_indices(A, t, iguess)
l₀, l₁, l₂ = get_parameters(A, i₀)
du₀ = l₀ * (2t - A.t[i₁] - A.t[i₂])
du₁ = l₁ * (2t - A.t[i₀] - A.t[i₂])
du₂ = l₂ * (2t - A.t[i₀] - A.t[i₁])
return @views @. du₀ + du₁ + du₂
end
function _derivative(A::LagrangeInterpolation{<:AbstractVector}, t::Number)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
der = zero(A.u[1])
for j in eachindex(A.t)
tmp = zero(A.t[1])
if isnan(A.bcache[j])
mult = one(A.t[1])
for i in 1:(j - 1)
mult *= (A.t[j] - A.t[i])
end
for i in (j + 1):length(A.t)
mult *= (A.t[j] - A.t[i])
end
A.bcache[j] = mult
else
mult = A.bcache[j]
end
for l in eachindex(A.t)
if l != j
k = one(A.t[1])
for m in eachindex(A.t)
if m != j && m != l
k *= (t - A.t[m])
end
end
k *= inv(mult)
tmp += k
end
end
der += A.u[j] * tmp
end
der
end
function _derivative(A::LagrangeInterpolation{<:AbstractMatrix}, t::Number)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
der = zero(A.u[:, 1])
for j in eachindex(A.t)
tmp = zero(A.t[1])
if isnan(A.bcache[j])
mult = one(A.t[1])
for i in 1:(j - 1)
mult *= (A.t[j] - A.t[i])
end
for i in (j + 1):length(A.t)
mult *= (A.t[j] - A.t[i])
end
A.bcache[j] = mult
else
mult = A.bcache[j]
end
for l in eachindex(A.t)
if l != j
k = one(A.t[1])
for m in eachindex(A.t)
if m != j && m != l
k *= (t - A.t[m])
end
end
k *= inv(mult)
tmp += k
end
end
der += A.u[:, j] * tmp
end
der
end
function _derivative(A::LagrangeInterpolation{<:AbstractVector}, t::Number, idx)
_derivative(A, t)
end
function _derivative(A::LagrangeInterpolation{<:AbstractMatrix}, t::Number, idx)
_derivative(A, t)
end
function _derivative(A::AkimaInterpolation{<:AbstractVector}, t::Number, iguess)
idx = get_idx(A, t, iguess; idx_shift = -1, side = :first)
j = min(idx, length(A.c)) # for smooth derivative at A.t[end]
wj = t - A.t[idx]
@evalpoly wj A.b[idx] 2A.c[j] 3A.d[j]
end
function _derivative(A::ConstantInterpolation, t::Number, iguess)
return zero(first(A.u))
end
function _derivative(A::ConstantInterpolation{<:AbstractVector}, t::Number, iguess)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
return isempty(searchsorted(A.t, t)) ? zero(A.u[1]) : eltype(A.u)(NaN)
end
function _derivative(A::ConstantInterpolation{<:AbstractMatrix}, t::Number, iguess)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
return isempty(searchsorted(A.t, t)) ? zero(A.u[:, 1]) : eltype(A.u)(NaN) .* A.u[:, 1]
end
# QuadraticSpline Interpolation
function _derivative(A::QuadraticSpline{<:AbstractVector}, t::Number, iguess)
idx = get_idx(A, t, iguess; lb = 2, ub_shift = 0, side = :first)
σ = get_parameters(A, idx - 1)
A.z[idx - 1] + 2σ * (t - A.t[idx - 1])
end
# CubicSpline Interpolation
function _derivative(A::CubicSpline{<:AbstractVector}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt₁ = t - A.t[idx]
Δt₂ = A.t[idx + 1] - t
dI = (-A.z[idx] * Δt₂^2 + A.z[idx + 1] * Δt₁^2) / (2A.h[idx + 1])
c₁, c₂ = get_parameters(A, idx)
dC = c₁
dD = -c₂
dI + dC + dD
end
function _derivative(A::BSplineInterpolation{<:AbstractVector{<:Number}}, t::Number, iguess)
# change t into param [0 1]
t < A.t[1] && return zero(A.u[1])
t > A.t[end] && return zero(A.u[end])
idx = get_idx(A, t, iguess)
n = length(A.t)
scale = (A.p[idx + 1] - A.p[idx]) / (A.t[idx + 1] - A.t[idx])
t_ = A.p[idx] + (t - A.t[idx]) * scale
N = t isa ForwardDiff.Dual ? zeros(eltype(t), n) : A.N
spline_coefficients!(N, A.d - 1, A.k, t_)
ducum = zero(eltype(A.u))
if t == A.t[1]
ducum = (A.c[2] - A.c[1]) / (A.k[A.d + 2])
else
for i in 1:(n - 1)
ducum += N[i + 1] * (A.c[i + 1] - A.c[i]) / (A.k[i + A.d + 1] - A.k[i + 1])
end
end
ducum * A.d * scale
end
# BSpline Curve Approx
function _derivative(A::BSplineApprox{<:AbstractVector{<:Number}}, t::Number, iguess)
# change t into param [0 1]
t < A.t[1] && return zero(A.u[1])
t > A.t[end] && return zero(A.u[end])
idx = get_idx(A, t, iguess)
scale = (A.p[idx + 1] - A.p[idx]) / (A.t[idx + 1] - A.t[idx])
t_ = A.p[idx] + (t - A.t[idx]) * scale
N = t isa ForwardDiff.Dual ? zeros(eltype(t), A.h) : A.N
spline_coefficients!(N, A.d - 1, A.k, t_)
ducum = zero(eltype(A.u))
if t == A.t[1]
ducum = (A.c[2] - A.c[1]) / (A.k[A.d + 2])
else
for i in 1:(A.h - 1)
ducum += N[i + 1] * (A.c[i + 1] - A.c[i]) / (A.k[i + A.d + 1] - A.k[i + 1])
end
end
ducum * A.d * scale
end
# Cubic Hermite Spline
function _derivative(
A::CubicHermiteSpline{<:AbstractVector{<:Number}}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt₀ = t - A.t[idx]
Δt₁ = t - A.t[idx + 1]
out = A.du[idx]
c₁, c₂ = get_parameters(A, idx)
out += Δt₀ * (Δt₀ * c₂ + 2(c₁ + Δt₁ * c₂))
out
end
# Quintic Hermite Spline
function _derivative(
A::QuinticHermiteSpline{<:AbstractVector{<:Number}}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt₀ = t - A.t[idx]
Δt₁ = t - A.t[idx + 1]
out = A.du[idx] + A.ddu[idx] * Δt₀
c₁, c₂, c₃ = get_parameters(A, idx)
out += Δt₀^2 *
(3c₁ + (3Δt₁ + Δt₀) * c₂ + (3Δt₁^2 + Δt₀ * 2Δt₁) * c₃)
out
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 3875 | abstract type AbstractIntegralInverseInterpolation{T} <: AbstractInterpolation{T} end
"""
invert_integral(A::AbstractInterpolation)::AbstractIntegralInverseInterpolation
Creates the inverted integral interpolation object from the given interpolation. Conditions:
- The range of `A` must be strictly positive
- `A.u` must be a number type (on which an ordering is defined)
- This is currently only supported for `ConstantInterpolation` and `LinearInterpolation`
## Arguments
- `A`: interpolation object satisfying the above requirements
"""
invert_integral(A::AbstractInterpolation) = throw(IntegralInverseNotFoundError())
_integral(A::AbstractIntegralInverseInterpolation, idx, t) = throw(IntegralNotFoundError())
function _derivative(A::AbstractIntegralInverseInterpolation, t::Number, iguess)
inv(A.itp(A(t)))
end
"""
LinearInterpolationIntInv(u, t, A)
It is the interpolation of the inverse of the integral of a `LinearInterpolation`.
Can be easily constructed with `invert_integral(A::LinearInterpolation{<:AbstractVector{<:Number}})`
## Arguments
- `u` : Given by `A.t`
- `t` : Given by `A.I` (the cumulative integral of `A`)
- `A` : The `LinearInterpolation` object
"""
struct LinearInterpolationIntInv{uType, tType, itpType, T} <:
AbstractIntegralInverseInterpolation{T}
u::uType
t::tType
extrapolate::Bool
iguesser::Guesser{tType}
itp::itpType
function LinearInterpolationIntInv(u, t, A)
new{typeof(u), typeof(t), typeof(A), eltype(u)}(
u, t, A.extrapolate, Guesser(t), A)
end
end
function invertible_integral(A::LinearInterpolation{<:AbstractVector{<:Number}})
return all(A.u .> 0)
end
get_I(A::AbstractInterpolation) = isempty(A.I) ? cumulative_integral(A, true) : A.I
function invert_integral(A::LinearInterpolation{<:AbstractVector{<:Number}})
!invertible_integral(A) && throw(IntegralNotInvertibleError())
return LinearInterpolationIntInv(A.t, get_I(A), A)
end
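# Usage sketch (note: u must be strictly positive for invertibility):
#   A = LinearInterpolation([1.0, 2.0, 3.0], [0.0, 1.0, 2.0])
#   Ainv = invert_integral(A)
#   Ainv(0.5)   # the t at which the cumulative integral of A reaches 0.5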
function _interpolate(
A::LinearInterpolationIntInv{<:AbstractVector{<:Number}}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt = t - A.t[idx]
x = A.itp.u[idx]
slope = get_parameters(A.itp, idx)
u = A.u[idx] + 2Δt / (x + sqrt(x^2 + slope * 2Δt))
u
end
"""
ConstantInterpolationIntInv(u, t, A)
It is the interpolation of the inverse of the integral of a `ConstantInterpolation`.
Can be easily constructed with `invert_integral(A::ConstantInterpolation{<:AbstractVector{<:Number}})`
## Arguments
- `u` : Given by `A.t`
- `t` : Given by `A.I` (the cumulative integral of `A`)
- `A` : The `ConstantInterpolation` object
"""
struct ConstantInterpolationIntInv{uType, tType, itpType, T} <:
AbstractIntegralInverseInterpolation{T}
u::uType
t::tType
extrapolate::Bool
iguesser::Guesser{tType}
itp::itpType
function ConstantInterpolationIntInv(u, t, A)
new{typeof(u), typeof(t), typeof(A), eltype(u)}(
u, t, A.extrapolate, Guesser(t), A
)
end
end
function invertible_integral(A::ConstantInterpolation{<:AbstractVector{<:Number}})
return all(A.u .> 0)
end
function invert_integral(A::ConstantInterpolation{<:AbstractVector{<:Number}})
!invertible_integral(A) && throw(IntegralNotInvertibleError())
return ConstantInterpolationIntInv(A.t, get_I(A), A)
end
function _interpolate(
A::ConstantInterpolationIntInv{<:AbstractVector{<:Number}}, t::Number, iguess)
idx = get_idx(A, t, iguess; ub_shift = 0)
if A.itp.dir === :left
# :left means that value to the left is used for interpolation
idx_ = get_idx(A, t, idx; lb = 1, ub_shift = 0)
else
# :right means that value to the right is used for interpolation
idx_ = get_idx(A, t, idx; side = :first, lb = 1, ub_shift = 0)
end
A.u[idx] + (t - A.t[idx]) / A.itp.u[idx_]
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 4211 | function integral(A::AbstractInterpolation, t::Number)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
integral(A, A.t[1], t)
end
function integral(A::AbstractInterpolation, t1::Number, t2::Number)
((t1 < A.t[1] || t1 > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
((t2 < A.t[1] || t2 > A.t[end]) && !A.extrapolate) && throw(ExtrapolationError())
!hasfield(typeof(A), :I) && throw(IntegralNotFoundError())
# the index less than or equal to t1
idx1 = get_idx(A, t1, 0)
# the index less than t2
idx2 = get_idx(A, t2, 0; idx_shift = -1, side = :first)
if A.cache_parameters
total = A.I[idx2] - A.I[idx1]
return if t1 == t2
zero(total)
else
total += _integral(A, idx1, A.t[idx1])
total -= _integral(A, idx1, t1)
total += _integral(A, idx2, t2)
total -= _integral(A, idx2, A.t[idx2])
total
end
else
total = zero(eltype(A.u))
for idx in idx1:idx2
lt1 = idx == idx1 ? t1 : A.t[idx]
lt2 = idx == idx2 ? t2 : A.t[idx + 1]
total += _integral(A, idx, lt2) - _integral(A, idx, lt1)
end
total
end
end
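# Usage sketch (assumption: data vectors u and t defined elsewhere):
#   A = LinearInterpolation(u, t)
#   integral(A, t[1], t[end])   # definite integral over the whole range
#   integral(A, 1.5)            # integral from t[1] up to t = 1.5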
function _integral(A::LinearInterpolation{<:AbstractVector{<:Number}},
idx::Number,
t::Number)
Δt = t - A.t[idx]
slope = get_parameters(A, idx)
Δt * (A.u[idx] + slope * Δt / 2)
end
function _integral(
A::ConstantInterpolation{<:AbstractVector{<:Number}}, idx::Number, t::Number)
if A.dir === :left
# :left means that value to the left is used for interpolation
return A.u[idx] * t
else
# :right means that value to the right is used for interpolation
return A.u[idx + 1] * t
end
end
function _integral(A::QuadraticInterpolation{<:AbstractVector{<:Number}},
idx::Number,
t::Number)
A.mode == :Backward && idx > 1 && (idx -= 1)
idx = min(length(A.t) - 2, idx)
t₀ = A.t[idx]
t₁ = A.t[idx + 1]
t₂ = A.t[idx + 2]
t_sq = (t^2) / 3
l₀, l₁, l₂ = get_parameters(A, idx)
Iu₀ = l₀ * t * (t_sq - t * (t₁ + t₂) / 2 + t₁ * t₂)
Iu₁ = l₁ * t * (t_sq - t * (t₀ + t₂) / 2 + t₀ * t₂)
Iu₂ = l₂ * t * (t_sq - t * (t₀ + t₁) / 2 + t₀ * t₁)
return Iu₀ + Iu₁ + Iu₂
end
function _integral(A::QuadraticSpline{<:AbstractVector{<:Number}}, idx::Number, t::Number)
Cᵢ = A.u[idx]
Δt = t - A.t[idx]
σ = get_parameters(A, idx)
return A.z[idx] * Δt^2 / 2 + σ * Δt^3 / 3 + Cᵢ * Δt
end
function _integral(A::CubicSpline{<:AbstractVector{<:Number}}, idx::Number, t::Number)
Δt₁sq = (t - A.t[idx])^2 / 2
Δt₂sq = (A.t[idx + 1] - t)^2 / 2
II = (-A.z[idx] * Δt₂sq^2 + A.z[idx + 1] * Δt₁sq^2) / (6A.h[idx + 1])
c₁, c₂ = get_parameters(A, idx)
IC = c₁ * Δt₁sq
ID = -c₂ * Δt₂sq
II + IC + ID
end
function _integral(A::AkimaInterpolation{<:AbstractVector{<:Number}},
idx::Number,
t::Number)
t1 = A.t[idx]
A.u[idx] * (t - t1) + A.b[idx] * ((t - t1)^2 / 2) + A.c[idx] * ((t - t1)^3 / 3) +
A.d[idx] * ((t - t1)^4 / 4)
end
_integral(A::LagrangeInterpolation, idx::Number, t::Number) = throw(IntegralNotFoundError())
_integral(A::BSplineInterpolation, idx::Number, t::Number) = throw(IntegralNotFoundError())
_integral(A::BSplineApprox, idx::Number, t::Number) = throw(IntegralNotFoundError())
# Cubic Hermite Spline
function _integral(
A::CubicHermiteSpline{<:AbstractVector{<:Number}}, idx::Number, t::Number)
Δt₀ = t - A.t[idx]
Δt₁ = t - A.t[idx + 1]
out = Δt₀ * (A.u[idx] + Δt₀ * A.du[idx] / 2)
c₁, c₂ = get_parameters(A, idx)
p = c₁ + Δt₁ * c₂
dp = c₂
out += Δt₀^3 / 3 * (p - dp * Δt₀ / 4)
out
end
# Quintic Hermite Spline
function _integral(
A::QuinticHermiteSpline{<:AbstractVector{<:Number}}, idx::Number, t::Number)
Δt₀ = t - A.t[idx]
Δt₁ = t - A.t[idx + 1]
out = Δt₀ * (A.u[idx] + A.du[idx] * Δt₀ / 2 + A.ddu[idx] * Δt₀^2 / 6)
c₁, c₂, c₃ = get_parameters(A, idx)
p = c₁ + c₂ * Δt₁ + c₃ * Δt₁^2
dp = c₂ + 2c₃ * Δt₁
ddp = 2c₃
out += Δt₀^4 / 4 * (p - Δt₀ / 5 * dp + Δt₀^2 / 30 * ddp)
out
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
["MIT"] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 34603 | """
LinearInterpolation(u, t; extrapolate = false, cache_parameters = false)
It is the method of interpolating between the data points using a linear polynomial. For any point, the two data points, one on each side, are chosen and connected with a line.
Extrapolation extends the last linear polynomial on each side.
## Arguments
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation
computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
"""
struct LinearInterpolation{uType, tType, IType, pType, T} <: AbstractInterpolation{T}
u::uType
t::tType
I::IType
p::LinearParameterCache{pType}
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function LinearInterpolation(u, t, I, p, extrapolate, cache_parameters, assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), typeof(p.slope), eltype(u)}(
u, t, I, p, extrapolate, Guesser(t), cache_parameters, linear_lookup)
end
end
function LinearInterpolation(
u, t; extrapolate = false, cache_parameters = false, assume_linear_t = 1e-2)
u, t = munge_data(u, t)
p = LinearParameterCache(u, t, cache_parameters)
A = LinearInterpolation(
u, t, nothing, p, extrapolate, cache_parameters, assume_linear_t)
I = cumulative_integral(A, cache_parameters)
LinearInterpolation(u, t, I, p, extrapolate, cache_parameters, assume_linear_t)
end
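# Usage sketch:
#   A = LinearInterpolation([0.0, 1.0, 4.0], [0.0, 1.0, 2.0]; extrapolate = true)
#   A(1.5)   # interpolated value
#   A(3.0)   # extrapolated with the last linear segment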
"""
QuadraticInterpolation(u, t, mode = :Forward; extrapolate = false, cache_parameters = false)
It is the method of interpolating between the data points using quadratic polynomials. For any point, three nearby data points are used to fit a quadratic polynomial.
Extrapolation extends the last quadratic polynomial on each side.
## Arguments
- `u`: data points.
- `t`: time points.
- `mode`: `:Forward` or `:Backward`. If `:Forward`, two data points ahead of the point and one data point behind is taken for interpolation. If `:Backward`, two data points behind and one ahead is taken for interpolation.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
"""
struct QuadraticInterpolation{uType, tType, IType, pType, T} <: AbstractInterpolation{T}
u::uType
t::tType
I::IType
p::QuadraticParameterCache{pType}
mode::Symbol
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function QuadraticInterpolation(
u, t, I, p, mode, extrapolate, cache_parameters, assume_linear_t)
mode ∈ (:Forward, :Backward) ||
error("mode should be :Forward or :Backward for QuadraticInterpolation")
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), typeof(p.l₀), eltype(u)}(
u, t, I, p, mode, extrapolate, Guesser(t), cache_parameters, linear_lookup)
end
end
function QuadraticInterpolation(
u, t, mode; extrapolate = false, cache_parameters = false, assume_linear_t = 1e-2)
u, t = munge_data(u, t)
linear_lookup = seems_linear(assume_linear_t, t)
p = QuadraticParameterCache(u, t, cache_parameters)
A = QuadraticInterpolation(
u, t, nothing, p, mode, extrapolate, cache_parameters, linear_lookup)
I = cumulative_integral(A, cache_parameters)
QuadraticInterpolation(u, t, I, p, mode, extrapolate, cache_parameters, linear_lookup)
end
function QuadraticInterpolation(u, t; kwargs...)
QuadraticInterpolation(u, t, :Forward; kwargs...)
end
"""
LagrangeInterpolation(u, t, n = length(t) - 1; extrapolate = false)
It is the method of interpolation using Lagrange polynomials of (k-1)th order that pass through all the data points, where k is the number of data points.
## Arguments
- `u`: data points.
- `t`: time points.
- `n`: order of the polynomial. Currently only (k-1)th order where k is the number of data points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
"""
struct LagrangeInterpolation{uType, tType, T, bcacheType} <:
AbstractInterpolation{T}
u::uType
t::tType
n::Int
bcache::bcacheType
idxs::Vector{Int}
extrapolate::Bool
iguesser::Guesser{tType}
function LagrangeInterpolation(u, t, n, extrapolate)
bcache = zeros(eltype(u[1]), n + 1)
idxs = zeros(Int, n + 1)
fill!(bcache, NaN)
new{typeof(u), typeof(t), eltype(u), typeof(bcache)}(u,
t,
n,
bcache,
idxs,
extrapolate,
Guesser(t)
)
end
end
function LagrangeInterpolation(
u, t, n = length(t) - 1; extrapolate = false)
u, t = munge_data(u, t)
if n != length(t) - 1
error("Currently only n=length(t) - 1 is supported")
end
LagrangeInterpolation(u, t, n, extrapolate)
end
"""
AkimaInterpolation(u, t; extrapolate = false, cache_parameters = false)
It is a spline interpolation built from cubic polynomials. It forms a continuously differentiable function. For more details, refer to [https://en.wikipedia.org/wiki/Akima_spline](https://en.wikipedia.org/wiki/Akima_spline).
Extrapolation extends the last cubic polynomial on each side.
## Arguments
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
"""
struct AkimaInterpolation{uType, tType, IType, bType, cType, dType, T} <:
AbstractInterpolation{T}
u::uType
t::tType
I::IType
b::bType
c::cType
d::dType
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function AkimaInterpolation(
u, t, I, b, c, d, extrapolate, cache_parameters, assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), typeof(b), typeof(c),
typeof(d), eltype(u)}(u,
t,
I,
b,
c,
d,
extrapolate,
Guesser(t),
cache_parameters,
linear_lookup
)
end
end
function AkimaInterpolation(
u, t; extrapolate = false, cache_parameters = false, assume_linear_t = 1e-2)
u, t = munge_data(u, t)
linear_lookup = seems_linear(assume_linear_t, t)
n = length(t)
dt = diff(t)
m = Array{eltype(u)}(undef, n + 3)
m[3:(end - 2)] = diff(u) ./ dt
m[2] = 2m[3] - m[4]
m[1] = 2m[2] - m[3]
m[end - 1] = 2m[end - 2] - m[end - 3]
m[end] = 2m[end - 1] - m[end - 2]
b = 0.5 .* (m[4:end] .+ m[1:(end - 3)])
dm = abs.(diff(m))
f1 = dm[3:(n + 2)]
f2 = dm[1:n]
f12 = f1 + f2
ind = findall(f12 .> 1e-9 * maximum(f12))
b[ind] = (f1[ind] .* m[ind .+ 1] .+
f2[ind] .* m[ind .+ 2]) ./ f12[ind]
c = (3.0 .* m[3:(end - 2)] .- 2.0 .* b[1:(end - 1)] .- b[2:end]) ./ dt
d = (b[1:(end - 1)] .+ b[2:end] .- 2.0 .* m[3:(end - 2)]) ./ dt .^ 2
A = AkimaInterpolation(
u, t, nothing, b, c, d, extrapolate, cache_parameters, linear_lookup)
I = cumulative_integral(A, cache_parameters)
AkimaInterpolation(u, t, I, b, c, d, extrapolate, cache_parameters, linear_lookup)
end
"""
ConstantInterpolation(u, t; dir = :left, extrapolate = false, cache_parameters = false)
It is the method of interpolating using a constant polynomial. For any point, the two adjacent data points, one on each side (left and right), are found; the value at that point depends on `dir`.
If `dir` is `:left`, the value at the left point is chosen; if it is `:right`, the value at the right point is chosen.
Extrapolation extends the last constant polynomial at the end points on each side.
## Arguments
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `dir`: indicates which value should be used for interpolation (`:left` or `:right`).
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
"""
struct ConstantInterpolation{uType, tType, IType, T} <: AbstractInterpolation{T}
u::uType
t::tType
I::IType
p::Nothing
dir::Symbol # indicates if value to the $dir should be used for the interpolation
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function ConstantInterpolation(
u, t, I, dir, extrapolate, cache_parameters, assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), eltype(u)}(
u, t, I, nothing, dir, extrapolate, Guesser(t), cache_parameters, linear_lookup)
end
end
function ConstantInterpolation(
u, t; dir = :left, extrapolate = false,
cache_parameters = false, assume_linear_t = 1e-2)
u, t = munge_data(u, t)
A = ConstantInterpolation(
u, t, nothing, dir, extrapolate, cache_parameters, assume_linear_t)
I = cumulative_integral(A, cache_parameters)
ConstantInterpolation(u, t, I, dir, extrapolate, cache_parameters, assume_linear_t)
end
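# Usage sketch:
#   A = ConstantInterpolation([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]; dir = :right)
#   A(0.5)   # 2.0: the value at the data point to the right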
"""
QuadraticSpline(u, t; extrapolate = false, cache_parameters = false)
It is a spline interpolation using piecewise quadratic polynomials between each pair of data points. Its first derivative is also continuous.
Extrapolation extends the last quadratic polynomial on each side.
## Arguments
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
"""
struct QuadraticSpline{uType, tType, IType, pType, tAType, dType, zType, T} <:
AbstractInterpolation{T}
u::uType
t::tType
I::IType
p::QuadraticSplineParameterCache{pType}
tA::tAType
d::dType
z::zType
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function QuadraticSpline(
u, t, I, p, tA, d, z, extrapolate, cache_parameters, assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), typeof(p.σ), typeof(tA),
typeof(d), typeof(z), eltype(u)}(u,
t,
I,
p,
tA,
d,
z,
extrapolate,
Guesser(t),
cache_parameters,
linear_lookup
)
end
end
function QuadraticSpline(
u::uType, t; extrapolate = false,
cache_parameters = false, assume_linear_t = 1e-2) where {uType <:
AbstractVector{<:Number}}
u, t = munge_data(u, t)
linear_lookup = seems_linear(assume_linear_t, t)
s = length(t)
dl = ones(eltype(t), s - 1)
d_tmp = ones(eltype(t), s)
du = zeros(eltype(t), s - 1)
tA = Tridiagonal(dl, d_tmp, du)
# zero for element type of d, which we don't know yet
typed_zero = zero(2 // 1 * (u[begin + 1] - u[begin]) / (t[begin + 1] - t[begin]))
d = map(i -> i == 1 ? typed_zero : 2 // 1 * (u[i] - u[i - 1]) / (t[i] - t[i - 1]), 1:s)
z = tA \ d
p = QuadraticSplineParameterCache(z, t, cache_parameters)
A = QuadraticSpline(
u, t, nothing, p, tA, d, z, extrapolate, cache_parameters, linear_lookup)
I = cumulative_integral(A, cache_parameters)
QuadraticSpline(u, t, I, p, tA, d, z, extrapolate, cache_parameters, linear_lookup)
end
function QuadraticSpline(
u::uType, t; extrapolate = false, cache_parameters = false,
assume_linear_t = 1e-2) where {uType <:
AbstractVector}
u, t = munge_data(u, t)
linear_lookup = seems_linear(assume_linear_t, t)
s = length(t)
dl = ones(eltype(t), s - 1)
d_tmp = ones(eltype(t), s)
du = zeros(eltype(t), s - 1)
tA = Tridiagonal(dl, d_tmp, du)
d_ = map(
i -> i == 1 ? zeros(eltype(t), size(u[1])) :
2 // 1 * (u[i] - u[i - 1]) / (t[i] - t[i - 1]),
1:s)
d = transpose(reshape(reduce(hcat, d_), :, s))
z_ = reshape(transpose(tA \ d), size(u[1])..., :)
z = [z_s for z_s in eachslice(z_, dims = ndims(z_))]
p = QuadraticSplineParameterCache(z, t, cache_parameters)
A = QuadraticSpline(
u, t, nothing, p, tA, d, z, extrapolate, cache_parameters, linear_lookup)
I = cumulative_integral(A, cache_parameters)
QuadraticSpline(u, t, I, p, tA, d, z, extrapolate, cache_parameters, linear_lookup)
end
"""
CubicSpline(u, t; extrapolate = false, cache_parameters = false)
It is a spline interpolation using piecewise cubic polynomials between each pair of data points. Its first and second derivatives are also continuous.
The second derivative is zero at both ends, the so-called "natural" boundary conditions. Extrapolation extends the last cubic polynomial on each side.
## Arguments
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
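## Example
A minimal usage sketch (sample data chosen arbitrarily):
```julia
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
A = CubicSpline(u, t)
A(0.5) # evaluate between the data points
```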
"""
struct CubicSpline{uType, tType, IType, pType, hType, zType, T} <: AbstractInterpolation{T}
u::uType
t::tType
I::IType
p::CubicSplineParameterCache{pType}
h::hType
z::zType
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function CubicSpline(u, t, I, p, h, z, extrapolate, cache_parameters, assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), typeof(p.c₁), typeof(h), typeof(z), eltype(u)}(
u,
t,
I,
p,
h,
z,
extrapolate,
Guesser(t),
cache_parameters,
linear_lookup
)
end
end
function CubicSpline(u::uType,
t;
extrapolate = false, cache_parameters = false,
assume_linear_t = 1e-2) where {uType <:
AbstractVector{<:Number}}
u, t = munge_data(u, t)
n = length(t) - 1
h = vcat(0, map(k -> t[k + 1] - t[k], 1:(length(t) - 1)), 0)
dl = vcat(h[2:n], zero(eltype(h)))
d_tmp = 2 .* (h[1:(n + 1)] .+ h[2:(n + 2)])
du = vcat(zero(eltype(h)), h[3:(n + 1)])
tA = Tridiagonal(dl, d_tmp, du)
# zero for element type of d, which we don't know yet
typed_zero = zero(6(u[begin + 2] - u[begin + 1]) / h[begin + 2] -
6(u[begin + 1] - u[begin]) / h[begin + 1])
d = map(
i -> i == 1 || i == n + 1 ? typed_zero :
6(u[i + 1] - u[i]) / h[i + 1] - 6(u[i] - u[i - 1]) / h[i],
1:(n + 1))
z = tA \ d
linear_lookup = seems_linear(assume_linear_t, t)
p = CubicSplineParameterCache(u, h, z, cache_parameters)
A = CubicSpline(
u, t, nothing, p, h[1:(n + 1)], z, extrapolate, cache_parameters, linear_lookup)
I = cumulative_integral(A, cache_parameters)
CubicSpline(u, t, I, p, h[1:(n + 1)], z, extrapolate, cache_parameters, linear_lookup)
end
function CubicSpline(
u::uType, t; extrapolate = false, cache_parameters = false,
assume_linear_t = 1e-2) where {uType <:
AbstractVector}
u, t = munge_data(u, t)
n = length(t) - 1
h = vcat(0, map(k -> t[k + 1] - t[k], 1:(length(t) - 1)), 0)
dl = vcat(h[2:n], zero(eltype(h)))
d_tmp = 2 .* (h[1:(n + 1)] .+ h[2:(n + 2)])
du = vcat(zero(eltype(h)), h[3:(n + 1)])
tA = Tridiagonal(dl, d_tmp, du)
d_ = map(
i -> i == 1 || i == n + 1 ? zeros(eltype(t), size(u[1])) :
6(u[i + 1] - u[i]) / h[i + 1] - 6(u[i] - u[i - 1]) / h[i],
1:(n + 1))
d = transpose(reshape(reduce(hcat, d_), :, n + 1))
z_ = reshape(transpose(tA \ d), size(u[1])..., :)
z = [z_s for z_s in eachslice(z_, dims = ndims(z_))]
p = CubicSplineParameterCache(u, h, z, cache_parameters)
A = CubicSpline(
u, t, nothing, p, h[1:(n + 1)], z, extrapolate, cache_parameters, assume_linear_t)
I = cumulative_integral(A, cache_parameters)
CubicSpline(u, t, I, p, h[1:(n + 1)], z, extrapolate, cache_parameters, assume_linear_t)
end
"""
    BSplineInterpolation(u, t, d, pVecType, knotVecType; extrapolate = false, assume_linear_t = 1e-2)
It is a curve defined by the linear combination of `n` basis functions of degree `d` where `n` is the number of data points. For more information, refer [https://pages.mtu.edu/~shene/COURSES/cs3621/NOTES/spline/B-spline/bspline-curve.html](https://pages.mtu.edu/%7Eshene/COURSES/cs3621/NOTES/spline/B-spline/bspline-curve.html).
Extrapolation is constant, returning the value at the nearest end point on each side.
## Arguments
- `u`: data points.
- `t`: time points.
- `d`: degree of the piecewise polynomial.
- `pVecType`: symbol to parameters vector, `:Uniform` for uniform spaced parameters and `:ArcLen` for parameters generated by chord length method.
- `knotVecType`: symbol to knot vector, `:Uniform` for uniform knot vector, `:Average` for average spaced knot vector.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
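## Example
A usage sketch with arbitrary sample data, degree 2, chord-length parameters and averaged knots:
```julia
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
A = BSplineInterpolation(u, t, 2, :ArcLen, :Average)
A(100.0)
```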
"""
struct BSplineInterpolation{uType, tType, pType, kType, cType, NType, T} <:
AbstractInterpolation{T}
u::uType
t::tType
d::Int # degree
p::pType # params vector
k::kType # knot vector
c::cType # control points
N::NType # Spline coefficients (preallocated memory)
pVecType::Symbol
knotVecType::Symbol
extrapolate::Bool
iguesser::Guesser{tType}
linear_lookup::Bool
function BSplineInterpolation(u,
t,
d,
p,
k,
c,
N,
pVecType,
knotVecType,
extrapolate,
assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(p), typeof(k), typeof(c), typeof(N), eltype(u)}(u,
t,
d,
p,
k,
c,
N,
pVecType,
knotVecType,
extrapolate,
Guesser(t),
linear_lookup
)
end
end
function BSplineInterpolation(
u, t, d, pVecType, knotVecType; extrapolate = false, assume_linear_t = 1e-2)
u, t = munge_data(u, t)
n = length(t)
n < d + 1 && error("BSplineInterpolation needs at least d + 1, i.e. $(d+1) points.")
s = zero(eltype(u))
p = zero(t)
k = zeros(eltype(t), n + d + 1)
l = zeros(eltype(u), n - 1)
p[1] = zero(eltype(t))
p[end] = one(eltype(t))
for i in 2:n
s += √((t[i] - t[i - 1])^2 + (u[i] - u[i - 1])^2)
l[i - 1] = s
end
if pVecType == :Uniform
for i in 2:(n - 1)
p[i] = p[1] + (i - 1) * (p[end] - p[1]) / (n - 1)
end
elseif pVecType == :ArcLen
for i in 2:(n - 1)
p[i] = p[1] + l[i - 1] / s * (p[end] - p[1])
end
end
lidx = 1
ridx = length(k)
while lidx <= (d + 1) && ridx >= (length(k) - d)
k[lidx] = p[1]
k[ridx] = p[end]
lidx += 1
ridx -= 1
end
ps = zeros(eltype(t), n - 2)
s = zero(eltype(t))
for i in 2:(n - 1)
s += p[i]
ps[i - 1] = s
end
if knotVecType == :Uniform
# uniformly spaced knot vector
# this method is not recommended because, if it is used with the chord length method for global interpolation,
# the system of linear equations would be singular.
for i in (d + 2):n
k[i] = k[1] + (i - d - 1) // (n - d) * (k[end] - k[1])
end
elseif knotVecType == :Average
# average spaced knot vector
idx = 1
if d + 2 <= n
k[d + 2] = 1 // d * ps[d]
end
for i in (d + 3):n
k[i] = 1 // d * (ps[idx + d] - ps[idx])
idx += 1
end
end
# control points
N = zeros(eltype(t), n, n)
spline_coefficients!(N, d, k, p)
c = vec(N \ u[:, :])
N = zeros(eltype(t), n)
BSplineInterpolation(
u, t, d, p, k, c, N, pVecType, knotVecType, extrapolate, assume_linear_t)
end
"""
    BSplineApprox(u, t, d, h, pVecType, knotVecType; extrapolate = false, assume_linear_t = 1e-2)
It is a regression based B-spline. The argument choices are the same as the `BSplineInterpolation`, with the additional parameter `h < length(t)` which is the number of control points to use, with smaller `h` indicating more smoothing.
For more information, refer [http://www.cad.zju.edu.cn/home/zhx/GM/009/00-bsia.pdf](http://www.cad.zju.edu.cn/home/zhx/GM/009/00-bsia.pdf).
Extrapolation is constant, returning the value at the nearest end point on each side.
## Arguments
- `u`: data points.
- `t`: time points.
- `d`: degree of the piecewise polynomial.
- `h`: number of control points to use.
- `pVecType`: symbol to parameters vector, `:Uniform` for uniform spaced parameters and `:ArcLen` for parameters generated by chord length method.
- `knotVecType`: symbol to knot vector, `:Uniform` for uniform knot vector, `:Average` for average spaced knot vector.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
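## Example
A usage sketch with arbitrary sample data, fitting a cubic B-spline through `h = 4` control points:
```julia
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
A = BSplineApprox(u, t, 3, 4, :Uniform, :Uniform)
A(100.0)
```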
"""
struct BSplineApprox{uType, tType, pType, kType, cType, NType, T} <:
AbstractInterpolation{T}
u::uType
t::tType
d::Int # degree
h::Int # number of control points (n => h >= d >= 1)
p::pType # params vector
k::kType # knot vector
c::cType # control points
N::NType # Spline coefficients (preallocated memory)
pVecType::Symbol
knotVecType::Symbol
extrapolate::Bool
iguesser::Guesser{tType}
linear_lookup::Bool
function BSplineApprox(u,
t,
d,
h,
p,
k,
c,
N,
pVecType,
knotVecType,
extrapolate,
assume_linear_t
)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(p), typeof(k), typeof(c), typeof(N), eltype(u)}(u,
t,
d,
h,
p,
k,
c,
N,
pVecType,
knotVecType,
extrapolate,
Guesser(t),
linear_lookup
)
end
end
function BSplineApprox(
u, t, d, h, pVecType, knotVecType; extrapolate = false, assume_linear_t = 1e-2)
u, t = munge_data(u, t)
n = length(t)
h < d + 1 && error("BSplineApprox needs at least d + 1, i.e. $(d+1) control points.")
s = zero(eltype(u))
p = zero(t)
k = zeros(eltype(t), h + d + 1)
l = zeros(eltype(u), n - 1)
p[1] = zero(eltype(t))
p[end] = one(eltype(t))
for i in 2:n
s += √((t[i] - t[i - 1])^2 + (u[i] - u[i - 1])^2)
l[i - 1] = s
end
if pVecType == :Uniform
for i in 2:(n - 1)
p[i] = p[1] + (i - 1) * (p[end] - p[1]) / (n - 1)
end
elseif pVecType == :ArcLen
for i in 2:(n - 1)
p[i] = p[1] + l[i - 1] / s * (p[end] - p[1])
end
end
lidx = 1
ridx = length(k)
while lidx <= (d + 1) && ridx >= (length(k) - d)
k[lidx] = p[1]
k[ridx] = p[end]
lidx += 1
ridx -= 1
end
ps = zeros(eltype(t), n - 2)
s = zero(eltype(t))
for i in 2:(n - 1)
s += p[i]
ps[i - 1] = s
end
if knotVecType == :Uniform
# uniformly spaced knot vector
# this method is not recommended because, if it is used with the chord length method for global interpolation,
# the system of linear equations would be singular.
for i in (d + 2):h
k[i] = k[1] + (i - d - 1) // (h - d) * (k[end] - k[1])
end
elseif knotVecType == :Average
# NOTE: verify that average method can be applied when size of k is less than size of p
# average spaced knot vector
idx = 1
if d + 2 <= h
k[d + 2] = 1 // d * ps[d]
end
for i in (d + 3):h
k[i] = 1 // d * (ps[idx + d] - ps[idx])
idx += 1
end
end
# control points
c = zeros(eltype(u), h)
c[1] = u[1]
c[end] = u[end]
q = zeros(eltype(u), n)
N = zeros(eltype(t), n, h)
for i in 1:n
spline_coefficients!(view(N, i, :), d, k, p[i])
end
for k in 2:(n - 1)
q[k] = u[k] - N[k, 1] * u[1] - N[k, h] * u[end]
end
Q = Matrix{eltype(u)}(undef, h - 2, 1)
for i in 2:(h - 1)
s = 0.0
for k in 2:(n - 1)
s += N[k, i] * q[k]
end
Q[i - 1] = s
end
N = N[2:(end - 1), 2:(h - 1)]
M = transpose(N) * N
P = M \ Q
c[2:(end - 1)] .= vec(P)
N = zeros(eltype(t), h)
BSplineApprox(
u, t, d, h, p, k, c, N, pVecType, knotVecType, extrapolate, assume_linear_t)
end
"""
CubicHermiteSpline(du, u, t; extrapolate = false, cache_parameters = false)
It is a Cubic Hermite interpolation, which is a piecewise third degree polynomial such that the value and the first derivative are equal to the given values at the data points.
## Arguments
- `du`: the derivative at the data points.
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
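## Example
A usage sketch; the derivative values `du` are arbitrary sample data and must match `u` in length:
```julia
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
A = CubicHermiteSpline(du, u, t)
A(100.0)
```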
"""
struct CubicHermiteSpline{uType, tType, IType, duType, pType, T} <: AbstractInterpolation{T}
du::duType
u::uType
t::tType
I::IType
p::CubicHermiteParameterCache{pType}
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function CubicHermiteSpline(
du, u, t, I, p, extrapolate, cache_parameters, assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), typeof(du), typeof(p.c₁), eltype(u)}(
du, u, t, I, p, extrapolate, Guesser(t), cache_parameters, linear_lookup)
end
end
function CubicHermiteSpline(
du, u, t; extrapolate = false, cache_parameters = false, assume_linear_t = 1e-2)
@assert length(u)==length(du) "Length of `u` is not equal to length of `du`."
u, t = munge_data(u, t)
linear_lookup = seems_linear(assume_linear_t, t)
p = CubicHermiteParameterCache(du, u, t, cache_parameters)
A = CubicHermiteSpline(
du, u, t, nothing, p, extrapolate, cache_parameters, linear_lookup)
I = cumulative_integral(A, cache_parameters)
CubicHermiteSpline(du, u, t, I, p, extrapolate, cache_parameters, linear_lookup)
end
"""
    PCHIPInterpolation(u, t; extrapolate = false, cache_parameters = false)
It is a PCHIP Interpolation, which is a type of [`CubicHermiteSpline`](@ref) where the derivative values `du` are derived from the input data
in such a way that the interpolation never overshoots the data. See [here](https://www.mathworks.com/content/dam/mathworks/mathworks-dot-com/moler/interp.pdf),
section 3.4 for more details.
## Arguments
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
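## Example
A usage sketch with arbitrary sample data; the resulting interpolation is shape-preserving and does not overshoot the data:
```julia
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
A = PCHIPInterpolation(u, t)
A(100.0)
```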
"""
function PCHIPInterpolation(
u, t; extrapolate = false, cache_parameters = false, assume_linear_t = 1e-2)
u, t = munge_data(u, t)
du = du_PCHIP(u, t)
CubicHermiteSpline(du, u, t; extrapolate, cache_parameters, assume_linear_t)
end
"""
    QuinticHermiteSpline(ddu, du, u, t; extrapolate = false, cache_parameters = false)
It is a Quintic Hermite interpolation, which is a piecewise fifth degree polynomial such that the value and the first and second derivatives are equal to the given values at the data points.
## Arguments
- `ddu`: the second derivative at the data points.
- `du`: the derivative at the data points.
- `u`: data points.
- `t`: time points.
## Keyword Arguments
- `extrapolate`: boolean value to allow extrapolation. Defaults to `false`.
- `cache_parameters`: precompute parameters at initialization for faster interpolation computations. Note: if activated, `u` and `t` should not be modified. Defaults to `false`.
- `assume_linear_t`: boolean value to specify a faster index lookup behaviour for
evenly-distributed abscissae. Alternatively, a numerical threshold may be specified
for a test based on the normalized standard deviation of the difference with respect
to the straight line (see [`looks_linear`](@ref)). Defaults to 1e-2.
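## Example
A usage sketch; `ddu` and `du` are arbitrary sample data and must match `u` in length:
```julia
ddu = [0.0, -0.00033, 0.0051, -0.0067, 0.0029, 0.0]
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
A = QuinticHermiteSpline(ddu, du, u, t)
A(100.0)
```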
"""
struct QuinticHermiteSpline{uType, tType, IType, duType, dduType, pType, T} <:
AbstractInterpolation{T}
ddu::dduType
du::duType
u::uType
t::tType
I::IType
p::QuinticHermiteParameterCache{pType}
extrapolate::Bool
iguesser::Guesser{tType}
cache_parameters::Bool
linear_lookup::Bool
function QuinticHermiteSpline(
ddu, du, u, t, I, p, extrapolate, cache_parameters, assume_linear_t)
linear_lookup = seems_linear(assume_linear_t, t)
new{typeof(u), typeof(t), typeof(I), typeof(du),
typeof(ddu), typeof(p.c₁), eltype(u)}(
ddu, du, u, t, I, p, extrapolate, Guesser(t), cache_parameters, linear_lookup)
end
end
function QuinticHermiteSpline(ddu, du, u, t; extrapolate = false,
cache_parameters = false, assume_linear_t = 1e-2)
@assert length(u)==length(du)==length(ddu) "Length of `u` is not equal to length of `du` or `ddu`."
u, t = munge_data(u, t)
linear_lookup = seems_linear(assume_linear_t, t)
p = QuinticHermiteParameterCache(ddu, du, u, t, cache_parameters)
A = QuinticHermiteSpline(
ddu, du, u, t, nothing, p, extrapolate, cache_parameters, linear_lookup)
I = cumulative_integral(A, cache_parameters)
QuinticHermiteSpline(
ddu, du, u, t, I, p, extrapolate, cache_parameters, linear_lookup)
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 6986 | function _interpolate(A, t)
((t < A.t[1] || t > A.t[end]) && !A.extrapolate) &&
throw(ExtrapolationError())
return _interpolate(A, t, A.iguesser)
end
# Linear Interpolation
function _interpolate(A::LinearInterpolation{<:AbstractVector}, t::Number, iguess)
if isnan(t)
# For correct derivative with NaN
idx = firstindex(A.u)
t1 = t2 = one(eltype(A.t))
u1 = u2 = one(eltype(A.u))
slope = t * get_parameters(A, idx)
else
idx = get_idx(A, t, iguess)
t1, t2 = A.t[idx], A.t[idx + 1]
u1, u2 = A.u[idx], A.u[idx + 1]
slope = get_parameters(A, idx)
end
Δt = t - t1
Δu = slope * Δt
val = u1
Δu_nan = any(isnan.(Δu))
if t == t2 && Δu_nan
val = u2
elseif !(iszero(Δt) && Δu_nan)
val += Δu
end
val = oftype(Δu, val)
val
end
function _interpolate(A::LinearInterpolation{<:AbstractArray}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt = t - A.t[idx]
slope = get_parameters(A, idx)
ax = axes(A.u)[1:(end - 1)]
return A.u[ax..., idx] + slope * Δt
end
# Quadratic Interpolation
_quad_interp_indices(A, t) = _quad_interp_indices(A, t, firstindex(A.t) - 1)
function _quad_interp_indices(A::QuadraticInterpolation, t::Number, iguess)
idx = get_idx(A, t, iguess; idx_shift = A.mode == :Backward ? -1 : 0, ub_shift = -2)
idx, idx + 1, idx + 2
end
function _interpolate(A::QuadraticInterpolation, t::Number, iguess)
i₀, i₁, i₂ = _quad_interp_indices(A, t, iguess)
l₀, l₁, l₂ = get_parameters(A, i₀)
u₀ = l₀ * (t - A.t[i₁]) * (t - A.t[i₂])
u₁ = l₁ * (t - A.t[i₀]) * (t - A.t[i₂])
u₂ = l₂ * (t - A.t[i₀]) * (t - A.t[i₁])
return u₀ + u₁ + u₂
end
# Lagrange Interpolation
function _interpolate(A::LagrangeInterpolation{<:AbstractVector}, t::Number, iguess)
idx = get_idx(A, t, iguess)
findRequiredIdxs!(A, t, idx)
if A.t[A.idxs[1]] == t
return A.u[A.idxs[1]]
end
N = zero(A.u[1])
D = zero(A.t[1])
tmp = N
for i in 1:length(A.idxs)
if isnan(A.bcache[A.idxs[i]])
mult = one(A.t[1])
for j in 1:(i - 1)
mult *= (A.t[A.idxs[i]] - A.t[A.idxs[j]])
end
for j in (i + 1):length(A.idxs)
mult *= (A.t[A.idxs[i]] - A.t[A.idxs[j]])
end
A.bcache[A.idxs[i]] = mult
else
mult = A.bcache[A.idxs[i]]
end
tmp = inv((t - A.t[A.idxs[i]]) * mult)
D += tmp
N += (tmp * A.u[A.idxs[i]])
end
N / D
end
function _interpolate(A::LagrangeInterpolation{<:AbstractMatrix}, t::Number, iguess)
idx = get_idx(A, t, iguess)
findRequiredIdxs!(A, t, idx)
if A.t[A.idxs[1]] == t
return A.u[:, A.idxs[1]]
end
N = zero(A.u[:, 1])
D = zero(A.t[1])
tmp = D
for i in 1:length(A.idxs)
if isnan(A.bcache[A.idxs[i]])
mult = one(A.t[1])
for j in 1:(i - 1)
mult *= (A.t[A.idxs[i]] - A.t[A.idxs[j]])
end
for j in (i + 1):length(A.idxs)
mult *= (A.t[A.idxs[i]] - A.t[A.idxs[j]])
end
A.bcache[A.idxs[i]] = mult
else
mult = A.bcache[A.idxs[i]]
end
tmp = inv((t - A.t[A.idxs[i]]) * mult)
D += tmp
@. N += (tmp * A.u[:, A.idxs[i]])
end
N / D
end
function _interpolate(A::AkimaInterpolation{<:AbstractVector}, t::Number, iguess)
idx = get_idx(A, t, iguess)
wj = t - A.t[idx]
@evalpoly wj A.u[idx] A.b[idx] A.c[idx] A.d[idx]
end
# Constant Interpolation
function _interpolate(A::ConstantInterpolation{<:AbstractVector}, t::Number, iguess)
if A.dir === :left
# :left means that value to the left is used for interpolation
idx = get_idx(A, t, iguess; lb = 1, ub_shift = 0)
else
# :right means that value to the right is used for interpolation
idx = get_idx(A, t, iguess; side = :first, lb = 1, ub_shift = 0)
end
A.u[idx]
end
function _interpolate(A::ConstantInterpolation{<:AbstractMatrix}, t::Number, iguess)
if A.dir === :left
# :left means that value to the left is used for interpolation
idx = get_idx(A, t, iguess; lb = 1, ub_shift = 0)
else
# :right means that value to the right is used for interpolation
idx = get_idx(A, t, iguess; side = :first, lb = 1, ub_shift = 0)
end
A.u[:, idx]
end
# QuadraticSpline Interpolation
function _interpolate(A::QuadraticSpline{<:AbstractVector}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Cᵢ = A.u[idx]
Δt = t - A.t[idx]
σ = get_parameters(A, idx)
return A.z[idx] * Δt + σ * Δt^2 + Cᵢ
end
# CubicSpline Interpolation
function _interpolate(A::CubicSpline{<:AbstractVector}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt₁ = t - A.t[idx]
Δt₂ = A.t[idx + 1] - t
I = (A.z[idx] * Δt₂^3 + A.z[idx + 1] * Δt₁^3) / (6A.h[idx + 1])
c₁, c₂ = get_parameters(A, idx)
C = c₁ * Δt₁
D = c₂ * Δt₂
I + C + D
end
# BSpline Curve Interpolation
function _interpolate(A::BSplineInterpolation{<:AbstractVector{<:Number}},
t::Number,
iguess)
t < A.t[1] && return A.u[1]
t > A.t[end] && return A.u[end]
# change t into param [0 1]
idx = get_idx(A, t, iguess)
t = A.p[idx] + (t - A.t[idx]) / (A.t[idx + 1] - A.t[idx]) * (A.p[idx + 1] - A.p[idx])
n = length(A.t)
N = t isa ForwardDiff.Dual ? zeros(eltype(t), n) : A.N
nonzero_coefficient_idxs = spline_coefficients!(N, A.d, A.k, t)
ucum = zero(eltype(A.u))
for i in nonzero_coefficient_idxs
ucum += N[i] * A.c[i]
end
ucum
end
# BSpline Curve Approx
function _interpolate(A::BSplineApprox{<:AbstractVector{<:Number}}, t::Number, iguess)
t < A.t[1] && return A.u[1]
t > A.t[end] && return A.u[end]
# change t into param [0 1]
idx = get_idx(A, t, iguess)
t = A.p[idx] + (t - A.t[idx]) / (A.t[idx + 1] - A.t[idx]) * (A.p[idx + 1] - A.p[idx])
N = t isa ForwardDiff.Dual ? zeros(eltype(t), A.h) : A.N
nonzero_coefficient_idxs = spline_coefficients!(N, A.d, A.k, t)
ucum = zero(eltype(A.u))
for i in nonzero_coefficient_idxs
ucum += N[i] * A.c[i]
end
ucum
end
# Cubic Hermite Spline
function _interpolate(
A::CubicHermiteSpline{<:AbstractVector{<:Number}}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt₀ = t - A.t[idx]
Δt₁ = t - A.t[idx + 1]
out = A.u[idx] + Δt₀ * A.du[idx]
c₁, c₂ = get_parameters(A, idx)
out += Δt₀^2 * (c₁ + Δt₁ * c₂)
out
end
# Quintic Hermite Spline
function _interpolate(
A::QuinticHermiteSpline{<:AbstractVector{<:Number}}, t::Number, iguess)
idx = get_idx(A, t, iguess)
Δt₀ = t - A.t[idx]
Δt₁ = t - A.t[idx + 1]
out = A.u[idx] + Δt₀ * (A.du[idx] + A.ddu[idx] * Δt₀ / 2)
c₁, c₂, c₃ = get_parameters(A, idx)
out += Δt₀^3 * (c₁ + Δt₁ * (c₂ + c₃ * Δt₁))
out
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 7041 | function findRequiredIdxs!(A::LagrangeInterpolation, t, idx)
n = length(A.t) - 1
i_min, idx_min, idx_max = if t == A.t[idx]
A.idxs[1] = idx
2, idx, idx
else
1, idx + 1, idx
end
for i in i_min:(n + 1)
if idx_min == 1
A.idxs[i:end] .= range(idx_max + 1, idx_max + (n + 2 - i))
break
elseif idx_max == length(A.t)
A.idxs[i:end] .= (idx_min - 1):-1:(idx_min - (n + 2 - i))
break
else
left_diff = abs(t - A.t[idx_min - 1])
right_diff = abs(t - A.t[idx_max + 1])
left_expand = left_diff <= right_diff
end
if left_expand
idx_min -= 1
A.idxs[i] = idx_min
else
idx_max += 1
A.idxs[i] = idx_max
end
end
return idx
end
function spline_coefficients!(N, d, k, u::Number)
N .= zero(u)
if u == k[1]
N[1] = one(u)
return 1:1
elseif u == k[end]
N[end] = one(u)
return length(N):length(N)
else
i = findfirst(x -> x > u, k) - 1
N[i] = one(u)
for deg in 1:d
N[i - deg] = (k[i + 1] - u) / (k[i + 1] - k[i - deg + 1]) * N[i - deg + 1]
for j in (i - deg + 1):(i - 1)
N[j] = (u - k[j]) / (k[j + deg] - k[j]) * N[j] +
(k[j + deg + 1] - u) / (k[j + deg + 1] - k[j + 1]) * N[j + 1]
end
N[i] = (u - k[i]) / (k[i + deg] - k[i]) * N[i]
end
return (i - d):i
end
end
function spline_coefficients!(N, d, k, u::AbstractVector)
for i in 1:size(N)[2]
spline_coefficients!(view(N, i, :), d, k, u[i])
end
return nothing
end
# Helper functions for data manipulation
function munge_data(u::AbstractVector{<:Real}, t::AbstractVector{<:Real})
return u, t
end
function munge_data(u::AbstractVector, t::AbstractVector)
Tu = Base.nonmissingtype(eltype(u))
Tt = Base.nonmissingtype(eltype(t))
@assert length(t) == length(u)
non_missing_indices = collect(
i for i in 1:length(t)
if !ismissing(u[i]) && !ismissing(t[i])
)
u = Tu.([u[i] for i in non_missing_indices])
t = Tt.([t[i] for i in non_missing_indices])
return u, t
end
function munge_data(U::StridedMatrix, t::AbstractVector)
TU = Base.nonmissingtype(eltype(U))
Tt = Base.nonmissingtype(eltype(t))
@assert length(t) == size(U, 2)
non_missing_indices = collect(
i for i in 1:length(t)
if !any(ismissing, U[:, i]) && !ismissing(t[i])
)
U = hcat([TU.(U[:, i]) for i in non_missing_indices]...)
t = Tt.([t[i] for i in non_missing_indices])
return U, t
end
function munge_data(U::AbstractArray{T, N}, t) where {T, N}
TU = Base.nonmissingtype(eltype(U))
Tt = Base.nonmissingtype(eltype(t))
@assert length(t) == size(U, ndims(U))
ax = axes(U)[1:(end - 1)]
non_missing_indices = collect(
i for i in 1:length(t)
if !any(ismissing, U[ax..., i]) && !ismissing(t[i])
)
U = cat([TU.(U[ax..., i]) for i in non_missing_indices]...; dims = ndims(U))
t = Tt.([t[i] for i in non_missing_indices])
return U, t
end
seems_linear(assume_linear_t::Bool, _) = assume_linear_t
seems_linear(assume_linear_t::Number, t) = looks_linear(t; threshold = assume_linear_t)
"""
looks_linear(t; threshold = 1e-2)
Determine whether the abscissae `t` are regularly distributed by taking the standard
deviation of the difference between `t` and the straight line linking its first and
last elements, normalized by the range of `t`. If this standard deviation is below
the given `threshold`, the vector looks linear (returns `true`). Internal function;
the interface may change.
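## Example
An illustrative sketch (values chosen arbitrarily):
```julia
looks_linear(range(0.0, 1.0, length = 1000)) # true: evenly spaced
looks_linear([0.0, 0.1, 0.5, 5.0])           # false: strongly non-uniform
```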
"""
function looks_linear(t; threshold = 1e-2)
length(t) <= 2 && return true
t_0, t_f = first(t), last(t)
t_span = t_f - t_0
tspan_over_N = t_span * length(t)^(-1)
norm_var = sum(
(t_i - t_0 - i * tspan_over_N)^2 for (i, t_i) in enumerate(t)
) / (length(t) * t_span^2)
norm_var < threshold^2
end
function get_idx(A::AbstractInterpolation, t, iguess::Union{<:Integer, Guesser}; lb = 1,
ub_shift = -1, idx_shift = 0, side = :last)
tvec = A.t
ub = length(tvec) + ub_shift
return if side == :last
clamp(searchsortedlastcorrelated(tvec, t, iguess) + idx_shift, lb, ub)
elseif side == :first
clamp(searchsortedfirstcorrelated(tvec, t, iguess) + idx_shift, lb, ub)
else
error("side must be :first or :last")
end
end
function cumulative_integral(A, cache_parameters)
if cache_parameters && hasmethod(_integral, Tuple{typeof(A), Number, Number})
integral_values = [_integral(A, idx, A.t[idx + 1]) - _integral(A, idx, A.t[idx])
for idx in 1:(length(A.t) - 1)]
pushfirst!(integral_values, zero(first(integral_values)))
cumsum(integral_values)
else
promote_type(eltype(A.u), eltype(A.t))[]
end
end
function get_parameters(A::LinearInterpolation, idx)
if A.cache_parameters
A.p.slope[idx]
else
linear_interpolation_parameters(A.u, A.t, idx)
end
end
function get_parameters(A::QuadraticInterpolation, idx)
if A.cache_parameters
A.p.l₀[idx], A.p.l₁[idx], A.p.l₂[idx]
else
quadratic_interpolation_parameters(A.u, A.t, idx)
end
end
function get_parameters(A::QuadraticSpline, idx)
if A.cache_parameters
A.p.σ[idx]
else
quadratic_spline_parameters(A.z, A.t, idx)
end
end
function get_parameters(A::CubicSpline, idx)
if A.cache_parameters
A.p.c₁[idx], A.p.c₂[idx]
else
cubic_spline_parameters(A.u, A.h, A.z, idx)
end
end
function get_parameters(A::CubicHermiteSpline, idx)
if A.cache_parameters
A.p.c₁[idx], A.p.c₂[idx]
else
cubic_hermite_spline_parameters(A.du, A.u, A.t, idx)
end
end
function get_parameters(A::QuinticHermiteSpline, idx)
if A.cache_parameters
A.p.c₁[idx], A.p.c₂[idx], A.p.c₃[idx]
else
quintic_hermite_spline_parameters(A.ddu, A.du, A.u, A.t, idx)
end
end
function du_PCHIP(u, t)
h = diff(u)
δ = h ./ diff(t)
s = sign.(δ)
function _du(k)
sₖ₋₁, sₖ = if k == 1
s[1], s[2]
elseif k == lastindex(t)
s[end - 1], s[end]
else
s[k - 1], s[k]
end
if sₖ₋₁ == 0 && sₖ == 0
zero(eltype(δ))
elseif sₖ₋₁ == sₖ
if k == 1
((2 * h[1] + h[2]) * δ[1] - h[1] * δ[2]) / (h[1] + h[2])
elseif k == lastindex(t)
((2 * h[end] + h[end - 1]) * δ[end] - h[end] * δ[end - 1]) /
(h[end] + h[end - 1])
else
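                # Interior points: a weighted harmonic mean of the adjacent secant
                # slopes, (w₁ + w₂) / (w₁/δₖ₋₁ + w₂/δₖ), written here in product form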
w₁ = 2h[k] + h[k - 1]
w₂ = h[k] + 2h[k - 1]
δ[k - 1] * δ[k] * (w₁ + w₂) / (w₁ * δ[k] + w₂ * δ[k - 1])
end
else
zero(eltype(δ))
end
end
return _du.(eachindex(t))
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 2383 | import Base: append!, push!
function add_integral_values!(A)
integral_values = cumsum([_integral(A, idx, A.t[idx + 1]) - _integral(A, idx, A.t[idx])
for idx in (length(A.I) - 1):(length(A.t) - 1)])
pop!(A.I)
integral_values .+= last(A.I)
append!(A.I, integral_values)
end
function push!(A::LinearInterpolation{U, T}, u::eltype(U), t::eltype(T)) where {U, T}
push!(A.u, u)
push!(A.t, t)
if A.cache_parameters
slope = linear_interpolation_parameters(A.u, A.t, length(A.t) - 1)
push!(A.p.slope, slope)
add_integral_values!(A)
end
A
end
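# Usage sketch: grow an interpolation in place with a new sample, e.g.
#   A = LinearInterpolation([1.0, 2.0], [0.0, 1.0])
#   push!(A, 3.0, 2.0) # A(1.5) == 2.5 afterwards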
function push!(A::QuadraticInterpolation{U, T}, u::eltype(U), t::eltype(T)) where {U, T}
push!(A.u, u)
push!(A.t, t)
if A.cache_parameters
l₀, l₁, l₂ = quadratic_interpolation_parameters(A.u, A.t, length(A.t) - 2)
push!(A.p.l₀, l₀)
push!(A.p.l₁, l₁)
push!(A.p.l₂, l₂)
add_integral_values!(A)
end
A
end
function push!(A::ConstantInterpolation{U, T}, u::eltype(U), t::eltype(T)) where {U, T}
push!(A.u, u)
push!(A.t, t)
if A.cache_parameters
add_integral_values!(A)
end
A
end
function append!(
A::LinearInterpolation{U, T}, u::U, t::T) where {
U, T}
length_old = length(A.t)
u, t = munge_data(u, t)
append!(A.u, u)
append!(A.t, t)
if A.cache_parameters
slope = linear_interpolation_parameters.(
Ref(A.u), Ref(A.t), length_old:(length(A.t) - 1))
append!(A.p.slope, slope)
add_integral_values!(A)
end
A
end
function append!(
A::ConstantInterpolation{U, T}, u::U, t::T) where {
U, T}
u, t = munge_data(u, t)
append!(A.u, u)
append!(A.t, t)
if A.cache_parameters
add_integral_values!(A)
end
A
end
function append!(
A::QuadraticInterpolation{U, T}, u::U, t::T) where {
U, T}
length_old = length(A.t)
u, t = munge_data(u, t)
append!(A.u, u)
append!(A.t, t)
if A.cache_parameters
parameters = quadratic_interpolation_parameters.(
Ref(A.u), Ref(A.t), (length_old - 1):(length(A.t) - 2))
l₀, l₁, l₂ = collect.(eachrow(hcat(collect.(parameters)...)))
append!(A.p.l₀, l₀)
append!(A.p.l₁, l₁)
append!(A.p.l₂, l₂)
add_integral_values!(A)
end
A
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 5261 | struct LinearParameterCache{pType}
slope::pType
end
function LinearParameterCache(u, t, cache_parameters)
if cache_parameters
slope = linear_interpolation_parameters.(Ref(u), Ref(t), 1:(length(t) - 1))
LinearParameterCache(slope)
else
# Compute parameters once to infer types
slope = linear_interpolation_parameters(u, t, 1)
LinearParameterCache(typeof(slope)[])
end
end
# Prevent e.g. Inf - Inf = NaN
function safe_diff(b, a::T) where {T}
b == a ? zero(T) : b - a
end
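# e.g. safe_diff(Inf, Inf) returns 0.0 instead of NaN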
function linear_interpolation_parameters(u::AbstractArray{T, N}, t, idx) where {T, N}
Δu = if N > 1
ax = axes(u)
safe_diff.(
u[ax[1:(end - 1)]..., (idx + 1)], u[ax[1:(end - 1)]..., idx])
else
safe_diff(u[idx + 1], u[idx])
end
Δt = t[idx + 1] - t[idx]
slope = Δu / Δt
slope = iszero(Δt) ? zero(slope) : slope
return slope
end
struct QuadraticParameterCache{pType}
l₀::pType
l₁::pType
l₂::pType
end
function QuadraticParameterCache(u, t, cache_parameters)
if cache_parameters
parameters = quadratic_interpolation_parameters.(
Ref(u), Ref(t), 1:(length(t) - 2))
l₀, l₁, l₂ = collect.(eachrow(stack(collect.(parameters))))
QuadraticParameterCache(l₀, l₁, l₂)
else
# Compute parameters once to infer types
l₀, l₁, l₂ = quadratic_interpolation_parameters(u, t, 1)
pType = typeof(l₀)
QuadraticParameterCache(pType[], pType[], pType[])
end
end
function quadratic_interpolation_parameters(u, t, idx)
if u isa AbstractMatrix
u₀ = u[:, idx]
u₁ = u[:, idx + 1]
u₂ = u[:, idx + 2]
else
u₀ = u[idx]
u₁ = u[idx + 1]
u₂ = u[idx + 2]
end
t₀ = t[idx]
t₁ = t[idx + 1]
t₂ = t[idx + 2]
Δt₀ = t₁ - t₀
Δt₁ = t₂ - t₁
Δt₂ = t₂ - t₀
l₀ = u₀ / (Δt₀ * Δt₂)
l₁ = -u₁ / (Δt₀ * Δt₁)
l₂ = u₂ / (Δt₂ * Δt₁)
return l₀, l₁, l₂
end
struct QuadraticSplineParameterCache{pType}
σ::pType
end
function QuadraticSplineParameterCache(z, t, cache_parameters)
if cache_parameters
σ = quadratic_spline_parameters.(Ref(z), Ref(t), 1:(length(t) - 1))
QuadraticSplineParameterCache(σ)
else
# Compute parameters once to infer types
σ = quadratic_spline_parameters(z, t, 1)
QuadraticSplineParameterCache(typeof(σ)[])
end
end
function quadratic_spline_parameters(z, t, idx)
σ = 1 // 2 * (z[idx + 1] - z[idx]) / (t[idx + 1] - t[idx])
return σ
end
struct CubicSplineParameterCache{pType}
c₁::pType
c₂::pType
end
function CubicSplineParameterCache(u, h, z, cache_parameters)
if cache_parameters
parameters = cubic_spline_parameters.(
Ref(u), Ref(h), Ref(z), 1:(size(u)[end] - 1))
c₁, c₂ = collect.(eachrow(stack(collect.(parameters))))
CubicSplineParameterCache(c₁, c₂)
else
# Compute parameters once to infer types
c₁, c₂ = cubic_spline_parameters(u, h, z, 1)
pType = typeof(c₁)
CubicSplineParameterCache(pType[], pType[])
end
end
function cubic_spline_parameters(u, h, z, idx)
c₁ = (u[idx + 1] / h[idx + 1] - z[idx + 1] * h[idx + 1] / 6)
c₂ = (u[idx] / h[idx + 1] - z[idx] * h[idx + 1] / 6)
return c₁, c₂
end
struct CubicHermiteParameterCache{pType}
c₁::pType
c₂::pType
end
function CubicHermiteParameterCache(du, u, t, cache_parameters)
if cache_parameters
parameters = cubic_hermite_spline_parameters.(
Ref(du), Ref(u), Ref(t), 1:(length(t) - 1))
c₁, c₂ = collect.(eachrow(stack(collect.(parameters))))
CubicHermiteParameterCache(c₁, c₂)
else
# Compute parameters once to infer types
c₁, c₂ = cubic_hermite_spline_parameters(du, u, t, 1)
pType = typeof(c₁)
CubicHermiteParameterCache(pType[], pType[])
end
end
function cubic_hermite_spline_parameters(du, u, t, idx)
Δt = t[idx + 1] - t[idx]
u₀ = u[idx]
u₁ = u[idx + 1]
du₀ = du[idx]
du₁ = du[idx + 1]
c₁ = (u₁ - u₀ - du₀ * Δt) / Δt^2
c₂ = (du₁ - du₀ - 2c₁ * Δt) / Δt^2
return c₁, c₂
end
struct QuinticHermiteParameterCache{pType}
c₁::pType
c₂::pType
c₃::pType
end
function QuinticHermiteParameterCache(ddu, du, u, t, cache_parameters)
if cache_parameters
parameters = quintic_hermite_spline_parameters.(
Ref(ddu), Ref(du), Ref(u), Ref(t), 1:(length(t) - 1))
c₁, c₂, c₃ = collect.(eachrow(stack(collect.(parameters))))
QuinticHermiteParameterCache(c₁, c₂, c₃)
else
# Compute parameters once to infer types
c₁, c₂, c₃ = quintic_hermite_spline_parameters(ddu, du, u, t, 1)
pType = typeof(c₁)
QuinticHermiteParameterCache(pType[], pType[], pType[])
end
end
function quintic_hermite_spline_parameters(ddu, du, u, t, idx)
Δt = t[idx + 1] - t[idx]
u₀ = u[idx]
u₁ = u[idx + 1]
du₀ = du[idx]
du₁ = du[idx + 1]
ddu₀ = ddu[idx]
ddu₁ = ddu[idx + 1]
c₁ = (u₁ - u₀ - du₀ * Δt - ddu₀ * Δt^2 / 2) / Δt^3
c₂ = (3u₀ - 3u₁ + 2(du₀ + du₁ / 2)Δt + ddu₀ * Δt^2 / 2) / Δt^4
c₃ = (6u₁ - 6u₀ - 3(du₀ + du₁)Δt + (ddu₁ - ddu₀)Δt^2 / 2) / Δt^5
return c₁, c₂, c₃
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 10512 | ################################################################################
# Type recipes #
################################################################################
function to_plottable(A::AbstractInterpolation; plotdensity = 10_000, denseplot = true)
t = sort(A.t)
start = t[1]
stop = t[end]
if denseplot
plott = collect(range(start, stop = stop, length = plotdensity))
else
plott = t
end
output = A.(plott)
plott, output
end
@recipe function f(A::AbstractInterpolation; plotdensity = 10_000, denseplot = true)
@series begin
seriestype := :path
label --> string(nameof(typeof(A)))
to_plottable(A; plotdensity = plotdensity, denseplot = denseplot)
end
@series begin
seriestype := :scatter
label --> "Data points"
A.t, A.u
end
end
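# Usage sketch (assuming Plots.jl is loaded): `plot(A)` for any
# `AbstractInterpolation` draws the dense interpolated curve together with a
# scatter of the underlying data points via the type recipe above.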
################################################################################
# Series recipes #
################################################################################
############################################################
# Interpolations #
############################################################
########################################
# Linear Interpolation #
########################################
@recipe function f(::Type{Val{:linear_interp}},
x,
y,
z;
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(LinearInterpolation(T.(y), T.(x); extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "LinearInterpolation"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# Quadratic Interpolation #
########################################
@recipe function f(::Type{Val{:quadratic_interp}},
x,
y,
z;
mode = :Forward,
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(
QuadraticInterpolation(T.(y),
T.(x), mode; extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "QuadraticInterpolation"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# Lagrange Interpolation #
########################################
@recipe function f(::Type{Val{:lagrange_interp}},
x, y, z;
n = length(x) - 1,
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(LagrangeInterpolation(T.(y),
T.(x),
n; extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "LagrangeInterpolation"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# Quadratic Spline #
########################################
@recipe function f(::Type{Val{:quadratic_spline}},
x,
y,
z;
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(QuadraticSpline(T.(y),
T.(x); extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "QuadraticSpline"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# Cubic Spline #
########################################
@recipe function f(::Type{Val{:cubic_spline}},
x,
y,
z;
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(CubicSpline(T.(y),
T.(x); extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "CubicSpline"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# Akima interpolation #
########################################
@recipe function f(::Type{Val{:akima_interp}},
x,
y,
z;
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(AkimaInterpolation(T.(y),
T.(x); extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "AkimaInterpolation"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# B-spline Interpolation #
########################################
@recipe function f(::Type{Val{:bspline_interp}},
x, y, z;
d = 5,
pVecType = :ArcLen,
knotVecType = :Average,
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(
BSplineInterpolation(T.(y),
T.(x),
d,
pVecType,
knotVecType; extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "BSplineInterpolation"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# B-spline (approximation) #
########################################
@recipe function f(::Type{Val{:bspline_approx}},
x, y, z;
d = 5,
h = length(x) - 1,
pVecType = :ArcLen,
knotVecType = :Average,
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(
BSplineApprox(T.(y),
T.(x),
d,
h,
pVecType,
knotVecType; extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "BSplineApprox"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# Cubic Hermite Spline #
########################################
@recipe function f(::Type{Val{:cubic_hermite_spline}},
x,
y,
z;
du = nothing,
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
isnothing(du) && error("Provide `du` as a keyword argument.")
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(
CubicHermiteSpline(T.(du), T.(y),
T.(x); extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "CubicHermiteSpline"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# PCHIP Interpolation #
########################################
@recipe function f(::Type{Val{:pchip_interp}},
x,
y,
z;
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(PCHIPInterpolation(T.(y),
T.(x); extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "PCHIP Interpolation"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
########################################
# Quintic Hermite Spline #
########################################
@recipe function f(::Type{Val{:quintic_hermite_spline}},
x,
y,
z;
du = nothing,
ddu = nothing,
extrapolate = false,
safetycopy = false,
plotdensity = 10_000,
denseplot = true)
(isnothing(du) || isnothing(ddu)) &&
error("Provide `du` and `ddu` as keyword arguments.")
T = promote_type(eltype(y), eltype(x))
nx, ny = to_plottable(
QuinticHermiteSpline(T.(ddu), T.(du), T.(y),
T.(x); extrapolate, safetycopy);
plotdensity = plotdensity,
denseplot = denseplot)
@series begin
seriestype := :path
label --> "QuinticHermiteSpline"
x := nx
y := ny
end
@series begin
seriestype := :scatter
label --> "Data points"
x := x
y := y
end
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 1877 | ###################### Generic Dispatches ######################
function Base.show(io::IO, mime::MIME"text/plain", interp::AbstractInterpolation)
print(io, get_show(interp))
header = ["time", get_names(interp.u)...]
data = hcat(interp.t, get_data(interp.u))
pretty_table(io, data; header = header, vcrop_mode = :middle)
end
function get_show(interp::AbstractInterpolation)
return string(nameof(typeof(interp))) * " with $(length(interp.t)) points\n"
end
function get_data(u::AbstractVector)
return u
end
function get_data(u::AbstractVector{<:AbstractVector})
return reduce(hcat, u)'
end
function get_data(u::AbstractMatrix)
return u'
end
function get_names(u::AbstractVector)
return ["u"]
end
function get_names(u::AbstractVector{<:AbstractVector})
return ["u$i" for i in eachindex(first(u))]
end
function get_names(u::AbstractMatrix)
return ["u$i" for i in axes(u, 1)]
end
###################### Specific Dispatches ######################
function get_show(interp::QuadraticInterpolation)
return string(nameof(typeof(interp))) *
" with $(length(interp.t)) points, $(interp.mode) mode\n"
end
function get_show(interp::LagrangeInterpolation)
return string(nameof(typeof(interp))) *
" with $(length(interp.t)) points, with order $(interp.n)\n"
end
function get_show(interp::ConstantInterpolation)
return string(nameof(typeof(interp))) *
" with $(length(interp.t)) points, in $(interp.dir) direction\n"
end
function get_show(interp::BSplineInterpolation)
return string(nameof(typeof(interp))) *
" with $(length(interp.t)) points, with degree $(interp.d)\n"
end
function get_show(interp::BSplineApprox)
return string(nameof(typeof(interp))) *
" with $(length(interp.t)) points, with degree $(interp.d), number of control points $(interp.h)\n"
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 11540 | using DataInterpolations, Test
using FindFirstFunctions: searchsortedfirstcorrelated
using FiniteDifferences
using DataInterpolations: derivative
using Symbolics
using StableRNGs
using RegularizationTools
using Optim
using ForwardDiff
function test_derivatives(method; args = [], kwargs = [], name::String)
func = method(args...; kwargs..., extrapolate = true)
(; t) = func
trange = collect(range(minimum(t) - 5.0, maximum(t) + 5.0, step = 0.1))
trange_exclude = filter(x -> !in(x, t), trange)
@testset "$name" begin
# Rest of the points
for _t in trange_exclude
cdiff = central_fdm(5, 1; geom = true)(func, _t)
adiff = derivative(func, _t)
@test isapprox(cdiff, adiff, atol = 1e-8)
adiff2 = derivative(func, _t, 2)
cdiff2 = central_fdm(5, 1; geom = true)(t -> derivative(func, t), _t)
@test isapprox(cdiff2, adiff2, atol = 1e-8)
end
# Interpolation time points
for _t in t[2:(end - 1)]
if func isa BSplineInterpolation || func isa BSplineApprox ||
func isa CubicHermiteSpline
fdiff = forward_fdm(5, 1; geom = true)(func, _t)
fdiff2 = forward_fdm(5, 1; geom = true)(t -> derivative(func, t), _t)
else
fdiff = backward_fdm(5, 1; geom = true)(func, _t)
fdiff2 = backward_fdm(5, 1; geom = true)(t -> derivative(func, t), _t)
end
adiff = derivative(func, _t)
adiff2 = derivative(func, _t, 2)
@test isapprox(fdiff, adiff, atol = 1e-8)
@test isapprox(fdiff2, adiff2, atol = 1e-8)
# Cached index
if hasproperty(func, :iguesser) && !func.iguesser.linear_lookup
@test abs(func.iguesser.idx_prev[] -
searchsortedfirstcorrelated(func.t, _t, func.iguesser(_t))) <= 1
end
end
# t = t0
fdiff = forward_fdm(5, 1; geom = true)(func, t[1])
adiff = derivative(func, t[1])
@test isapprox(fdiff, adiff, atol = 1e-8)
if !(func isa BSplineInterpolation || func isa BSplineApprox)
fdiff2 = forward_fdm(5, 1; geom = true)(t -> derivative(func, t), t[1])
adiff2 = derivative(func, t[1], 2)
@test isapprox(fdiff2, adiff2, atol = 1e-8)
end
# t = tend
fdiff = backward_fdm(5, 1; geom = true)(func, t[end])
adiff = derivative(func, t[end])
@test isapprox(fdiff, adiff, atol = 1e-8)
if !(func isa BSplineInterpolation || func isa BSplineApprox)
fdiff2 = backward_fdm(5, 1; geom = true)(t -> derivative(func, t), t[end])
adiff2 = derivative(func, t[end], 2)
@test isapprox(fdiff2, adiff2, atol = 1e-8)
end
end
@test_throws DataInterpolations.DerivativeNotFoundError derivative(
func, t[1], 3)
func = method(args...)
@test_throws DataInterpolations.ExtrapolationError derivative(func, t[1] - 1.0)
@test_throws DataInterpolations.ExtrapolationError derivative(func, t[end] + 1.0)
@test_throws DataInterpolations.DerivativeNotFoundError derivative(
func, t[1], 3)
end
@testset "Linear Interpolation" begin
u = vcat(collect(1:5), 2 * collect(6:10))
t = 1.0collect(1:10)
test_derivatives(
LinearInterpolation; args = [u, t], name = "Linear Interpolation (Vector)")
u = vcat(2.0collect(1:10)', 3.0collect(1:10)')
test_derivatives(
LinearInterpolation; args = [u, t], name = "Linear Interpolation (Matrix)")
# Issue: https://github.com/SciML/DataInterpolations.jl/issues/303
u = [3.0, 3.0]
t = [0.0, 2.0]
test_derivatives(
LinearInterpolation; args = [u, t], name = "Linear Interpolation with two points")
end
@testset "Quadratic Interpolation" begin
u = [1.0, 4.0, 9.0, 16.0]
t = [1.0, 2.0, 3.0, 4.0]
test_derivatives(QuadraticInterpolation, args = [u, t],
name = "Quadratic Interpolation (Vector)")
test_derivatives(QuadraticInterpolation;
args = [u, t, :Backward],
name = "Quadratic Interpolation (Vector), backward")
u = [1.0 4.0 9.0 16.0; 1.0 4.0 9.0 16.0]
test_derivatives(QuadraticInterpolation;
args = [u, t],
name = "Quadratic Interpolation (Matrix)")
end
@testset "Lagrange Interpolation" begin
u = [1.0, 4.0, 9.0]
t = [1.0, 2.0, 3.0]
test_derivatives(
LagrangeInterpolation; args = [u, t], name = "Lagrange Interpolation (Vector)")
u = [1.0 4.0 9.0; 1.0 2.0 3.0]
test_derivatives(
LagrangeInterpolation; args = [u, t], name = "Lagrange Interpolation (Matrix)")
u = [[1.0, 4.0, 9.0], [3.0, 7.0, 4.0], [5.0, 4.0, 1.0]]
test_derivatives(LagrangeInterpolation; args = [u, t],
name = "Lagrange Interpolation (Vector of Vectors)")
u = [[3.0 1.0 4.0; 1.0 5.0 9.0], [2.0 6.0 5.0; 3.0 5.0 8.0], [9.0 7.0 9.0; 3.0 2.0 3.0]]
test_derivatives(LagrangeInterpolation; args = [u, t],
name = "Lagrange Interpolation (Vector of Matrices)")
end
@testset "Akima Interpolation" begin
u = [0.0, 2.0, 1.0, 3.0, 2.0, 6.0, 5.5, 5.5, 2.7, 5.1, 3.0]
t = collect(0.0:10.0)
test_derivatives(AkimaInterpolation; args = [u, t], name = "Akima Interpolation")
@testset "Akima smooth derivative at end points" begin
A = AkimaInterpolation(u, t)
@test derivative(A, t[1]) ≈ derivative(A, nextfloat(t[1]))
@test derivative(A, t[end]) ≈ derivative(A, prevfloat(t[end]))
end
end
@testset "Constant Interpolation" begin
u = [0.0, 2.0, 1.0, 3.0, 2.0, 6.0, 5.5, 5.5, 2.7, 5.1, 3.0]
t = collect(0.0:11.0)
A = ConstantInterpolation(u, t)
t2 = collect(0.0:10.0)
@test all(isnan, derivative.(Ref(A), t))
@test all(derivative.(Ref(A), t2 .+ 0.1) .== 0.0)
end
@testset "Quadratic Spline" begin
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
    test_derivatives(
        QuadraticSpline; args = [u, t], name = "Quadratic Spline (Vector)")
    u = [[1.0, 2.0, 9.0], [3.0, 7.0, 5.0], [5.0, 4.0, 1.0]]
    test_derivatives(QuadraticSpline; args = [u, t],
        name = "Quadratic Spline (Vector of Vectors)")
    u = [[1.0 4.0 9.0; 5.0 9.0 2.0], [3.0 7.0 4.0; 6.0 5.0 3.0], [5.0 4.0 1.0; 2.0 3.0 8.0]]
    test_derivatives(QuadraticSpline; args = [u, t],
        name = "Quadratic Spline (Vector of Matrices)")
end
@testset "Cubic Spline" begin
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
test_derivatives(
CubicSpline; args = [u, t], name = "Cubic Spline Interpolation (Vector)")
u = [[1.0, 2.0, 9.0], [3.0, 7.0, 5.0], [5.0, 4.0, 1.0]]
test_derivatives(CubicSpline; args = [u, t],
name = "Cubic Spline Interpolation (Vector of Vectors)")
u = [[1.0 4.0 9.0; 5.0 9.0 2.0], [3.0 7.0 4.0; 6.0 5.0 3.0], [5.0 4.0 1.0; 2.0 3.0 8.0]]
test_derivatives(CubicSpline; args = [u, t],
name = "Cubic Spline Interpolation (Vector of Matrices)")
end
@testset "BSplines" begin
t = [0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
test_derivatives(BSplineInterpolation;
args = [u, t, 2,
:Uniform,
:Uniform],
name = "BSpline Interpolation (Uniform, Uniform)")
test_derivatives(BSplineInterpolation;
args = [u, t, 2,
:ArcLen,
:Average],
name = "BSpline Interpolation (Arclen, Average)")
test_derivatives(BSplineApprox;
args = [u, t,
3,
4,
:Uniform,
:Uniform],
name = "BSpline Approx (Uniform, Uniform)")
end
@testset "Cubic Hermite Spline" begin
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
test_derivatives(CubicHermiteSpline; args = [du, u, t],
name = "Cubic Hermite Spline")
A = CubicHermiteSpline(du, u, t; extrapolate = true)
@test derivative.(Ref(A), t) ≈ du
@test derivative(A, 100.0)≈0.0105409 rtol=1e-5
@test derivative(A, 300.0)≈-0.0806717 rtol=1e-5
end
@testset "Quintic Hermite Spline" begin
ddu = [0.0, -0.00033, 0.0051, -0.0067, 0.0029, 0.0]
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
test_derivatives(QuinticHermiteSpline; args = [ddu, du, u, t],
name = "Quintic Hermite Spline")
A = QuinticHermiteSpline(ddu, du, u, t; extrapolate = true)
@test derivative.(Ref(A), t) ≈ du
@test derivative.(Ref(A), t, 2) ≈ ddu
@test derivative(A, 100.0)≈0.0103916 rtol=1e-5
@test derivative(A, 300.0)≈0.0331361 rtol=1e-5
end
@testset "RegularizationSmooth" begin
npts = 50
xmin = 0.0
xspan = 3 / 2 * π
x = collect(range(xmin, xmin + xspan, length = npts))
rng = StableRNG(655)
x = x + xspan / npts * (rand(rng, npts) .- 0.5)
# select a subset randomly
idx = unique(rand(rng, collect(eachindex(x)), 20))
t = x[unique(idx)]
npts = length(t)
ut = sin.(t)
stdev = 1e-1 * maximum(ut)
u = ut + stdev * randn(rng, npts)
# data must be ordered if t̂ is not provided
idx = sortperm(t)
tₒ = t[idx]
uₒ = u[idx]
A = RegularizationSmooth(uₒ, tₒ; alg = :fixed)
test_derivatives(RegularizationSmooth; args = [uₒ, tₒ],
kwargs = [:alg => :fixed],
name = "RegularizationSmooth")
end
@testset "Curvefit" begin
rng = StableRNG(12345)
model(x, p) = @. p[1] / (1 + exp(x - p[2]))
t = range(-10, stop = 10, length = 40)
u = model(t, [1.0, 2.0]) + 0.01 * randn(rng, length(t))
p0 = [0.5, 0.5]
test_derivatives(Curvefit; args = [u, t, model, p0, LBFGS()], name = "Curvefit")
end
@testset "Symbolic derivatives" begin
u = [0.0, 1.5, 0.0]
t = [0.0, 0.5, 1.0]
A = QuadraticSpline(u, t)
@variables τ, ω(τ)
D = Symbolics.Differential(τ)
D2 = Symbolics.Differential(τ)^2
expr = A(ω)
@test isequal(Symbolics.derivative(expr, τ), D(ω) * DataInterpolations.derivative(A, ω))
derivexpr1 = expand_derivatives(substitute(D(A(ω)), Dict(ω => 0.5τ)))
derivexpr2 = expand_derivatives(substitute(D2(A(ω)), Dict(ω => 0.5τ)))
symfunc1 = Symbolics.build_function(derivexpr1, τ; expression = Val{false})
symfunc2 = Symbolics.build_function(derivexpr2, τ; expression = Val{false})
@test symfunc1(0.5) == 0.5 * 3
@test symfunc2(0.5) == 0.5 * 6
u = [0.0, 1.5, 0.0]
t = [0.0, 0.5, 1.0]
@variables τ
D = Symbolics.Differential(τ)
D2 = Symbolics.Differential(τ)^2
D3 = Symbolics.Differential(τ)^3
f = LinearInterpolation(u, t)
df = expand_derivatives(D(f(τ)))
df2 = expand_derivatives(D2(f(τ)))
df3 = expand_derivatives(D3(f(τ)))
symfunc1 = Symbolics.build_function(df, τ; expression = Val{false})
symfunc2 = Symbolics.build_function(df2, τ; expression = Val{false})
symfunc3 = Symbolics.build_function(df3, τ; expression = Val{false})
ts = 0.0:0.1:1.0
@test all(map(ti -> symfunc1(ti) == derivative(f, ti), ts))
@test all(map(ti -> symfunc2(ti) == derivative(f, ti, 2), ts))
@test_throws DataInterpolations.DerivativeNotFoundError symfunc3(ts[1])
end
@testset "Jacobian tests" begin
u = rand(5)
t = 0:4
interp = LinearInterpolation(u, t, extrapolate = true)
grad1 = ForwardDiff.derivative(interp, 2.4)
myvec = rand(20) .* 4.0
interp(myvec)
grad = ForwardDiff.jacobian(interp, myvec)
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 1548 | using DataInterpolations
using DataInterpolations: integral, derivative, invert_integral
using FiniteDifferences
function test_integral_inverses(method; args = [], kwargs = [])
A = method(args...; kwargs..., extrapolate = true)
@test hasfield(typeof(A), :I)
A_intinv = invert_integral(A)
@test A_intinv isa DataInterpolations.AbstractIntegralInverseInterpolation
ts = range(first(A.t), last(A.t), length = 100)
Is = integral.(Ref(A), ts)
ts_ = A_intinv.(Is)
@test ts ≈ ts_
for I in Is
cdiff = forward_fdm(5, 1; geom = true)(A_intinv, I)
adiff = derivative(A_intinv, I)
@test cdiff ≈ adiff
end
end
@testset "Linear Interpolation" begin
t = collect(1:5)
u = [1.0, 1.0, 2.0, 4.0, 3.0]
test_integral_inverses(LinearInterpolation; args = [u, t])
u = [1.0, -1.0, 2.0, 4.0, 3.0]
A = LinearInterpolation(u, t)
@test_throws DataInterpolations.IntegralNotInvertibleError invert_integral(A)
end
@testset "Constant Interpolation" begin
t = collect(1:5)
u = [1.0, 1.0, 2.0, 4.0, 3.0]
test_integral_inverses(ConstantInterpolation; args = [u, t])
test_integral_inverses(ConstantInterpolation; args = [u, t], kwargs = [:dir => :right])
u = [1.0, -1.0, 2.0, 4.0, 3.0]
A = ConstantInterpolation(u, t)
@test_throws DataInterpolations.IntegralNotInvertibleError invert_integral(A)
end
t = collect(1:5)
u = [1.0, 1.0, 2.0, 4.0, 3.0]
A = QuadraticInterpolation(u, t)
@test_throws DataInterpolations.IntegralInverseNotFoundError invert_integral(A)
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 7602 | using DataInterpolations, Test
using QuadGK
using DataInterpolations: integral
using Optim, ForwardDiff
using RegularizationTools
using StableRNGs
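# Helper: compare the analytic `integral` of an interpolation against QuadGK
# quadrature over whole, half, and running subintervals (including extrapolated
# ones), and check that integration outside the data errors without
# `extrapolate = true`.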
function test_integral(method; args = [], kwargs = [], name::String)
func = method(args...; kwargs..., extrapolate = true)
(; t) = func
t1 = minimum(t)
t2 = maximum(t)
@testset "$name" begin
# integral(A, t1, t2)
qint, err = quadgk(func, t1, t2; atol = 1e-12, rtol = 1e-12)
aint = integral(func, t1, t2)
@test isapprox(qint, aint, atol = 1e-6, rtol = 1e-8)
# integral(A, t)
qint, err = quadgk(func, t1, (t1 + t2) / 2; atol = 1e-12, rtol = 1e-12)
aint = integral(func, (t1 + t2) / 2)
@test isapprox(qint, aint, atol = 1e-6, rtol = 1e-8)
# integral(A, t1, t), integral(A, t, t2), integral(A, t, t)
ts = range(t1, t2; length = 100)
for t in ts
qint, err = quadgk(func, t1, t; atol = 1e-12, rtol = 1e-12)
aint1 = integral(func, t1, t)
@test isapprox(qint, aint1, atol = 1e-5, rtol = 1e-8)
aint2 = integral(func, t)
@test aint1 == aint2
qint, err = quadgk(func, t, t2; atol = 1e-12, rtol = 1e-12)
aint = integral(func, t, t2)
@test isapprox(qint, aint, atol = 1e-5, rtol = 1e-8)
aint = integral(func, t, t)
@test aint == 0.0
end
# integrals with extrapolation
qint, err = quadgk(func, t1 - 5.0, (t1 + t2) / 2; atol = 1e-12, rtol = 1e-12)
aint = integral(func, t1 - 5.0, (t1 + t2) / 2)
@test isapprox(qint, aint, atol = 1e-6, rtol = 1e-8)
qint, err = quadgk(func, (t1 + t2) / 2, t2 + 5.0; atol = 1e-12, rtol = 1e-12)
aint = integral(func, (t1 + t2) / 2, t2 + 5.0)
@test isapprox(qint, aint, atol = 1e-6, rtol = 1e-8)
end
func = method(args...; kwargs...)
@test_throws DataInterpolations.ExtrapolationError integral(func, t[1] - 1.0)
@test_throws DataInterpolations.ExtrapolationError integral(func, t[end] + 1.0)
@test_throws DataInterpolations.ExtrapolationError integral(func, t[1] - 1.0, t[2])
@test_throws DataInterpolations.ExtrapolationError integral(func, t[1], t[end] + 1.0)
end
@testset "LinearInterpolation" begin
u = 2.0collect(1:10)
t = 1.0collect(1:10)
test_integral(
LinearInterpolation; args = [u, t], name = "Linear Interpolation (Vector)")
u = round.(rand(100), digits = 5)
t = 1.0collect(1:100)
test_integral(LinearInterpolation; args = [u, t],
name = "Linear Interpolation (Vector) with random points")
end
@testset "QuadraticInterpolation" begin
u = [1.0, 4.0, 9.0, 16.0]
t = [1.0, 2.0, 3.0, 4.0]
test_integral(
QuadraticInterpolation; args = [u, t], name = "Quadratic Interpolation (Vector)")
u = [3.0, 0.0, 3.0, 0.0]
t = [1.0, 2.0, 3.0, 4.0]
test_integral(QuadraticInterpolation;
args = [u, t, :Backward],
name = "Quadratic Interpolation (Vector)")
u = round.(rand(100), digits = 5)
t = 1.0collect(1:100)
test_integral(QuadraticInterpolation; args = [u, t],
name = "Quadratic Interpolation (Vector) with random points")
end
@testset "LagrangeInterpolation" begin
u = [1.0, 4.0, 9.0]
t = [1.0, 2.0, 6.0]
A = LagrangeInterpolation(u, t)
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 1.0, 2.0)
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 5.0)
end
@testset "QuadraticSpline" begin
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
test_integral(QuadraticSpline; args = [u, t], name = "Quadratic Spline (Vector)")
u = round.(rand(100), digits = 5)
t = 1.0collect(1:100)
test_integral(
QuadraticSpline; args = [u, t], name = "Quadratic Spline (Vector) with random points")
end
@testset "CubicSpline" begin
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
test_integral(CubicSpline; args = [u, t], name = "Cubic Spline (Vector)")
u = round.(rand(100), digits = 5)
t = 1.0collect(1:100)
test_integral(
CubicSpline; args = [u, t], name = "Cubic Spline (Vector) with random points")
end
@testset "AkimaInterpolation" begin
u = [0.0, 2.0, 1.0, 3.0, 2.0, 6.0, 5.5, 5.5, 2.7, 5.1, 3.0]
t = collect(0.0:10.0)
test_integral(AkimaInterpolation; args = [u, t], name = "Akima Interpolation (Vector)")
u = round.(rand(100), digits = 5)
t = 1.0collect(1:100)
test_integral(
AkimaInterpolation; args = [u, t], name = "Akima Interpolation (Vector) with random points")
end
@testset "CubicHermiteSpline" begin
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
test_integral(CubicHermiteSpline; args = [du, u, t],
name = "Cubic Hermite Spline (Vector)")
u = round.(rand(100), digits = 5)
t = 1.0collect(1:100)
du = diff(u) ./ diff(t)
push!(du, 0)
test_integral(CubicHermiteSpline; args = [du, u, t],
name = "Cubic Hermite Spline (Vector) with random points")
end
@testset "QuinticHermiteSpline" begin
ddu = [0.0, -0.00033, 0.0051, -0.0067, 0.0029, 0.0]
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
test_integral(QuinticHermiteSpline; args = [ddu, du, u, t],
name = "Quintic Hermite Spline (Vector)")
u = round.(rand(100), digits = 5)
t = 1.0collect(1:100)
du = diff(u) ./ diff(t)
push!(du, 0)
ddu = diff(du) ./ diff(t)
push!(ddu, 0)
test_integral(QuinticHermiteSpline; args = [ddu, du, u, t],
name = "Quintic Hermite Spline (Vector) with random points")
end
@testset "RegularizationSmooth" begin
npts = 50
xmin = 0.0
xspan = 3 / 2 * π
x = collect(range(xmin, xmin + xspan, length = npts))
rng = StableRNG(655)
x = x + xspan / npts * (rand(rng, npts) .- 0.5)
# select a subset randomly
idx = unique(rand(rng, collect(eachindex(x)), 20))
t = x[unique(idx)]
npts = length(t)
ut = sin.(t)
stdev = 1e-1 * maximum(ut)
u = ut + stdev * randn(rng, npts)
# data must be ordered if t̂ is not provided
idx = sortperm(t)
tₒ = t[idx]
uₒ = u[idx]
test_integral(RegularizationSmooth;
args = [uₒ, tₒ],
kwargs = [:alg => :fixed],
name = "RegularizationSmooth")
end
@testset "Curvefit" begin
rng = StableRNG(12345)
model(x, p) = @. p[1] / (1 + exp(x - p[2]))
t = range(-10, stop = 10, length = 40)
u = model(t, [1.0, 2.0]) + 0.01 * randn(rng, length(t))
p0 = [0.5, 0.5]
A = Curvefit(u, t, model, p0, LBFGS())
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 0.0, 1.0)
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 5.0)
end
@testset "BSplineInterpolation" begin
t = [0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
A = BSplineInterpolation(u, t, 2, :Uniform, :Uniform)
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 1.0, 100.0)
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 50.0)
end
@testset "BSplineApprox" begin
t = [0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
A = BSplineApprox(u, t, 2, 4, :Uniform, :Uniform)
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 1.0, 100.0)
@test_throws DataInterpolations.IntegralNotFoundError integral(A, 50.0)
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 1665 | using DataInterpolations
using Symbolics
@testset "Interface" begin
u = 2.0collect(1:10)
t = 1.0collect(1:10)
A = LinearInterpolation(u, t)
for i in 1:10
@test u[i] == A.u[i]
end
for i in 1:10
@test t[i] == A.t[i]
end
end
@testset "Symbolics" begin
u = 2.0collect(1:10)
t = 1.0collect(1:10)
A = LinearInterpolation(u, t; extrapolate = true)
B = LinearInterpolation(u .^ 2, t; extrapolate = true)
@variables t x(t)
substitute(A(t), Dict(t => x))
t_val = 2.7
@test substitute(A(t), Dict(t => t_val)) == A(t_val)
@test substitute(B(A(t)), Dict(t => t_val)) == B(A(t_val))
@test substitute(A(B(A(t))), Dict(t => t_val)) == A(B(A(t_val)))
end
@testset "Type Inference" begin
u = 2.0collect(1:10)
t = 1.0collect(1:10)
methods = [
ConstantInterpolation, LinearInterpolation,
QuadraticInterpolation, LagrangeInterpolation,
QuadraticSpline, CubicSpline, AkimaInterpolation
]
@testset "$method" for method in methods
@inferred method(u, t)
end
@testset "BSplineInterpolation" begin
@inferred BSplineInterpolation(u, t, 3, :Uniform, :Uniform)
@inferred BSplineInterpolation(u, t, 3, :ArcLen, :Average)
end
@testset "BSplineApprox" begin
@inferred BSplineApprox(u, t, 3, 5, :Uniform, :Uniform)
@inferred BSplineApprox(u, t, 3, 5, :ArcLen, :Average)
end
du = ones(10)
ddu = zeros(10)
@testset "Hermite Splines" begin
@inferred CubicHermiteSpline(du, u, t)
@inferred PCHIPInterpolation(u, t)
@inferred QuinticHermiteSpline(ddu, du, u, t)
end
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 25722 | using DataInterpolations
using FindFirstFunctions: searchsortedfirstcorrelated
using StableRNGs
using Optim, ForwardDiff
using BenchmarkTools
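# Structural checks shared by all methods: subtype of AbstractInterpolation,
# standard fields, and _interpolate/_integral/_derivative implementations.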
function test_interpolation_type(T)
@test T <: DataInterpolations.AbstractInterpolation
@test hasfield(T, :u)
@test hasfield(T, :t)
@test hasfield(T, :extrapolate)
@test hasfield(T, :iguesser)
@test !isempty(methods(DataInterpolations._interpolate, (T, Any, Number)))
@test !isempty(methods(DataInterpolations._integral, (T, Any, Number)))
@test !isempty(methods(DataInterpolations._derivative, (T, Any, Number)))
end
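# Check that the cached index guess stays within two slots of the true
# searchsorted result while sweeping evaluation times across the data.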
function test_cached_index(A)
for t in range(first(A.t), last(A.t); length = 2 * length(A.t) - 1)
A(t)
idx = searchsortedfirstcorrelated(A.t, t, A.iguesser)
@test abs(A.iguesser.idx_prev[] -
searchsortedfirstcorrelated(A.t, t, A.iguesser)) <= 2
end
end
@testset "Linear Interpolation" begin
test_interpolation_type(LinearInterpolation)
for t in (1.0:10.0, 1.0collect(1:10))
u = 2.0collect(1:10)
#t = 1.0collect(1:10)
A = LinearInterpolation(u, t; extrapolate = true)
for (_t, _u) in zip(t, u)
@test A(_t) == _u
end
@test A(0) == 0.0
@test A(5.5) == 11.0
@test A(11) == 22
u = vcat(2.0collect(1:10)', 3.0collect(1:10)')
A = LinearInterpolation(u, t; extrapolate = true)
for (_t, _u) in zip(t, eachcol(u))
@test A(_t) == _u
end
@test A(0) == [0.0, 0.0]
@test A(5.5) == [11.0, 16.5]
@test A(11) == [22, 33]
x = 1:10
y = 2:4
u_ = x' .* y
u = [u_[:, i] for i in 1:size(u_, 2)]
A = LinearInterpolation(u, t; extrapolate = true)
@test A(0) == [0.0, 0.0, 0.0]
@test A(5.5) == [11.0, 16.5, 22.0]
@test A(11) == [22.0, 33.0, 44.0]
end
x = 1:10
y = 2:4
u_ = x' .* y
u = [u_[:, i:(i + 1)] for i in 1:2:10]
t = 1.0collect(2:2:10)
A = LinearInterpolation(u, t; extrapolate = true)
@test A(0) == [-2.0 0.0; -3.0 0.0; -4.0 0.0]
@test A(3) == [4.0 6.0; 6.0 9.0; 8.0 12.0]
@test A(5) == [8.0 10.0; 12.0 15.0; 16.0 20.0]
test_cached_index(A)
# with NaNs (#113)
u = [NaN, 1.0, 2.0, 3.0]
t = 1:4
A = LinearInterpolation(u, t; extrapolate = true)
@test isnan(A(1.0))
@test A(2.0) == 1.0
@test A(2.5) == 1.5
@test A(3.0) == 2.0
@test A(4.0) == 3.0
u = [0.0, NaN, 2.0, 3.0]
A = LinearInterpolation(u, t; extrapolate = true)
@test A(1.0) == 0.0
@test isnan(A(2.0))
@test isnan(A(2.5))
@test A(3.0) == 2.0
@test A(4.0) == 3.0
u = [0.0, 1.0, NaN, 3.0]
A = LinearInterpolation(u, t; extrapolate = true)
@test A(1.0) == 0.0
@test A(2.0) == 1.0
@test isnan(A(2.5))
@test isnan(A(3.0))
@test A(4.0) == 3.0
u = [0.0, 1.0, 2.0, NaN]
A = LinearInterpolation(u, t; extrapolate = true)
@test A(1.0) == 0.0
@test A(2.0) == 1.0
@test A(3.0) == 2.0
@test isnan(A(3.5))
@test isnan(A(4.0))
# Test type stability
u = Float32.(1:5)
t = Float32.(1:5)
A1 = LinearInterpolation(u, t; extrapolate = true)
u = 1:5
t = 1:5
A2 = LinearInterpolation(u, t; extrapolate = true)
u = [1 // i for i in 1:5]
t = (1:5)
A3 = LinearInterpolation(u, t; extrapolate = true)
u = [1 // i for i in 1:5]
t = [1 // (6 - i) for i in 1:5]
A4 = LinearInterpolation(u, t; extrapolate = true)
F32 = Float32(1)
F64 = Float64(1)
I32 = Int32(1)
I64 = Int64(1)
R32 = Int32(1) // Int32(1)
R64 = 1 // 1
for A in Any[A1, A2, A3, A4]
@test @inferred(A(F32)) === A(F32)
@test @inferred(A(F64)) === A(F64)
@test @inferred(A(I32)) === A(I32)
@test @inferred(A(I64)) === A(I64)
@test @inferred(A(R32)) === A(R32)
@test @inferred(A(R64)) === A(R64)
end
# Nan time value:
t = 0.0:3 # Floats
u = [0, -2, -1, -2]
A = LinearInterpolation(u, t; extrapolate = true)
dA = t -> ForwardDiff.derivative(A, t)
@test isnan(dA(NaN))
t = 0:3 # Integers
u = [0, -2, -1, -2]
A = LinearInterpolation(u, t; extrapolate = true)
dA = t -> ForwardDiff.derivative(A, t)
@test isnan(dA(NaN))
# Test derivative at point gives derivative to the right (except last is to left):
ts = t[begin:(end - 1)]
@test dA.(ts) == dA.(ts .+ 0.5)
# Test last derivative is to the left:
@test dA(last(t)) == dA(last(t) - 0.5)
# Test array-valued interpolation
u = collect.(2.0collect(1:10))
t = 1.0collect(1:10)
A = LinearInterpolation(u, t; extrapolate = true)
@test A(0) == fill(0.0)
@test A(5.5) == fill(11.0)
@test A(11) == fill(22)
# Test constant -Inf interpolation
u = [-Inf, -Inf]
t = [0.0, 1.0]
A = LinearInterpolation(u, t)
@test A(0.0) == -Inf
@test A(0.5) == -Inf
# Test extrapolation
u = 2.0collect(1:10)
t = 1.0collect(1:10)
A = LinearInterpolation(u, t; extrapolate = true)
@test A(-1.0) == -2.0
@test A(11.0) == 22.0
A = LinearInterpolation(u, t)
@test_throws DataInterpolations.ExtrapolationError A(-1.0)
@test_throws DataInterpolations.ExtrapolationError A(11.0)
@test_throws DataInterpolations.ExtrapolationError A([-1.0, 11.0])
end
@testset "Quadratic Interpolation" begin
test_interpolation_type(QuadraticInterpolation)
u = [1.0, 4.0, 9.0, 16.0]
t = [1.0, 2.0, 3.0, 4.0]
A = QuadraticInterpolation(u, t; extrapolate = true)
for (_t, _u) in zip(t, u)
@test A(_t) == _u
end
@test A(0.0) == 0.0
@test A(1.5) == 2.25
@test A(2.5) == 6.25
@test A(3.5) == 12.25
@test A(5.0) == 25
test_cached_index(A)
# backward-looking interpolation
u = [1.0, 4.0, 9.0, 16.0]
t = [1.0, 2.0, 3.0, 4.0]
A = QuadraticInterpolation(u, t, :Backward; extrapolate = true)
for (_t, _u) in zip(t, u)
@test A(_t) == _u
end
@test A(0.0) == 0.0
@test A(1.5) == 2.25
@test A(2.5) == 6.25
@test A(3.5) == 12.25
@test A(5.0) == 25
test_cached_index(A)
# Test both forward and backward-looking quadratic interpolation
u = [1.0, 4.5, 6.0, 2.0]
t = [1.0, 2.0, 3.0, 4.0]
A_f = QuadraticInterpolation(u, t, :Forward)
A_b = QuadraticInterpolation(u, t, :Backward)
for (_t, _u) in zip(t, u)
@test A_f(_t) == _u
@test A_b(_t) == _u
end
l₀, l₁, l₂ = 0.375, 0.75, -0.125
# In the first subinterval they're the same (no other option)
@test A_f(1.5) ≈ l₀ * u[1] + l₁ * u[2] + l₂ * u[3]
@test A_b(1.5) ≈ l₀ * u[1] + l₁ * u[2] + l₂ * u[3]
# In the second subinterval they should be different
@test A_f(2.5) ≈ l₀ * u[2] + l₁ * u[3] + l₂ * u[4]
@test A_b(2.5) ≈ l₂ * u[1] + l₁ * u[2] + l₀ * u[3]
# In the last subinterval they should be the same again
@test A_f(3.5) ≈ l₂ * u[2] + l₁ * u[3] + l₀ * u[4]
@test A_b(3.5) ≈ l₂ * u[2] + l₁ * u[3] + l₀ * u[4]
test_cached_index(A_f)
test_cached_index(A_b)
# Matrix interpolation test
u = [1.0 4.0 9.0 16.0; 1.0 4.0 9.0 16.0]
A = QuadraticInterpolation(u, t; extrapolate = true)
for (_t, _u) in zip(t, eachcol(u))
@test A(_t) == _u
end
@test A(0.0) == [0.0, 0.0]
@test A(1.5) == [2.25, 2.25]
@test A(2.5) == [6.25, 6.25]
@test A(3.5) == [12.25, 12.25]
@test A(5.0) == [25.0, 25.0]
u_ = [1.0, 4.0, 9.0, 16.0]' .* ones(5)
u = [u_[:, i] for i in 1:size(u_, 2)]
A = QuadraticInterpolation(u, t; extrapolate = true)
@test A(0) == zeros(5)
@test A(1.5) == 2.25 * ones(5)
@test A(2.5) == 6.25 * ones(5)
@test A(3.5) == 12.25 * ones(5)
@test A(5.0) == 25.0 * ones(5)
u = [repeat(u[i], 1, 3) for i in 1:4]
A = QuadraticInterpolation(u, t; extrapolate = true)
@test A(0) == zeros(5, 3)
@test A(1.5) == 2.25 * ones(5, 3)
@test A(2.5) == 6.25 * ones(5, 3)
@test A(3.5) == 12.25 * ones(5, 3)
@test A(5.0) == 25.0 * ones(5, 3)
# Test extrapolation
u = [1.0, 4.5, 6.0, 2.0]
t = [1.0, 2.0, 3.0, 4.0]
A = QuadraticInterpolation(u, t; extrapolate = true)
@test A(0.0) == -4.5
@test A(5.0) == -7.5
A = QuadraticInterpolation(u, t)
@test_throws DataInterpolations.ExtrapolationError A(0.0)
@test_throws DataInterpolations.ExtrapolationError A(5.0)
end
@testset "Lagrange Interpolation" begin
test_interpolation_type(LagrangeInterpolation)
u = [1.0, 4.0, 9.0]
t = [1.0, 2.0, 3.0]
A = LagrangeInterpolation(u, t)
@test A(2.0) == 4.0
@test A(1.5) == 2.25
u = [1.0, 8.0, 27.0, 64.0]
t = [1.0, 2.0, 3.0, 4.0]
A = LagrangeInterpolation(u, t)
@test A(2.0) == 8.0
@test A(1.5) ≈ 3.375
@test A(3.5) ≈ 42.875
u = [1.0 4.0 9.0 16.0; 1.0 4.0 9.0 16.0]
A = LagrangeInterpolation(u, t)
@test A(2.0) == [4.0, 4.0]
@test A(1.5) ≈ [2.25, 2.25]
@test A(3.5) ≈ [12.25, 12.25]
u_ = [1.0, 4.0, 9.0]' .* ones(4)
u = [u_[:, i] for i in 1:size(u_, 2)]
t = [1.0, 2.0, 3.0]
A = LagrangeInterpolation(u, t)
@test A(2.0) == 4.0 * ones(4)
@test A(1.5) == 2.25 * ones(4)
u_ = [1.0, 8.0, 27.0, 64.0]' .* ones(4)
u = [u_[:, i] for i in 1:size(u_, 2)]
t = [1.0, 2.0, 3.0, 4.0]
A = LagrangeInterpolation(u, t)
@test A(2.0) == 8.0 * ones(4)
@test A(1.5) ≈ 3.375 * ones(4)
@test A(3.5) ≈ 42.875 * ones(4)
u = [repeat(u[i], 1, 3) for i in 1:4]
A = LagrangeInterpolation(u, t)
@test A(2.0) == 8.0 * ones(4, 3)
@test A(1.5) ≈ 3.375 * ones(4, 3)
@test A(3.5) ≈ 42.875 * ones(4, 3)
# Test extrapolation
u = [1.0, 4.0, 9.0]
t = [1.0, 2.0, 3.0]
A = LagrangeInterpolation(u, t; extrapolate = true)
@test A(0.0) == 0.0
@test A(4.0) == 16.0
A = LagrangeInterpolation(u, t)
@test_throws DataInterpolations.ExtrapolationError A(-1.0)
@test_throws DataInterpolations.ExtrapolationError A(4.0)
end
@testset "Akima Interpolation" begin
test_interpolation_type(AkimaInterpolation)
u = [0.0, 2.0, 1.0, 3.0, 2.0, 6.0, 5.5, 5.5, 2.7, 5.1, 3.0]
t = collect(0.0:10.0)
A = AkimaInterpolation(u, t)
@test A(0.0) ≈ 0.0
@test A(0.5) ≈ 1.375
@test A(1.0) ≈ 2.0
@test A(1.5) ≈ 1.5
@test A(2.5) ≈ 1.953125
@test A(3.5) ≈ 2.484375
@test A(4.5) ≈ 4.1363636363636366866103344
@test A(5.1) ≈ 5.9803623910336236590978842
@test A(6.5) ≈ 5.5067291516462386624652936
@test A(7.2) ≈ 5.2031367459745245795943447
@test A(8.6) ≈ 4.1796554159017080820603951
@test A(9.9) ≈ 3.4110386597938129327189927
@test A(10.0) ≈ 3.0
test_cached_index(A)
# Test extrapolation
A = AkimaInterpolation(u, t; extrapolate = true)
@test A(-1.0) ≈ -5.0
@test A(11.0) ≈ -3.924742268041234
A = AkimaInterpolation(u, t)
@test_throws DataInterpolations.ExtrapolationError A(-1.0)
@test_throws DataInterpolations.ExtrapolationError A(11.0)
end
@testset "ConstantInterpolation" begin
test_interpolation_type(ConstantInterpolation)
t = [1.0, 2.0, 3.0, 4.0]
@testset "Vector case" for u in [[1.0, 2.0, 0.0, 1.0], ["B", "C", "A", "B"]]
A = ConstantInterpolation(u, t, dir = :right; extrapolate = true)
@test A(0.5) == u[1]
@test A(1.0) == u[1]
@test A(1.5) == u[2]
@test A(2.0) == u[2]
@test A(2.5) == u[3]
@test A(3.0) == u[3]
@test A(3.5) == u[1]
@test A(4.0) == u[1]
@test A(4.5) == u[1]
test_cached_index(A)
A = ConstantInterpolation(u, t; extrapolate = true) # dir=:left is default
@test A(0.5) == u[1]
@test A(1.0) == u[1]
@test A(1.5) == u[1]
@test A(2.0) == u[2]
@test A(2.5) == u[2]
@test A(3.0) == u[3]
@test A(3.5) == u[3]
@test A(4.0) == u[1]
@test A(4.5) == u[1]
test_cached_index(A)
end
@testset "Matrix case" for u in [
[1.0 2.0 0.0 1.0; 1.0 2.0 0.0 1.0],
["B" "C" "A" "B"; "B" "C" "A" "B"]
]
A = ConstantInterpolation(u, t, dir = :right; extrapolate = true)
@test A(0.5) == u[:, 1]
@test A(1.0) == u[:, 1]
@test A(1.5) == u[:, 2]
@test A(2.0) == u[:, 2]
@test A(2.5) == u[:, 3]
@test A(3.0) == u[:, 3]
@test A(3.5) == u[:, 1]
@test A(4.0) == u[:, 1]
@test A(4.5) == u[:, 1]
test_cached_index(A)
A = ConstantInterpolation(u, t; extrapolate = true) # dir=:left is default
@test A(0.5) == u[:, 1]
@test A(1.0) == u[:, 1]
@test A(1.5) == u[:, 1]
@test A(2.0) == u[:, 2]
@test A(2.5) == u[:, 2]
@test A(3.0) == u[:, 3]
@test A(3.5) == u[:, 3]
@test A(4.0) == u[:, 1]
@test A(4.5) == u[:, 1]
test_cached_index(A)
end
@testset "Vector of Vectors case" for u in [
[[1.0, 2.0], [0.0, 1.0], [1.0, 2.0], [0.0, 1.0]],
[["B", "C"], ["A", "B"], ["B", "C"], ["A", "B"]]]
A = ConstantInterpolation(u, t, dir = :right; extrapolate = true)
@test A(0.5) == u[1]
@test A(1.0) == u[1]
@test A(1.5) == u[2]
@test A(2.0) == u[2]
@test A(2.5) == u[3]
@test A(3.0) == u[3]
@test A(3.5) == u[4]
@test A(4.0) == u[4]
@test A(4.5) == u[4]
test_cached_index(A)
A = ConstantInterpolation(u, t; extrapolate = true) # dir=:left is default
@test A(0.5) == u[1]
@test A(1.0) == u[1]
@test A(1.5) == u[1]
@test A(2.0) == u[2]
@test A(2.5) == u[2]
@test A(3.0) == u[3]
@test A(3.5) == u[3]
@test A(4.0) == u[4]
@test A(4.5) == u[4]
test_cached_index(A)
end
@testset "Vector of Matrices case" for u in [
[[1.0 2.0; 1.0 2.0], [0.0 1.0; 0.0 1.0], [1.0 2.0; 1.0 2.0], [0.0 1.0; 0.0 1.0]],
[["B" "C"; "B" "C"], ["A" "B"; "A" "B"], ["B" "C"; "B" "C"], ["A" "B"; "A" "B"]]]
A = ConstantInterpolation(u, t, dir = :right; extrapolate = true)
@test A(0.5) == u[1]
@test A(1.0) == u[1]
@test A(1.5) == u[2]
@test A(2.0) == u[2]
@test A(2.5) == u[3]
@test A(3.0) == u[3]
@test A(3.5) == u[4]
@test A(4.0) == u[4]
@test A(4.5) == u[4]
test_cached_index(A)
A = ConstantInterpolation(u, t; extrapolate = true) # dir=:left is default
@test A(0.5) == u[1]
@test A(1.0) == u[1]
@test A(1.5) == u[1]
@test A(2.0) == u[2]
@test A(2.5) == u[2]
@test A(3.0) == u[3]
@test A(3.5) == u[3]
@test A(4.0) == u[4]
@test A(4.5) == u[4]
test_cached_index(A)
end
# Test extrapolation
u = [1.0, 2.0, 0.0, 1.0]
A = ConstantInterpolation(u, t; extrapolate = true)
@test A(-1.0) == 1.0
@test A(11.0) == 1.0
A = ConstantInterpolation(u, t)
@test_throws DataInterpolations.ExtrapolationError A(-1.0)
@test_throws DataInterpolations.ExtrapolationError A(11.0)
# Test extrapolation with infs with regularly spaced t
u = [1.67e7, 1.6867e7, 1.7034e7, 1.7201e7, 1.7368e7]
t = [0.0, 0.1, 0.2, 0.3, 0.4]
A = ConstantInterpolation(u, t; extrapolate = true)
@test A(Inf) == last(u)
@test A(-Inf) == first(u)
end
@testset "QuadraticSpline Interpolation" begin
test_interpolation_type(QuadraticSpline)
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
A = QuadraticSpline(u, t; extrapolate = true)
# Solution
P₁ = x -> (x + 1)^2 # for x ∈ [-1, 0]
P₂ = x -> 2 * x + 1 # for x ∈ [ 0, 1]
for (_t, _u) in zip(t, u)
@test A(_t) == _u
end
@test A(-2.0) == P₁(-2.0)
@test A(-0.5) == P₁(-0.5)
@test A(0.7) == P₂(0.7)
@test A(2.0) == P₂(2.0)
test_cached_index(A)
u_ = [0.0, 1.0, 3.0]' .* ones(4)
u = [u_[:, i] for i in 1:size(u_, 2)]
A = QuadraticSpline(u, t; extrapolate = true)
@test A(-2.0) == P₁(-2.0) * ones(4)
@test A(-0.5) == P₁(-0.5) * ones(4)
@test A(0.7) == P₂(0.7) * ones(4)
@test A(2.0) == P₂(2.0) * ones(4)
u = [repeat(u[i], 1, 3) for i in 1:3]
A = QuadraticSpline(u, t; extrapolate = true)
@test A(-2.0) == P₁(-2.0) * ones(4, 3)
@test A(-0.5) == P₁(-0.5) * ones(4, 3)
@test A(0.7) == P₂(0.7) * ones(4, 3)
@test A(2.0) == P₂(2.0) * ones(4, 3)
# Test extrapolation
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
A = QuadraticSpline(u, t; extrapolate = true)
@test A(-2.0) == 1.0
@test A(2.0) == 5.0
A = QuadraticSpline(u, t)
@test_throws DataInterpolations.ExtrapolationError A(-2.0)
@test_throws DataInterpolations.ExtrapolationError A(2.0)
end
@testset "CubicSpline Interpolation" begin
test_interpolation_type(CubicSpline)
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
A = CubicSpline(u, t; extrapolate = true)
test_cached_index(A)
# Solution
P₁ = x -> 1 + 1.5x + 0.75 * x^2 + 0.25 * x^3 # for x ∈ [-1.0, 0.0]
P₂ = x -> 1 + 1.5x + 0.75 * x^2 - 0.25 * x^3 # for x ∈ [0.0, 1.0]
for (_t, _u) in zip(t, u)
@test A(_t) == _u
end
for x in (-1.5, -0.5, -0.7)
@test A(x) ≈ P₁(x)
end
for x in (0.3, 0.5, 1.5)
@test A(x) ≈ P₂(x)
end
u_ = [0.0, 1.0, 3.0]' .* ones(4)
u = [u_[:, i] for i in 1:size(u_, 2)]
A = CubicSpline(u, t; extrapolate = true)
for x in (-1.5, -0.5, -0.7)
@test A(x) ≈ P₁(x) * ones(4)
end
for x in (0.3, 0.5, 1.5)
@test A(x) ≈ P₂(x) * ones(4)
end
u = [repeat(u[i], 1, 3) for i in 1:3]
A = CubicSpline(u, t; extrapolate = true)
for x in (-1.5, -0.5, -0.7)
@test A(x) ≈ P₁(x) * ones(4, 3)
end
for x in (0.3, 0.5, 1.5)
@test A(x) ≈ P₂(x) * ones(4, 3)
end
# Test extrapolation
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
A = CubicSpline(u, t; extrapolate = true)
@test A(-2.0) ≈ -1.0
@test A(2.0) ≈ 5.0
A = CubicSpline(u, t)
@test_throws DataInterpolations.ExtrapolationError A(-2.0)
@test_throws DataInterpolations.ExtrapolationError A(2.0)
end
@testset "BSplines" begin
# BSpline Interpolation and Approximation
t = [0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
@testset "BSplineInterpolation" begin
test_interpolation_type(BSplineInterpolation)
A = BSplineInterpolation(u, t, 2, :Uniform, :Uniform)
@test [A(25.0), A(80.0)] == [13.454197730061425, 10.305633616059845]
@test [A(190.0), A(225.0)] == [14.07428439395079, 11.057784141519251]
@test [A(t[1]), A(t[end])] == [u[1], u[end]]
test_cached_index(A)
# Test extrapolation
A = BSplineInterpolation(u, t, 2, :Uniform, :Uniform; extrapolate = true)
@test A(-1.0) == u[1]
@test A(300.0) == u[end]
A = BSplineInterpolation(u, t, 2, :Uniform, :Uniform)
@test_throws DataInterpolations.ExtrapolationError A(-1.0)
@test_throws DataInterpolations.ExtrapolationError A(300.0)
A = BSplineInterpolation(u, t, 2, :ArcLen, :Average)
@test [A(25.0), A(80.0)] == [13.363814458968486, 10.685201117692609]
@test [A(190.0), A(225.0)] == [13.437481084762863, 11.367034741256463]
@test [A(t[1]), A(t[end])] == [u[1], u[end]]
@test_throws ErrorException("BSplineInterpolation needs at least d + 1, i.e. 4 points.") BSplineInterpolation(
u[1:3], t[1:3], 3, :Uniform, :Uniform)
@test_throws ErrorException("BSplineInterpolation needs at least d + 1, i.e. 5 points.") BSplineInterpolation(
u[1:4], t[1:4], 4, :ArcLen, :Average)
@test_nowarn BSplineInterpolation(u[1:3], t[1:3], 2, :Uniform, :Uniform)
# Test extrapolation
A = BSplineInterpolation(u, t, 2, :ArcLen, :Average; extrapolate = true)
@test A(-1.0) == u[1]
@test A(300.0) == u[end]
A = BSplineInterpolation(u, t, 2, :ArcLen, :Average)
@test_throws DataInterpolations.ExtrapolationError A(-1.0)
@test_throws DataInterpolations.ExtrapolationError A(300.0)
end
@testset "BSplineApprox" begin
test_interpolation_type(BSplineApprox)
A = BSplineApprox(u, t, 2, 4, :Uniform, :Uniform)
@test [A(25.0), A(80.0)] ≈ [12.979802931218234, 10.914310609953178]
@test [A(190.0), A(225.0)] ≈ [13.851245975109263, 12.963685868886575]
@test [A(t[1]), A(t[end])] ≈ [u[1], u[end]]
test_cached_index(A)
@test_throws ErrorException("BSplineApprox needs at least d + 1, i.e. 3 control points.") BSplineApprox(
u, t, 2, 2, :Uniform, :Uniform)
@test_throws ErrorException("BSplineApprox needs at least d + 1, i.e. 4 control points.") BSplineApprox(
u, t, 3, 3, :ArcLen, :Average)
@test_nowarn BSplineApprox(u, t, 2, 3, :Uniform, :Uniform)
# Test extrapolation
A = BSplineApprox(u, t, 2, 4, :Uniform, :Uniform; extrapolate = true)
@test A(-1.0) == u[1]
@test A(300.0) == u[end]
A = BSplineApprox(u, t, 2, 4, :Uniform, :Uniform)
@test_throws DataInterpolations.ExtrapolationError A(-1.0)
@test_throws DataInterpolations.ExtrapolationError A(300.0)
end
end
@testset "Cubic Hermite Spline" begin
test_interpolation_type(CubicHermiteSpline)
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
A = CubicHermiteSpline(du, u, t; extrapolate = true)
@test A.(t) ≈ u
@test A(100.0)≈10.106770 rtol=1e-5
@test A(300.0)≈9.901542 rtol=1e-5
test_cached_index(A)
push!(u, 1.0)
@test_throws AssertionError CubicHermiteSpline(du, u, t)
end
@testset "PCHIPInterpolation" begin
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 250.0]
A = PCHIPInterpolation(u, t)
@test A isa CubicHermiteSpline
ts = 0.0:0.1:250.0
us = A(ts)
@test all(minimum(u) .<= us)
@test all(maximum(u) .>= us)
@test all(A.du[3:4] .== 0.0)
end
@testset "Quintic Hermite Spline" begin
test_interpolation_type(QuinticHermiteSpline)
ddu = [0.0, -0.00033, 0.0051, -0.0067, 0.0029, 0.0]
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
A = QuinticHermiteSpline(ddu, du, u, t; extrapolate = true)
@test A.(t) ≈ u
@test A(100.0)≈10.107996 rtol=1e-5
@test A(300.0)≈11.364162 rtol=1e-5
test_cached_index(A)
push!(u, 1.0)
@test_throws AssertionError QuinticHermiteSpline(ddu, du, u, t)
end
@testset "Curvefit" begin
# Curvefit Interpolation
rng = StableRNG(12345)
model(x, p) = @. p[1] / (1 + exp(x - p[2]))
t = range(-10, stop = 10, length = 40)
u = model(t, [1.0, 2.0]) + 0.01 * randn(rng, length(t))
p0 = [0.5, 0.5]
A = Curvefit(u, t, model, p0, LBFGS())
ts = [-7.0, -2.0, 0.0, 2.5, 5.0]
vs = [
1.0013468217936277,
0.9836755196317837,
0.8833959853995836,
0.3810348276782708,
0.048062978598861855
]
us = A.(ts)
@test vs ≈ us
# Test extrapolation
A = Curvefit(u, t, model, p0, LBFGS(); extrapolate = true)
@test A(15.0) == model(15.0, A.pmin)
A = Curvefit(u, t, model, p0, LBFGS())
@test_throws DataInterpolations.ExtrapolationError A(15.0)
end
@testset "Type of vector returned" begin
# Issue https://github.com/SciML/DataInterpolations.jl/issues/253
t1 = Float32[0.1, 0.2, 0.3, 0.4, 0.5]
t2 = Float64[0.1, 0.2, 0.3, 0.4, 0.5]
interps_and_types = [
(LinearInterpolation(t1, t1), Float32),
(LinearInterpolation(t1, t2), Float32),
(LinearInterpolation(t2, t1), Float64),
(LinearInterpolation(t2, t2), Float64)
]
for i in eachindex(interps_and_types)
@test eltype(interps_and_types[i][1](t1)) == interps_and_types[i][2]
end
end
@testset "Plugging vector timepoints" begin
# Issue https://github.com/SciML/DataInterpolations.jl/issues/267
t = Float64[1.0, 2.0, 3.0, 4.0, 5.0]
@testset "utype - Vectors" begin
interp = LinearInterpolation(rand(5), t)
@test interp(t) isa Vector{Float64}
end
@testset "utype - Vector of Vectors" begin
interp = LinearInterpolation([rand(2) for _ in 1:5], t)
@test interp(t) isa Vector{Vector{Float64}}
end
@testset "utype - Matrix" begin
interp = LinearInterpolation(rand(2, 5), t)
@test interp(t) isa Matrix{Float64}
end
end
# missing values handling tests
u = [1.0, 4.0, 9.0, 16.0, 25.0, missing, missing]
t = [1.0, 2.0, 3.0, 4.0, missing, 6.0, missing]
A = QuadraticInterpolation(u, t)
@test A(2.0) == 4.0
@test A(1.5) == 2.25
@test A(3.5) == 12.25
@test A(2.5) == 6.25
u = copy(hcat(u, u)')
A = QuadraticInterpolation(u, t)
@test A(2.0) == [4.0, 4.0]
@test A(1.5) == [2.25, 2.25]
@test A(3.5) == [12.25, 12.25]
@test A(2.5) == [6.25, 6.25]
# ForwardDiff compatibility with respect to coefficients
function square(INTERPOLATION_TYPE, c) # elaborate way to write f(x) = x²
xs = -4.0:2.0:4.0
ys = [c^2 + x for x in xs]
itp = INTERPOLATION_TYPE(ys, xs)
return itp(0.0)
end
# generate versions of this function with different interpolators
f_quadratic_spline = c -> square(QuadraticSpline, c)
f_cubic_spline = c -> square(CubicSpline, c)
@test ForwardDiff.derivative(f_quadratic_spline, 2.0) ≈ 4.0
@test ForwardDiff.derivative(f_quadratic_spline, 4.0) ≈ 8.0
@test ForwardDiff.derivative(f_cubic_spline, 2.0) ≈ 4.0
@test ForwardDiff.derivative(f_cubic_spline, 4.0) ≈ 8.0
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 1087 | using DataInterpolations
t1 = [1.0, 2.0, 3.0]
u1 = [0.0, 1.0, 0.0]
t2 = [4.0, 5.0, 6.0]
u2 = [1.0, 2.0, 1.0]
ts_append = 1.0:0.5:6.0
ts_push = 1.0:0.5:4.0
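# For each method: extend an interpolation in place via append!/push! and check
# it matches one constructed from the full data, including cached parameters
# and integrals.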
@testset "$method" for method in [
LinearInterpolation, QuadraticInterpolation, ConstantInterpolation]
func1 = method(copy(u1), copy(t1); cache_parameters = true)
append!(func1, u2, t2)
func2 = method(vcat(u1, u2), vcat(t1, t2); cache_parameters = true)
@test func1.u == func2.u
@test func1.t == func2.t
for name in propertynames(func1.p)
@test getfield(func1.p, name) == getfield(func2.p, name)
end
@test func1(ts_append) == func2(ts_append)
@test func1.I == func2.I
func1 = method(copy(u1), copy(t1); cache_parameters = true)
push!(func1, 1.0, 4.0)
func2 = method(vcat(u1, 1.0), vcat(t1, 4.0); cache_parameters = true)
@test func1.u == func2.u
@test func1.t == func2.t
for name in propertynames(func1.p)
@test getfield(func1.p, name) == getfield(func2.p, name)
end
@test func1(ts_push) == func2(ts_push)
@test func1.I == func2.I
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 1640 | using DataInterpolations
@testset "Linear Interpolation" begin
u = [1.0, 5.0, 3.0, 4.0, 4.0]
t = collect(1:5)
A = LinearInterpolation(u, t; cache_parameters = true)
@test A.p.slope ≈ [4.0, -2.0, 1.0, 0.0]
end
@testset "Quadratic Interpolation" begin
u = [1.0, 5.0, 3.0, 4.0, 4.0]
t = collect(1:5)
A = QuadraticInterpolation(u, t; cache_parameters = true)
@test A.p.l₀ ≈ [0.5, 2.5, 1.5]
@test A.p.l₁ ≈ [-5.0, -3.0, -4.0]
@test A.p.l₂ ≈ [1.5, 2.0, 2.0]
end
@testset "Quadratic Spline" begin
u = [1.0, 5.0, 3.0, 4.0, 4.0]
t = collect(1:5)
A = QuadraticSpline(u, t; cache_parameters = true)
@test A.p.σ ≈ [4.0, -10.0, 13.0, -14.0]
end
@testset "Cubic Spline" begin
u = [1, 5, 3, 4, 4]
t = collect(1:5)
A = CubicSpline(u, t; cache_parameters = true)
@test A.p.c₁ ≈ [6.839285714285714, 1.642857142857143, 4.589285714285714, 4.0]
@test A.p.c₂ ≈ [1.0, 6.839285714285714, 1.642857142857143, 4.589285714285714]
end
@testset "Cubic Hermite Spline" begin
du = [5.0, 3.0, 6.0, 8.0, 1.0]
u = [1.0, 5.0, 3.0, 4.0, 4.0]
t = collect(1:5)
A = CubicHermiteSpline(du, u, t; cache_parameters = true)
@test A.p.c₁ ≈ [-1.0, -5.0, -5.0, -8.0]
@test A.p.c₂ ≈ [0.0, 13.0, 12.0, 9.0]
end
@testset "Quintic Hermite Spline" begin
ddu = [0.0, 3.0, 6.0, 4.0, 5.0]
du = [5.0, 3.0, 6.0, 8.0, 1.0]
u = [1.0, 5.0, 3.0, 4.0, 4.0]
t = collect(1:5)
A = QuinticHermiteSpline(ddu, du, u, t; cache_parameters = true)
@test A.p.c₁ ≈ [-1.0, -6.5, -8.0, -10.0]
@test A.p.c₂ ≈ [1.0, 19.5, 20.0, 19.0]
@test A.p.c₃ ≈ [1.5, -37.5, -37.0, -26.5]
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 460 | using DataInterpolations, Aqua
@testset "Aqua" begin
Aqua.find_persistent_tasks_deps(DataInterpolations)
Aqua.test_ambiguities(DataInterpolations, recursive = false)
Aqua.test_deps_compat(DataInterpolations)
Aqua.test_piracies(DataInterpolations)
Aqua.test_project_extras(DataInterpolations)
Aqua.test_stale_deps(DataInterpolations)
Aqua.test_unbound_args(DataInterpolations)
Aqua.test_undefined_exports(DataInterpolations)
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 7604 | using DataInterpolations
import StableRNGs: StableRNG
using RegularizationTools
# create scattered data
npts = 50
xmin = 0.0
xspan = 3 / 2 * π
x = collect(range(xmin, xmin + xspan, length = npts))
rng = StableRNG(655)
x = x + xspan / npts * (rand(rng, npts) .- 0.5)
# select a subset randomly
idx = unique(rand(rng, collect(eachindex(x)), 20))
t = x[unique(idx)]
npts = length(t)
ut = sin.(t)
stdev = 1e-1 * maximum(ut)
u = ut + stdev * randn(rng, npts)
# data must be ordered if t̂ is not provided
idx = sortperm(t)
tₒ = t[idx]
uₒ = u[idx]
tolerance = 1e-3
@testset "Direct smoothing" begin
# fixed with default λ = 1.0
A = RegularizationSmooth(uₒ, tₒ; alg = :fixed)
ans = [0.6456173647252937 0.663974701324226 0.7631218523665086 0.778654700697601 0.7489958320589535 0.7319087707475104 0.6807082599508811 0.6372557895089508 0.5832859790765743 0.5021274805916013 0.3065928203396211 0.1353332321156384 -0.3260000640060584 -0.6557906092739154 -0.9204882447932498]'
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(tₒ), ans, rtol = tolerance)
# non-default d and λ
A = RegularizationSmooth(uₒ, tₒ, 4; λ = 1e-2, alg = :fixed)
ans = [0.19865190868740357 0.2885349151737291 0.6756699442978945 0.9165887141895426 0.9936113717653254 1.0042825002191034 0.9768118192829827 0.9184595331808411 0.8214983284892922 0.6538356458824783 0.28295521578898 0.018060767871253963 -0.5301723647977373 -0.8349855890541111 -1.1085048455468356]'
@test isapprox(A.û, ans, rtol = tolerance)
# GCV (default) to determine λ
A = RegularizationSmooth(uₒ, tₒ)
@test isapprox(A.λ, 0.12788440382063268, rtol = tolerance)
ans = [0.21974931848164914 0.2973284508009968 0.6908546278415386 0.9300465474303226 0.9741453042418977 0.9767572556868123 0.9432951659303452 0.8889834700087442 0.804842790047182 0.6603217445567791 0.30341652659101737 0.05924456463634589 -0.5239939779242144 -0.8421768233191822 -1.107517099580091]'
@test isapprox(A.û, ans, rtol = tolerance)
# L-curve to determine λ
A = RegularizationSmooth(uₒ, tₒ; alg = :L_curve)
@test isapprox(A.λ, 0.9536286111306728, rtol = tolerance)
ans = [
0.6261657429321232,
0.6470204841904836,
0.7599270022828396,
0.7835725197598925,
0.7567105872094757,
0.7404815750685363,
0.6906841961987067,
0.647628105931872,
0.5937273796308717,
0.512087780658067,
0.3136272387739983,
0.1392761732695201,
-0.3312498167413961,
-0.6673268474631847,
-0.9370342562716745
]
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(tₒ), ans, rtol = tolerance)
end
@testset "Smoothing with weights" begin
# midpoint rule integration
A = RegularizationSmooth(uₒ, tₒ, nothing, :midpoint)
@test isapprox(A.λ, 0.10787235405005478, rtol = tolerance)
ans = [
0.3068904607028622,
0.3637388879266782,
0.6654462500501238,
0.9056440536733456,
0.9738150157541853,
0.9821315604309402,
0.9502526946446999,
0.8953643918063283,
0.8024431779821514,
0.6415812230114304,
0.2834706832220367,
0.05281575111822609,
-0.5333542714497277,
-0.8406745098604134,
-1.0983391396173634
]
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(tₒ), ans, rtol = tolerance)
# arbitrary weights for wls (and fixed λ, GCV not working well for some of these)
A = RegularizationSmooth(uₒ, tₒ, nothing, collect(1:npts); λ = 1e-1, alg = :fixed)
ans = [
0.24640196218427968,
0.3212059975226125,
0.6557626475144205,
0.9222911426465459,
0.9913331910731215,
1.0072241662103494,
0.9757899817730779,
0.935880516370941,
0.8381074902073471,
0.6475589703422522,
0.2094170714475404,
0.09102085384961625,
-0.5640882848240228,
-0.810519277110118,
-1.1159124134900906
]
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(tₒ), ans, rtol = tolerance)
# arbitrary weights for wls and wr
nhalf = Int(floor(npts / 2))
wls = vcat(ones(nhalf), 10 * ones(npts - nhalf))
wr = collect(1:(npts - 2))
A = RegularizationSmooth(uₒ, tₒ, nothing, wls, wr; λ = 1e-1, alg = :fixed)
ans = [
0.21878709713242372,
0.3118480645325099,
0.7669822464946172,
1.0232343854914931,
1.0526513115274412,
1.0469579284244412,
0.9962426294084775,
0.9254407155702626,
0.8204764044515936,
0.6514510142804217,
0.27796896299068763,
0.04756024028728636,
-0.5301034620974782,
-0.8408107101140526,
-1.1058428573417736
]
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(tₒ), ans, rtol = tolerance)
end
@testset "Smoothing with t̂ provided" begin
N̂ = 20
t̂ = collect(range(xmin, xmin + xspan, length = N̂))
# with t̂, no weights
A = RegularizationSmooth(u, t, t̂)
@test isapprox(A.λ, 0.138273889585313, rtol = tolerance)
ans = [0.21626377852882872 0.39235926952322575 0.5573848799950002 0.7072474496656729 0.8361906119247042 0.9313473799797176 0.9809844353757837 0.9750833208625507 0.9096038940899813 0.7816929736202427 0.6052694276527628 0.4015903497629387 0.1913719025253403 -0.01979786871512895 -0.23400354001942947 -0.44481229967011127 -0.6457913359497256 -0.8405146928672158 -1.0367229293434395 -1.2334090099343238]'
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(t̂), ans, rtol = tolerance)
# t̂ and wls
A = RegularizationSmooth(u, t, t̂, collect(1:npts))
@test isapprox(A.λ, 0.26746430253489195, rtol = tolerance)
ans = [0.3118247878815087 0.44275860852897864 0.5705834985506882 0.6979119448253899 0.8234189540704866 0.9289458273102476 0.9970803470992273 1.0071205506077525 0.9443157518324818 0.7954860908242515 0.5847385548859145 0.34813493129868633 0.1237494751337505 -0.0823517516424196 -0.28265170846635246 -0.4760833187699964 -0.6615795059853024 -0.844779821396189 -1.0341162283806349 -1.225270266213379]'
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(t̂), ans, rtol = tolerance)
# t̂, wls, and wr
nhalf = Int(floor(npts / 2))
wls = vcat(ones(nhalf), 10 * ones(npts - nhalf))
wr = collect(1:(N̂ - 2))
A = RegularizationSmooth(u, t, t̂, wls, wr)
@test isapprox(A.λ, 0.04555080890920959, rtol = tolerance)
ans = [
0.2799800686914433,
0.4627548444527547,
0.5611922868318674,
0.6647761469309206,
0.7910803348948329,
0.9096001134420562,
1.0067644979677808,
1.0541868144785513,
0.9889720466386331,
0.8088479651943575,
0.5677592185997403,
0.31309698432269184,
0.08587106716465115,
-0.11476265128730469,
-0.30749376694236485,
-0.4942769809562725,
-0.676806367664006,
-0.8587832527770329,
-1.0443430843364814,
-1.2309001260104093
]
@test isapprox(A.û, ans, rtol = tolerance)
@test isapprox(A.(t̂), ans, rtol = tolerance)
end
@testset "Extrapolation" begin
A = RegularizationSmooth(uₒ, tₒ; alg = :fixed, extrapolate = true)
@test A(10.0) == A.Aitp(10.0)
A = RegularizationSmooth(uₒ, tₒ; alg = :fixed)
@test_throws DataInterpolations.ExtrapolationError A(10.0)
end
@testset "Type inference" begin
A = RegularizationSmooth(uₒ, tₒ; alg = :fixed)
@test @inferred(A(1.0)) == A(1.0)
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 674 | using SafeTestsets
@safetestset "Quality Assurance" include("qa.jl")
@safetestset "Interface" include("interface.jl")
@safetestset "Parameter Tests" include("parameter_tests.jl")
@safetestset "Interpolation Tests" include("interpolation_tests.jl")
@safetestset "Derivative Tests" include("derivative_tests.jl")
@safetestset "Integral Tests" include("integral_tests.jl")
@safetestset "Integral Inverse Tests" include("integral_inverse_tests.jl")
@safetestset "Online Tests" include("online_tests.jl")
@safetestset "Regularization Smoothing" include("regularization.jl")
@safetestset "Show methods" include("show.jl")
@safetestset "Zygote support" include("zygote_tests.jl")
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 2981 | using DataInterpolations
using Optim, StableRNGs
using RegularizationTools
t = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [1.0, 2.0, 3.0, 4.0, 5.0]
@testset "Generic Cases" begin
function test_show_line(A)
@testset "$(nameof(typeof(A)))" begin
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"$(nameof(typeof(A))) with $(length(A.t)) points\n")
end
end
methods = [
LinearInterpolation(x, t),
AkimaInterpolation(x, t),
QuadraticSpline(x, t),
CubicSpline(x, t)
]
test_show_line.(methods)
end
@testset "Specific Cases" begin
@testset "QuadraticInterpolation" begin
A = QuadraticInterpolation(x, t)
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"QuadraticInterpolation with 5 points, Forward mode\n")
end
@testset "LagrangeInterpolation" begin
A = LagrangeInterpolation(x, t)
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"LagrangeInterpolation with 5 points, with order 4\n")
end
@testset "ConstantInterpolation" begin
A = ConstantInterpolation(x, t)
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"ConstantInterpolation with 5 points, in left direction\n")
end
@testset "BSplineInterpolation" begin
A = BSplineInterpolation(x, t, 3, :Uniform, :Uniform)
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"BSplineInterpolation with 5 points, with degree 3\n")
end
@testset "BSplineApprox" begin
A = BSplineApprox(x, t, 2, 4, :Uniform, :Uniform)
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"BSplineApprox with 5 points, with degree 2, number of control points 4\n")
end
end
@testset "CurveFit" begin
rng = StableRNG(12345)
model(x, p) = @. p[1] / (1 + exp(x - p[2]))
t = range(-10, stop = 10, length = 40)
u = model(t, [1.0, 2.0]) + 0.01 * randn(rng, length(t))
p0 = [0.5, 0.5]
A = Curvefit(u, t, model, p0, LBFGS())
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"Curvefit with 40 points, using LBFGS\n")
end
@testset "RegularizationSmooth" begin
npts = 50
xmin = 0.0
xspan = 3 / 2 * π
x = collect(range(xmin, xmin + xspan, length = npts))
rng = StableRNG(655)
x = x + xspan / npts * (rand(rng, npts) .- 0.5)
# select a subset randomly
idx = unique(rand(rng, collect(eachindex(x)), 20))
t = x[unique(idx)]
npts = length(t)
ut = sin.(t)
stdev = 1e-1 * maximum(ut)
u = ut + stdev * randn(rng, npts)
# data must be ordered if t̂ is not provided
idx = sortperm(t)
tₒ = t[idx]
uₒ = u[idx]
A = RegularizationSmooth(uₒ, tₒ; alg = :fixed)
@test startswith(sprint(io -> show(io, MIME"text/plain"(), A)),
"RegularizationSmooth with 15 points, with regularization coefficient 1.0\n")
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | code | 4332 | using DataInterpolations
using ForwardDiff
using Zygote
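# Helper: check Zygote derivatives of an interpolation. Gradients w.r.t. the
# evaluation point are compared against the analytic `derivative`; gradients
# w.r.t. the data `u` are computed alongside ForwardDiff for supported methods.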
function test_zygote(method, u, t; args = [], args_after = [], kwargs = [], name::String)
func = method(args..., u, t, args_after...; kwargs..., extrapolate = true)
trange = collect(range(minimum(t) - 5.0, maximum(t) + 5.0, step = 0.1))
trange_exclude = filter(x -> !in(x, t), trange)
@testset "$name, derivatives w.r.t. input" begin
for _t in trange_exclude
adiff = DataInterpolations.derivative(func, _t)
zdiff = u isa AbstractVector{<:Real} ? only(Zygote.gradient(func, _t)) :
only(Zygote.jacobian(func, _t))
isnothing(zdiff) && (zdiff = 0.0)
@test adiff ≈ zdiff
end
end
if method ∉ [LagrangeInterpolation, BSplineInterpolation, BSplineApprox]
@testset "$name, derivatives w.r.t. u" begin
function f(u)
A = method(args..., u, t, args_after...; kwargs..., extrapolate = true)
out = if u isa AbstractVector{<:Real}
zero(eltype(u))
elseif u isa AbstractMatrix
zero(u[:, 1])
else
zero(u[1])
end
for _t in trange
out += A(_t)
end
out
end
zgrad, fgrad = if u isa AbstractVector{<:Real}
Zygote.gradient(f, u), ForwardDiff.gradient(f, u)
elseif u isa AbstractMatrix
Zygote.jacobian(f, u), ForwardDiff.jacobian(f, u)
else
Zygote.jacobian(f, u), ForwardDiff.jacobian(f, hcat(u...))
end
end
end
end
@testset "LinearInterpolation" begin
u = vcat(collect(1.0:5.0), 2 * collect(6.0:10.0))
t = collect(1.0:10.0)
test_zygote(
LinearInterpolation, u, t; name = "Linear Interpolation")
end
@testset "Quadratic Interpolation" begin
u = [1.0, 4.0, 9.0, 16.0]
t = [1.0, 2.0, 3.0, 4.0]
test_zygote(QuadraticInterpolation, u, t; name = "Quadratic Interpolation")
end
@testset "Constant Interpolation" begin
u = [0.0, 2.0, 1.0, 3.0, 2.0, 6.0, 5.5, 5.5, 2.7, 5.1, 3.0]
t = collect(0.0:10.0)
test_zygote(ConstantInterpolation, u, t; name = "Constant Interpolation (vector)")
t = [1.0, 4.0]
u = [1.0 2.0; 0.0 1.0; 1.0 2.0; 0.0 1.0]
test_zygote(ConstantInterpolation, u, t, name = "Constant Interpolation (matrix)")
u = [[1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0]]
test_zygote(
ConstantInterpolation, u, t, name = "Constant Interpolation (vector of vectors)")
end
@testset "Cubic Hermite Spline" begin
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
test_zygote(CubicHermiteSpline, u, t, args = [du], name = "Cubic Hermite Spline")
end
@testset "Quintic Hermite Spline" begin
ddu = [0.0, -0.00033, 0.0051, -0.0067, 0.0029, 0.0]
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
test_zygote(
QuinticHermiteSpline, u, t, args = [ddu, du], name = "Quintic Hermite Spline")
end
@testset "Quadratic Spline" begin
u = [1.0, 4.0, 9.0, 16.0]
t = [1.0, 2.0, 3.0, 4.0]
test_zygote(QuadraticSpline, u, t, name = "Quadratic Spline")
end
@testset "Lagrange Interpolation" begin
u = [1.0, 4.0, 9.0]
t = [1.0, 2.0, 3.0]
test_zygote(LagrangeInterpolation, u, t, name = "Lagrange Interpolation")
end
@testset "Constant Interpolation" begin
u = [0.0, 2.0, 1.0, 3.0, 2.0, 6.0, 5.5, 5.5, 2.7, 5.1, 3.0]
t = collect(0.0:10.0)
test_zygote(ConstantInterpolation, u, t, name = "Constant Interpolation")
end
@testset "Cubic Spline" begin
u = [0.0, 1.0, 3.0]
t = [-1.0, 0.0, 1.0]
test_zygote(CubicSpline, u, t, name = "Cubic Spline")
end
@testset "BSplines" begin
t = [0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
test_zygote(BSplineInterpolation, u, t; args_after = [2, :Uniform, :Uniform],
name = "BSpline Interpolation")
test_zygote(BSplineApprox, u, t; args_after = [2, 4, :Uniform, :Uniform],
name = "BSpline approximation")
end
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 1118 | # DataInterpolations v5 Release Notes
## Breaking changes
- `AbstractInterpolation` is not a subtype of `AbstractVector` anymore. This was needed for previous versions of ModelingToolkit.jl to represent splines as vectors.
- Indexing overloads for `AbstractInterpolation` and the type parameter associated with it are removed. For example, if `A` is an interpolation object:
+ Doing `A[i]` will error. Use `A.u[i]`.
+ `size(A)` will error. Use `size(A.u)` or `size(A.t)`.
- Removed the deprecated binding `ZeroSpline`, which was the same as `ConstantInterpolation`.
# DataInterpolations v6 Release Notes
## Breaking changes
- https://github.com/SciML/DataInterpolations.jl/pull/274 introduced caching of parameters for interpolations (released in v5.3), along with a boolean field `safetycopy` that copied the data, since cached parameters become invalid if the data is mutated. The `safetycopy` flag was removed in https://github.com/SciML/DataInterpolations.jl/pull/315 in favor of `cache_parameters`, which makes it explicit whether a user opts in to parameter caching (see the sketch below).
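
For example, opting in now looks like this (a minimal sketch; the same keyword is accepted by the other interpolation constructors):

```julia
using DataInterpolations

u = [1.0, 5.0, 3.0, 4.0, 4.0]
t = collect(1.0:5.0)

# Explicitly opt in to parameter caching. Do not mutate `u` or `t` afterwards,
# since the cached parameters would become stale.
A = LinearInterpolation(u, t; cache_parameters = true)
A(2.5)
```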
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 6073 | # DataInterpolations.jl
[](https://julialang.zulipchat.com/#narrow/stream/279055-sciml-bridged)
[](https://docs.sciml.ai/DataInterpolations/stable/)
[](https://codecov.io/gh/SciML/DataInterpolations.jl)
[](https://github.com/SciML/DataInterpolations.jl/actions/workflows/Tests.yml)
[](https://github.com/SciML/ColPrac)
[](https://github.com/SciML/SciMLStyle)
[](https://doi.org/10.21105/joss.06917)
DataInterpolations.jl is a library for performing interpolations of one-dimensional data. By
"data interpolations" we mean techniques for interpolating possibly noisy data, and thus
some methods are mixtures of regressions with interpolations (i.e. do not hit the data
points exactly, smoothing out the lines). This library can be used to fill in intermediate
data points in applications like timeseries data.
## API
All interpolation objects act as functions. Thus for example, using an interpolation looks like:
```julia
u = rand(5)
t = 0:4
interp = LinearInterpolation(u, t)
interp(3.5) # Gives the linear interpolation value at t=3.5
```
We can efficiently interpolate onto a vector of new `t` values:
```julia
t′ = 0.5:1.0:3.5
interp(t′)
```
In-place interpolation also works:
```julia
u′ = similar(u, length(t′))
interp(u′, t′)
```
## Available Interpolations
In all cases, `u` is an `AbstractVector` of values and `t` is an `AbstractVector` of timepoints
corresponding to `(u,t)` pairs. A short sketch using a few of the constructors follows the list.
- `ConstantInterpolation(u,t)` - A piecewise constant interpolation.
- `LinearInterpolation(u,t)` - A linear interpolation.
- `QuadraticInterpolation(u,t)` - A quadratic interpolation.
- `LagrangeInterpolation(u,t,n)` - A Lagrange interpolation of order `n`.
- `QuadraticSpline(u,t)` - A quadratic spline interpolation.
- `CubicSpline(u,t)` - A cubic spline interpolation.
- `AkimaInterpolation(u, t)` - Akima spline interpolation provides a smoothing effect and is computationally efficient.
- `BSplineInterpolation(u,t,d,pVec,knotVec)` - An interpolation B-spline. This is a B-spline which hits each of the data points. The argument choices are:
+ `d` - degree of B-spline
+ `pVec` - Symbol to Parameters Vector, `pVec = :Uniform` for uniformly spaced parameters and `pVec = :ArcLen` for parameters generated by the chord length method.
+ `knotVec` - Symbol to Knot Vector, `knotVec = :Uniform` for uniform knot vector, `knotVec = :Average` for average spaced knot vector.
- `BSplineApprox(u,t,d,h,pVec,knotVec)` - A regression B-spline which smooths the fitting curve. The argument choices are the same as the `BSplineInterpolation`, with the additional parameter `h<length(t)` which is the number of control points to use, with smaller `h` indicating more smoothing.
- `CubicHermiteSpline(du, u, t)` - A third order Hermite interpolation, which matches the values and first (`du`) order derivatives in the data points exactly.
- `PCHIPInterpolation(u, t)` - a type of `CubicHermiteSpline` where the derivative values `du` are derived from the input data in such a way that the interpolation never overshoots the data.
- `QuinticHermiteSpline(ddu, du, u, t)` - A fifth order Hermite interpolation, which matches the values and first (`du`) and second (`ddu`) order derivatives in the data points exactly.
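
For instance, a quick sketch using a few of the constructors above (the sample data is taken from this package's test suite):

```julia
using DataInterpolations

t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]

A = CubicSpline(u, t)                              # cubic spline through the data
P = PCHIPInterpolation(u, t)                       # shape-preserving, never overshoots
B = BSplineApprox(u, t, 2, 4, :Uniform, :Uniform)  # degree-2 regression B-spline, 4 control points

A(100.0), P(100.0), B(100.0)
```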
## Extension Methods
The following methods require extra dependencies and will be loaded as package extensions; a short sketch of both follows the list.
- `Curvefit(u,t,m,p,alg)` - An interpolation which is done by fitting a user-given functional form `m(t,p)` where `p` is the vector of parameters. The user's input `p` is an initial value for a least-squares fit; `alg` is the algorithm used to optimize the cost function (sum of squared deviations) via `Optim.jl`, and the optimal `p` is used in the interpolation. Requires `using Optim`.
- `RegularizationSmooth(u,t,d;λ,alg)` - A regularization algorithm (ridge regression) which is done by minimizing an objective function (l2 loss + derivatives of order `d`) integrated in the time span. It is a global method and creates a smooth curve.
Requires `using RegularizationTools`.
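
As a minimal sketch of both (mirroring this package's test setup; the logistic model and noise level are illustrative):

```julia
using DataInterpolations
using Optim                  # needed for Curvefit
using RegularizationTools    # needed for RegularizationSmooth

t = range(-10, stop = 10, length = 40)
model(x, p) = @. p[1] / (1 + exp(x - p[2]))
u = model(t, [1.0, 2.0]) .+ 0.01 .* randn(length(t))

# Least-squares fit of the user-given model, starting from p0 = [0.5, 0.5]
A = Curvefit(u, t, model, [0.5, 0.5], LBFGS())
A(0.0)

# Ridge-regression smoothing with a fixed regularization parameter λ = 1.0
B = RegularizationSmooth(u, collect(t); alg = :fixed)
B(0.0)
```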
## Plotting
DataInterpolations.jl is tied into the Plots.jl ecosystem, by way of RecipesBase.
Any interpolation can be plotted using the `plot` command (or any other), since they have type recipes associated with them.
For convenience, and to allow keyword arguments to propagate properly, DataInterpolations.jl also defines several series types, corresponding to different interpolations.
The series types defined are:
- `:linear_interp`
- `:quadratic_interp`
- `:lagrange_interp`
- `:quadratic_spline`
- `:cubic_spline`
- `:akima_interp`
- `:bspline_interp`
- `:bspline_approx`
- `:cubic_hermite_spline`
- `:pchip_interp`
- `:quintic_hermite_spline`
By and large, these accept the same keywords as their function counterparts.
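
For example (a small sketch; per the type recipes above, the interpolation object can be passed directly to `plot`):

```julia
using Plots, DataInterpolations

u = rand(5)
t = 0:4
A = LinearInterpolation(u, t)

plot(A)         # draws the interpolated curve via the type recipe
scatter!(t, u)  # overlay the original data points
```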
## Citing
If you use this software in your work, please cite:
```bib
@article{Bhagavan2024,
doi = {10.21105/joss.06917},
url = {https://doi.org/10.21105/joss.06917},
year = {2024},
publisher = {The Open Journal},
volume = {9},
number = {101},
pages = {6917},
author = {Sathvik Bhagavan and Bart de Koning and Shubham Maddhashiya and Christopher Rackauckas},
title = {DataInterpolations.jl: Fast Interpolations of 1D data},
journal = {Journal of Open Source Software}
}
```
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 6983 | # DataInterpolations.jl
DataInterpolations.jl is a library for performing interpolations of one-dimensional data. Interpolations are a very important component of many modeling workflows. Often, sampled or measured inputs need to be transformed into continuous functions or smooth curves for simulation purposes. In many scientific machine learning workflows, interpolating data is essential to learn continuous models. DataInterpolations.jl can be used for facilitating these types of workflows. By "data interpolations" we mean techniques for interpolating possibly noisy data, and thus some methods are mixtures of regressions with interpolations (i.e. do not hit the data points exactly, smoothing out the lines).
## Installation
To install DataInterpolations.jl, use the Julia package manager:
```julia
using Pkg
Pkg.add("DataInterpolations")
```
## Available Interpolations
In all cases, `u` is an `AbstractVector` of values and `t` is an `AbstractVector` of timepoints corresponding to `(u,t)` pairs; a brief usage example follows the list below.
- `ConstantInterpolation(u,t)` - A piecewise constant interpolation.
- `LinearInterpolation(u,t)` - A linear interpolation.
- `QuadraticInterpolation(u,t)` - A quadratic interpolation.
- `LagrangeInterpolation(u,t,n)` - A Lagrange interpolation of order `n`.
- `QuadraticSpline(u,t)` - A quadratic spline interpolation.
- `CubicSpline(u,t)` - A cubic spline interpolation.
- `AkimaInterpolation(u, t)` - Akima spline interpolation provides a smoothing effect and is computationally efficient.
- `BSplineInterpolation(u,t,d,pVec,knotVec)` - An interpolation B-spline. This is a B-spline that hits each of the data points. The argument choices are:
+ `d` - degree of B-spline
+ `pVec` - Symbol to Parameters Vector, `pVec = :Uniform` for uniformly spaced parameters, and `pVec = :ArcLen` for parameters generated by the chord length method.
+ `knotVec` - Symbol to Knot Vector, `knotVec = :Uniform` for uniform knot vector, `knotVec = :Average` for average spaced knot vector.
- `BSplineApprox(u,t,d,h,pVec,knotVec)` - A regression B-spline which smooths the fitting curve. The argument choices are the same as the `BSplineInterpolation`, with the additional parameter `h<length(t)` which is the number of control points to use, with smaller `h` indicating more smoothing.
- `CubicHermiteSpline(du, u, t)` - A third order Hermite interpolation, which matches the values and first (`du`) order derivatives in the data points exactly.
- `PCHIPInterpolation(u, t)` - A type of `CubicHermiteSpline` where the derivative values `du` are derived from the input data in such a way that the interpolation never overshoots the data.
- `QuinticHermiteSpline(ddu, du, u, t)` - A fifth order Hermite interpolation, which matches the values and first (`du`) and second (`ddu`) order derivatives in the data points exactly.
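As an illustration, here is a minimal sketch (data chosen arbitrarily) constructing the smoothing `BSplineApprox` from the list above and evaluating it:
```julia
using DataInterpolations

u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]

# cubic (d = 3) regression B-spline smoothed down to 4 control points
A = BSplineApprox(u, t, 3, 4, :ArcLen, :Average)
A(100.0) # evaluate at t = 100.0
```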
## Extension Methods
The following methods require extra dependencies and will be loaded as package extensions.
- `Curvefit(u,t,m,p,alg)` - An interpolation which is done by fitting a user-given functional form `m(t,p)` where `p` is the vector of parameters. The user's input `p` is an initial value for a least-squares fitting, `alg` is the algorithm choice to use to optimize the cost function (sum of squared deviations) via `Optim.jl` and optimal `p`s are used in the interpolation. Requires `using Optim`.
- `RegularizationSmooth(u,t,d;λ,alg)` - A regularization algorithm (ridge regression) which is done by minimizing an objective function (l2 loss + derivatives of order `d`) integrated in the time span. It is a global method which creates a smooth curve.
Requires `using RegularizationTools`.
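As an illustration of the `Curvefit` method above, here is a hedged sketch (the model form `m` and starting parameters are illustrative assumptions; `u` and `t` as in the previous sketch):
```julia
using Optim, DataInterpolations

m(t, p) = @. p[1] + p[2] * sin(p[3] * t) # assumed functional form
A = Curvefit(u, t, m, ones(3), LBFGS())  # least-squares fit of p via Optim.jl
A(100.0)                                 # evaluate the fitted curve
```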
## Plotting
DataInterpolations.jl is tied into the Plots.jl ecosystem, by way of RecipesBase.
Any interpolation can be plotted using the `plot` command (or any other), since they have type recipes associated with them.
For convenience, and to allow keyword arguments to propagate properly, DataInterpolations.jl also defines several series types, corresponding to different interpolations.
The series types defined are:
- `:linear_interp`
- `:quadratic_interp`
- `:lagrange_interp`
- `:quadratic_spline`
- `:cubic_spline`
- `:akima_interp`
- `:bspline_interp`
- `:bspline_approx`
- `:cubic_hermite_spline`
- `:pchip_interp`
- `:quintic_hermite_spline`
By and large, these accept the same keywords as their function counterparts.
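For example, a minimal sketch plotting an interpolation object directly via its type recipe (`u` and `t` as in the sketches above):
```julia
using Plots

A = CubicSpline(u, t)
plot(A; label = "cubic spline") # keyword arguments propagate to Plots.jl
```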
## Citing
If you use this software in your work, please cite:
```bib
@article{Bhagavan2024,
doi = {10.21105/joss.06917},
url = {https://doi.org/10.21105/joss.06917},
year = {2024},
publisher = {The Open Journal},
volume = {9},
number = {101},
pages = {6917},
author = {Sathvik Bhagavan and Bart de Koning and Shubham Maddhashiya and Christopher Rackauckas},
title = {DataInterpolations.jl: Fast Interpolations of 1D data},
journal = {Journal of Open Source Software}
}
```
## Contributing
- Please refer to the
[SciML ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://github.com/SciML/ColPrac/blob/master/README.md)
for guidance on PRs, issues, and other matters relating to contributing to SciML.
- See the [SciML Style Guide](https://github.com/SciML/SciMLStyle) for common coding practices and other style decisions.
- There are a few community forums:
+ The #diffeq-bridged and #sciml-bridged channels in the
[Julia Slack](https://julialang.org/slack/)
+ The #diffeq-bridged and #sciml-bridged channels in the
[Julia Zulip](https://julialang.zulipchat.com/#narrow/stream/279055-sciml-bridged)
+ On the [Julia Discourse forums](https://discourse.julialang.org)
+ See also [SciML Community page](https://sciml.ai/community/)
## Reproducibility
```@raw html
<details><summary>The documentation of this SciML package was built using these direct dependencies,</summary>
```
```@example
using Pkg # hide
Pkg.status() # hide
```
```@raw html
</details>
```
```@raw html
<details><summary>and using this machine and Julia version.</summary>
```
```@example
using InteractiveUtils # hide
versioninfo() # hide
```
```@raw html
</details>
```
```@raw html
<details><summary>A more complete overview of all dependencies and their versions is also provided.</summary>
```
```@example
using Pkg # hide
Pkg.status(; mode = PKGMODE_MANIFEST) # hide
```
```@raw html
</details>
```
```@eval
using TOML
using Markdown
version = TOML.parse(read("../../Project.toml", String))["version"]
name = TOML.parse(read("../../Project.toml", String))["name"]
link_manifest = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
"/assets/Manifest.toml"
link_project = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
"/assets/Project.toml"
Markdown.parse("""You can also download the
[manifest]($link_manifest)
file and the
[project]($link_project)
file.
""")
```
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 3520 | # Interface for using the Interpolations object
We will again use the same data as the previous tutorial to demonstrate how to use the Interpolations object for computing interpolated values at any time point,
as well as derivatives and integrals.
```@example interface
using DataInterpolations
# Dependent variable
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
# Independent variable
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
```
## Interpolated values
All interpolation methods return an object from which we can compute the value of the dependent variable at any time point.
We will use the `CubicSpline` method for demonstration, but the API is the same for all the methods. We can also pass the `extrapolate = true` keyword if we want to allow the interpolation to go beyond the range of the timepoints. The default value is `extrapolate = false`.
```@example interface
A1 = CubicSpline(u, t)
# For interpolation do, A(t)
A1(100.0)
A2 = CubicSpline(u, t; extrapolate = true)
# Extrapolation
A2(300.0)
```
!!! note
The values computed beyond the range of the time points provided during interpolation will not be reliable, as these methods only perform well within the range; the polynomial fit of the first/last piece is extrapolated on either side, which might not reflect the true nature of the data.
The keyword `cache_parameters = true` can be passed to precalculate parameters at initialization, making evaluations cheaper to compute. This is not compatible with modifying `u` and `t`. Note that the default `cache_parameters = false` still does not prevent allocation in every interpolation constructor call.
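For instance, a minimal sketch of enabling the cache (shown here for `CubicSpline`, with `u` and `t` as above):
```julia
A3 = CubicSpline(u, t; cache_parameters = true)
A3(100.0) # cheaper repeated evaluations; do not modify u or t afterwards
```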
## Derivatives
Derivatives of the interpolated curves can also be computed at any point for all the methods. Derivatives up to second order are supported: the first-order derivative is computed analytically, and the second order using `ForwardDiff.jl`. The derivative order is passed as the third argument and is 1 by default.
We will continue with the above example, but the API is the same for all the methods. If the interpolation is defined with `extrapolate=true`, derivatives can also be extrapolated.
```@example interface
# derivative(A, t)
DataInterpolations.derivative(A1, 1.0, 1)
DataInterpolations.derivative(A1, 1.0, 2)
# Extrapolation
DataInterpolations.derivative(A2, 300.0)
```
## Integrals
Integrals of the interpolated curves can also be computed easily.
!!! note
Integrals for `LagrangeInterpolation`, `BSplineInterpolation`, `BSplineApprox`, `Curvefit` will error as there are no simple analytical solutions available. Please use numerical methods instead, such as [Integrals.jl](https://docs.sciml.ai/Integrals/stable/).
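For instance, here is a hedged sketch of such a numerical fallback using QuadGK.jl (chosen here instead of Integrals.jl for brevity; `u` and `t` as defined above):
```julia
using QuadGK

B = LagrangeInterpolation(u, t, length(t) - 1)
val, err = quadgk(B, 1.0, 5.0) # numerically integrate B over [1.0, 5.0]
```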
To compute the integrals from the start of time points provided during interpolation to any point, we can do:
```@example interface
# integral(A, t)
DataInterpolations.integral(A1, 5.0)
```
If we want to compute integrals between two points, we can do:
```@example interface
# integral(A, t1, t2)
DataInterpolations.integral(A1, 1.0, 5.0)
```
Again, if the interpolation is defined with `extrapolate=true`, the integral can be computed beyond the range of the timepoints.
```@example interface
# integral(A, t1, t2)
DataInterpolations.integral(A2, 200.0, 300.0)
```
!!! note
If the times provided in the integral go beyond the range of the time points provided during interpolation, it uses extrapolation methods to compute the values, and hence the integral can be misrepresentative and might not reflect the true nature of the data.
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 1361 | # Inverting integrals
Solving implicit integral problems of the following form is supported:
```math
\begin{equation}
\text{find $t$ such that } \int_{t_1}^t f(\tau)\text{d}\tau = V \ge 0,
\end{equation}
```
where $t_1$ is given by `first(A.t)`. This is supported for interpolations $f$ that are strictly positive and of one of these types:
- `ConstantInterpolation`
- `LinearInterpolation`
This is achieved by creating an 'integral inverse' interpolation object which can efficiently compute $t$ for a given value of $V$, see the example below.
```@example inverting_integrals
using Random #hide
Random.seed!(1234) # hide
using DataInterpolations
using Plots
# Create LinearInterpolation object from the data
u = sqrt.(1:25) + (2.0 * rand(25) .- 1.0) / 3
t = cumsum(rand(25))
A = LinearInterpolation(u, t)
# Create LinearInterpolationIntInv object
# from the LinearInterpolation object
A_intinv = DataInterpolations.invert_integral(A)
# Get the t values up to and including the
# solution to the integral problem
V = 25.0
t_ = A_intinv(V)
ts = t[t .<= t_]
push!(ts, t_)
# Plot results
plot(A; label = "Linear Interpolation")
plot!(ts, A.(ts), fillrange = 0.0, fillalpha = 0.75,
fc = :blues, lw = 0, label = "Area of $V")
```
## Docstrings
```@docs
DataInterpolations.invert_integral
ConstantInterpolationIntInv
LinearInterpolationIntInv
```
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 319 | # Methods
```@docs
LinearInterpolation
QuadraticInterpolation
LagrangeInterpolation
AkimaInterpolation
ConstantInterpolation
QuadraticSpline
CubicSpline
BSplineInterpolation
BSplineApprox
CubicHermiteSpline
PCHIPInterpolation
QuinticHermiteSpline
```
# Utility Functions
```@docs
DataInterpolations.looks_linear
```
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 8543 | # Interpolation using different methods
We will use the following data to demonstrate interpolation methods.
```@example tutorial
using DataInterpolations, Plots
gr() # hide
# Dependent variable
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
# Independent variable
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
```
For each method, we will show how to perform the fit and use the plot recipe
to show the fitting curve.
## Linear Interpolation
This is a linear interpolation between the ends points of the interval of input data points.
```@example tutorial
A = LinearInterpolation(u, t)
plot(A)
```
## Quadratic Interpolation
This function fits a parabola passing through the two nearest points from the input
data point as well as the next-closest point on the right or left, depending on
whether the forward- or backward-looking mode is selected (default mode is
forward-looking). It is continuous and piecewise differentiable.
```@example tutorial
A = QuadraticInterpolation(u, t) # same as QuadraticInterpolation(u,t,:Forward)
# alternatively: A = QuadraticInterpolation(u,t,:Backward)
plot(A)
```
## Lagrange Interpolation
It fits a polynomial of degree d (=length(t)-1), and is thus a continuously
differentiable function.
```@example tutorial
A = LagrangeInterpolation(u, t)
plot(A)
```
## Akima Interpolation
This function fits piecewise cubic polynomials which together form a continuously differentiable function. It differs from the Cubic Spline in that coefficients are computed using only neighbouring points, so the fit looks more natural.
```@example tutorial
A = AkimaInterpolation(u, t)
plot(A)
```
## Constant Interpolation
This function is constant between data points. By default,
it takes the value at the left end of the interval. One can change that behavior by
passing the keyword argument `dir = :right`.
```@example tutorial
A = ConstantInterpolation(u, t)
plot(A)
```
Or using the right endpoints:
```@example tutorial
A = ConstantInterpolation(u, t, dir = :right)
plot(A)
```
## Quadratic Spline
This is the quadratic spline. It is a continuously differentiable interpolation
which hits each of the data points exactly. Splines are a local interpolation
method, meaning that the curve in a given spot is only affected by the points
nearest to it.
```@example tutorial
A = QuadraticSpline(u, t)
plot(A)
```
## Cubic Spline
This is the cubic spline. It is a continuously twice differentiable interpolation
which hits each of the data points exactly.
```@example tutorial
A = CubicSpline(u, t)
plot(A)
```
## B-Splines
This is an interpolating B-spline. B-splines are a global method, meaning
that every data point is taken into account for each point of the curve.
The interpolating B-spline is the version which hits each of the points. This
method is described in more detail [here](https://pages.mtu.edu/%7Eshene/COURSES/cs3621/NOTES/INT-APP/CURVE-INT-global.html).
Let's plot a cubic B-spline (3rd order). Since the data points are not close to
uniformly spaced, we will use the `:ArcLen` and `:Average` choices:
```@example tutorial
A = BSplineInterpolation(u, t, 3, :ArcLen, :Average)
plot(A)
```
The approximating B-spline is a smoothed version of the B-spline. It again is
a global method. In this case, we need to give a number of control points
`length(t)>h` and this method fits a B-spline through the control points which
is a least square approximation. This has a natural effect of smoothing the
data. For example, if we use 4 control points, we get the result:
```@example tutorial
A = BSplineApprox(u, t, 3, 4, :ArcLen, :Average)
plot(A)
```
## Cubic Hermite Spline
This is the cubic (third order) Hermite interpolation. It matches the values and first order derivatives in the data points exactly.
```@example tutorial
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0011]
A = CubicHermiteSpline(du, u, t)
plot(A)
```
## PCHIP Interpolation
This is a type of `CubicHermiteSpline` where the derivative values `du` are derived from the input data in such a way that the interpolation never overshoots the data.
```@example tutorial
A = PCHIPInterpolation(u, t)
plot(A)
```
## Quintic Hermite Spline
This is the quintic (fifth order) Hermite interpolation. It matches the values and first and second order derivatives in the data points exactly.
```@example tutorial
ddu = [0.0, -0.00033, 0.0051, -0.0067, 0.0029, 0.0]
du = [-0.047, -0.058, 0.054, 0.012, -0.068, 0.0011]
A = QuinticHermiteSpline(ddu, du, u, t)
plot(A)
```
## Regularization Smoothing
Smoothing by regularization (a.k.a. ridge regression) finds a function ``\hat{u}``
that minimizes the objective function:
``Q(\hat{u}) = \int_{t_1}^{t_N} |\hat{u}(t) - u(t)|^2 \mathrm{d}t + \lambda \int_{\hat{t}_1}^{\hat{t}_N} |\hat{u}^{(d)}(\hat{t})|^2 \mathrm{d} \hat{t}``
where ``(d)`` denotes derivative order and ``\lambda`` is the regularization
(smoothing) parameter. The integrals are evaluated numerically at the set of
``t`` values for the first term and ``\hat{t}`` values for the second term
(equal to ``t`` if not provided). Regularization smoothing is a global method
that creates a smooth curve directly. See [Stickel (2010)
Comput. Chem. Eng. 34:467](http://dx.doi.org/10.1016/j.compchemeng.2009.10.007)
for details. The implementation in this package uses cubic splines to
interpolate between the smoothed points after they are determined.
```@example tutorial
using RegularizationTools
d = 2
λ = 1e3
A = RegularizationSmooth(u, t, d; λ = λ, alg = :fixed)
û = A.û
# interpolate using the smoothed values
N = 200
titp = collect(range(minimum(t), maximum(t), length = N))
uitp = A.(titp)
lw = 1.5
scatter(t, u, label = "data")
scatter!(t, û, marker = :square, label = "smoothed data")
plot!(titp, uitp, lw = lw, label = "smoothed interpolation")
```
## Dense Data Demonstration
Some methods are better suited for dense data. Let's generate such data to
demonstrate these methods.
```@example tutorial
import StableRNGs: StableRNG
rng = StableRNG(318)
t = sort(10 .* rand(rng, 100))
u = sin.(t) .+ 0.5 * randn(rng, 100);
```
## Regularization Smoothing
Although smoothing by regularization can be used to interpolate sparse data as
shown above, it is especially useful for dense as well as scattered data (unequally
spaced, unordered, and/or repeat-valued). Generalized cross validation (GCV) or
so-called L-curve methods can be used to determine an "optimal" value for the
smoothing parameter. In this example, we perform smoothing in two ways. In the
first, we find smooth values at the original ``t`` values and then
interpolate. In the second, we perform the smoothing for the interpolant
``\hat{t}`` values directly. GCV is used to determine the regularization
parameter for both cases.
```@example tutorial
d = 4
A = RegularizationSmooth(u, t, d; alg = :gcv_svd)
û = A.û
N = 200
titp = collect(range(minimum(t), maximum(t), length = N))
uitp = A.(titp)
Am = RegularizationSmooth(u, t, titp, d; alg = :gcv_svd)
ûm = Am.û
scatter(t, u, label = "simulated data", legend = :top)
scatter!(t, û, marker = (:square, 4), label = "smoothed data")
plot!(titp, uitp, lw = lw, label = "smoothed interpolation")
plot!(titp, ûm, lw = lw, linestyle = :dash, label = "smoothed, more points")
```
## Curve Fits
A curve fit works with both dense and sparse data. We will demonstrate the curve
fit on the dense data since we generated it based on `sin(t)`, so this is the
curve we want to fit through it. To do so, let's define a similar function
with parameters. Let's choose the form:
```@example tutorial
m(t, p) = @. p[1] * sin(p[2] * t) + p[3] * cos(p[4] * t)
```
Notice that this is a function over the whole array of `t` and returns an array of predicted `u` values. This choice of `m` is based on the assumption that our
function is of the form `p1*sin(p2*t)+p3*cos(p4*t)`. We want to find the `p` to
match our data. Let's start with the guess of every `p` being one, that is
`p=ones(4)`. Then we would fit this curve using:
```@example tutorial
using Optim
A = Curvefit(u, t, m, ones(4), LBFGS())
plot(A)
```
We can check what the fitted parameters are via:
```@example tutorial
A.pmin
```
Notice that it essentially made `p3=0` with `p1=p2=1`, meaning it approximately
found `sin(t)`! But note that the ability to fit is dependent on the initial
parameters. For example, with `p=zeros(4)` as the initial parameters, the fit
is not good:
```@example tutorial
A = Curvefit(u, t, m, zeros(4), LBFGS())
plot(A)
```
And the parameters show the issue:
```@example tutorial
A.pmin
```
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 1853 | # Using DataInterpolations.jl with Symbolics.jl and ModelingToolkit.jl
All interpolation methods can be integrated with [Symbolics.jl](https://symbolics.juliasymbolics.org/stable/) and [ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) seamlessly.
## Using with Symbolics.jl
### Expressions
```@example symbolics
using DataInterpolations, Symbolics
using Test
u = [0.0, 1.5, 0.0]
t = [0.0, 0.5, 1.0]
A = LinearInterpolation(u, t)
@variables τ
# Simple Expression
ex = cos(τ) * A(τ)
@test substitute(ex, Dict(τ => 0.5)) == cos(0.5) * A(0.5) # true
```
### Symbolic Derivatives
```@example symbolics
D = Differential(τ)
ex1 = A(τ)
# Derivative of interpolation
ex2 = expand_derivatives(D(ex1))
@test substitute(ex2, Dict(τ => 0.5)) == DataInterpolations.derivative(A, 0.5) # true
# Higher Order Derivatives
ex3 = expand_derivatives(D(D(A(τ))))
@test substitute(ex3, Dict(τ => 0.5)) == DataInterpolations.derivative(A, 0.5, 2) # true
```
## Using with ModelingToolkit.jl
The most common use case with [ModelingToolkit.jl](https://docs.sciml.ai/ModelingToolkit/stable/) is to plug in interpolation objects as input functions. This can be done using the `TimeVaryingFunction` component of [ModelingToolkitStandardLibrary.jl](https://docs.sciml.ai/ModelingToolkitStandardLibrary/stable/).
```@example mtk
using DataInterpolations
using ModelingToolkitStandardLibrary.Blocks
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using OrdinaryDiffEq
us = [0.0, 1.5, 0.0]
times = [0.0, 0.5, 1.0]
A = LinearInterpolation(us, times)
@named src = TimeVaryingFunction(A)
vars = @variables x(t) out(t)
eqs = [out ~ src.output.u, D(x) ~ 1 + out]
@named sys = ODESystem(eqs, t, vars, []; systems = [src])
sys = structural_simplify(sys)
prob = ODEProblem(sys, [x => 0.0], (times[1], times[end]))
sol = solve(prob)
```
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 6.4.5 | 0663b1c3567767469b59cda5ba36413daae695d4 | docs | 4996 | ---
title: 'DataInterpolations.jl: Fast Interpolations of 1D data'
tags:
- julia
- interpolations
authors:
- name: Sathvik Bhagavan
orcid: 0000-0003-0785-3586
corresponding: true
affiliation: 1
- name: Bart de Koning
orcid: 0009-0005-6134-6608
affiliation: 2
- name: Shubham Maddhashiya
affiliation: 3
- name: Christopher Rackauckas
orcid: 0000-0001-5850-0663
affiliation: "1, 3, 4"
affiliations:
- name: JuliaHub
index: 1
- name: Deltares
index: 2
- name: Pumas-AI
index: 3
- name: Massachusetts Institute of Technology
index: 4
date: 6 June 2024
bibliography: paper.bib
---
# Summary
Interpolations are used to estimate values between known data points using an approximate continuous function. DataInterpolations.jl is a Julia [@Bezanson2017] package containing 1D implementations of some of the most commonly used interpolation functions. These include:
- Constant Interpolation
- Linear Interpolation
- Quadratic Interpolation
- Lagrange Interpolation [@lagrange1898lectures]
- Quadratic Splines
- Cubic Splines [@Schoenberg1988]
- Akima Splines [@10.1145/321607.321609]
- Cubic Hermite Splines
- Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) [@doi:10.1137/0905021]
- Quintic Hermite Splines
- B-Splines [@Curry1988] [@DEBOOR197250]
- Regression based B-Splines
and a continually growing list. Along with these, the package also has methods to fit parameterized curves to the data points and Tikhonov regularization [@Tikhonov1943OnTS] [@amt-14-7909-2021] for obtaining smooth curves. The package also provides functionality to compute integrals and derivatives up to second order for these interpolation methods. It is automatic-differentiation friendly, can be used symbolically with Symbolics.jl [@10.1145/3511528.3511535], and can be plugged into models defined using ModelingToolkit.jl [@ma2021modelingtoolkit].
# Statement of need
Interpolations are a very important component of many modeling workflows. Often, sampled or measured inputs need to be transformed into continuous functions or smooth curves for simulation purposes. In many scientific machine learning workflows, interpolating data is essential to learn continuous models. DataInterpolations.jl can be used for facilitating these types of workflows. Several interpolation packages already exist in Julia, such as [Interpolations.jl](https://juliamath.github.io/Interpolations.jl/stable/), which primarily specializes in B-Splines and uniformly spaced data with some support for irregularly spaced data. In contrast, DataInterpolations.jl does not assume any specific structure in the data, offering greater flexibility for diverse datasets. [Interpolations.jl](https://juliamath.github.io/Interpolations.jl/stable/) also doesn't offer methods like Quadratic Interpolation, Lagrange Interpolation, Hermite Splines etc. [BasicInterpolators.jl](https://github.com/markmbaum/BasicInterpolators.jl) is more similar to DataInterpolations.jl, although it doesn't offer methods like B-Splines. The rest of the interpolation packages focus on particular methods, like [BSplineKit.jl](https://github.com/jipolanco/BSplineKit.jl) for B-Splines, [FastChebInterp.jl](https://github.com/JuliaMath/FastChebInterp.jl) for Chebyshev interpolation, and [PCHIPInterpolation](https://github.com/gerlero/PCHIPInterpolation.jl) for PCHIP interpolation. Additionally, DataInterpolations.jl includes many novel techniques for accelerating the interpolation searches with specialized caching, quasi-linear guessing, and more to improve the performance algorithmically, beyond the simple computational optimizations. In summary, DataInterpolations.jl is more generic than other packages and offers many fast interpolation methods for arbitrarily spaced 1D data, all within a consistent and simple interface.
# Example
The following tutorials are provided in the documentation:
- [Tutorial 1](https://docs.sciml.ai/DataInterpolations/stable/methods/) provides how to define each of the interpolation methods and compute the value at any point.
- [Tutorial 2](https://docs.sciml.ai/DataInterpolations/stable/interface/) provides explanation for using the interface and interpolated objects for evaluating at any point, computing the derivative at any point and computing the integral between any two points.
- [Tutorial 3](https://docs.sciml.ai/DataInterpolations/stable/symbolics/) provides how to use interpolation objects with Symbolics.jl and ModelingToolkit.jl.
A simple demonstration here:
```julia
using DataInterpolations
# Dependent variable
u = [14.7, 11.51, 10.41, 14.95, 12.24, 11.22]
# Independent variable
t = [0.0, 62.25, 109.66, 162.66, 205.8, 252.3]
A1 = CubicSpline(u, t)
# For interpolation do, A(t)
A1(100.0)
# derivative
## first order
DataInterpolations.derivative(A1, 1.0, 1)
## second order
DataInterpolations.derivative(A1, 1.0, 2)
# integral
DataInterpolations.integral(A1, 1.0, 5.0)
```
# References
| DataInterpolations | https://github.com/SciML/DataInterpolations.jl.git |
|
[
"MIT"
] | 0.1.5 | dbcd240388baf4139073e3d4f8b0dbe30a3234e5 | code | 2499 | module Bingomatic
using Plots
using StatsBase
using TextWrap
export make_card
export sample_words
export savefig
"""
sample_words(word_pool, n...=(5,5)...; free_space=true)
Sample words from word pool.
# Arguments
- `word_pool`: a list of words
- `n`: the dimensions of the sampled words
# Keywords
- `free_space`: sets center cell to "Free Space" if true
"""
function sample_words(word_pool, n...=(5,5)...; free_space=true)
n_cells = prod(n)
center = Int(ceil(n_cells/2))
length(word_pool) < n_cells ? error("word pool must have at least as many elements as cells") : nothing
iseven(n_cells) ? error("Must have an odd number of cells") : nothing
cells = sample(word_pool, n, replace=false)
cells[center] = free_space ? "Free Space" : cells[center]
return cells
end
"""
make_card(list; backend=pyplot, word_size=12, line_width=8, break_words=false, kwargs...)
Make bingo card
# Arguments
- `list`: an array containing bingo words
# Keywords
- `backend=pyplot`: plotting backend.
- `word_size`: the size of words in points
- `line_width`: the number of characters per line for each word or phrase
- `break_words`: break long words when wrapping words
- `kwargs`: variable keyword arguments to pass to the `plot` function
"""
function make_card(list; backend=pyplot, word_size=12, line_width=8, break_words=false, kwargs...)
backend()
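# lay the card out on a fixed square canvas spanning [0, 2] in both x and y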
min_x = 0
max_x = 2
n_rows = size(list, 1)
n_cells = n_rows^2
offset = 1/n_rows
iseven(n_rows) ? error("n_rows must be odd") : nothing
length(list) < n_cells ? error("list must have at least as many elements as cells") : nothing
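# ticks mark the cell boundaries; `scale` holds the cell centers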
ticks = range(min_x, max_x, length=n_rows+1)
scale = range(min_x+offset, max_x-offset, length=n_rows)
x = [[i j] for i in scale for j in scale] |> x -> vcat(x...)
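# wrap each word/phrase at `line_width` characters so labels fit inside their cells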
w_list = map(x -> TextWrap.wrap(x, width=line_width, break_long_words=break_words), list)
return scatter(x[:,1], x[:,2], markersize=0, xlims=(min_x,max_x), ylims=(min_x,max_x),
series_annotations = text.(w_list[:], :center, word_size), label = false, grid=true,
gridstyle=:solid, gridlinewidth=1.5, gridalpha=1, xticks=ticks, yticks=ticks,
xaxis=font(0), yaxis=font(0), xlabel="", ylabel="", frame_style=:box, size=(400,400);
kwargs...)
end
end
| Bingomatic | https://github.com/itsdfish/Bingomatic.jl.git |
|
[
"MIT"
] | 0.1.5 | dbcd240388baf4139073e3d4f8b0dbe30a3234e5 | code | 413 | using Bingomatic
using Test
@testset "Bingomatic.jl" begin
@test_throws ErrorException sample_words(["a","b"], 2)
@test_throws ErrorException sample_words(["a", "b", "c"], 4)
n_rows = 3
word_list = ["a","b","c","d","e","f","g","h","i"]
words = sample_words(word_list, n_rows, n_rows)
@test size(words) == (n_rows,n_rows)
@test words[5] == "Free Space"
card = make_card(words)
end
| Bingomatic | https://github.com/itsdfish/Bingomatic.jl.git |
|
[
"MIT"
] | 0.1.5 | dbcd240388baf4139073e3d4f8b0dbe30a3234e5 | docs | 1792 | # Bingomatic
Bingomatic™ is an awesome Julia package that creates bingo cards, using state-of-the-art bingo technology.
## Installation
To install Bingomatic, switch to package mode by entering `]` in the REPL and enter:
```julia
add Bingomatic
```
Alternatively, enter the following into the REPL:
```julia
using Pkg; Pkg.add("Bingomatic")
```
## Standard Bingo Card
A standard size bingo card can be made with a few lines of code, as shown below.
```julia
using Bingomatic, Random
Random.seed!(780775)
word_pool = map(_->randstring(4), 1:50)
words = sample_words(word_pool)
card = make_card(words)
```
<img src="extras/example1.png" alt="" width="400" height="400">
## Customized Bingo Card
The example below shows that bingo cards can be customized. It is also possible to override default values in `make_card` with variable keyword arguments.
```julia
n_rows = 3
word_pool = map(_->randstring(4), 1:50)
words = sample_words(word_pool, n_rows, n_rows)
card = make_card(words; size=(300,300))
```
<img src="extras/example2.png" alt="" width="300" height="300">
## Christmas Movie Bingo
Bingomatic™ includes a bonus word pool for Christmas Movie Bingo, a $19.99 value, absolutely free! Variations of this coveted game include a tribute to the [secret word](https://www.youtube.com/watch?v=gxMZgeBlqzQ) from Pee Wee's Playhouse where players scream each time the word "Christmas" is said.
```julia
using Bingomatic, CSV, DataFrames, Random
Random.seed!(1225)
pkg_path = dirname(pathof(Bingomatic))
path = joinpath(pkg_path,"../extras/word_pool.csv")
word_pool = CSV.read(path, DataFrame; stringtype=String)
words = sample_words(word_pool.words)
card = make_card(words; word_size=11, size=(630,630))
```
<img src="extras/christmas_bingo.png" alt="" width="630" height="630"> | Bingomatic | https://github.com/itsdfish/Bingomatic.jl.git |
|
[
"MIT"
] | 1.0.5 | 6ec334b52f9547f84fde749e512a97d0372ae395 | code | 13755 |
"""
PythonPlot allows Julia to interface with the Matplotlib library in Python, specifically the matplotlib.pyplot module, so you can create beautiful plots in Julia with your favorite Python package.
In general, all the arguments are the same as in Python.
Here's a brief demo of a simple plot in Julia:
using PythonPlot
x = range(0; stop=2*pi, length=1000); y = sin.(3 * x + 4 * cos.(2 * x));
plot(x, y, color="red", linewidth=2.0, linestyle="--")
title("A sinusoidally modulated sinusoid")
For more information on API, see the matplotlib.pyplot documentation and the PythonPlot GitHub page.
"""
module PythonPlot
using PythonCall
export Figure, matplotlib, pyplot, pygui, withfig, plotshow, plotstep, plotclose
###########################################################################
# Define a documentation object
# that lazily looks up help from a Py object via zero or more keys.
# This saves us time when loading PythonPlot, since we don't have
# to load up all of the documentation strings right away.
struct LazyHelp
o # a Py or similar object whose (possibly nested) attributes carry a __doc__ property
keys::Tuple{Vararg{String}}
LazyHelp(o) = new(o, ())
LazyHelp(o, k::AbstractString) = new(o, (k,))
LazyHelp(o, k1::AbstractString, k2::AbstractString) = new(o, (k1,k2))
LazyHelp(o, k::Tuple{Vararg{AbstractString}}) = new(o, k)
end
function Base.show(io::IO, ::MIME"text/plain", h::LazyHelp)
o = h.o
for k in h.keys
o = pygetattr(o, k)
end
if pyhasattr(o, "__doc__")
print(io, pyconvert(String, o.__doc__))
else
print(io, "no Python docstring found for ", o)
end
end
Base.show(io::IO, h::LazyHelp) = Base.show(io, "text/plain", h)
function Base.Docs.catdoc(hs::LazyHelp...)
Base.Docs.Text() do io
for h in hs
Base.show(io, MIME"text/plain"(), h)
end
end
end
###########################################################################
include("pygui.jl")
include("init.jl")
###########################################################################
# Wrapper around matplotlib Figure, supporting graphics I/O and pretty display
mutable struct Figure
o::Py
end
PythonCall.Py(f::Figure) = getfield(f, :o)
PythonCall.pyconvert(::Type{Figure}, o::Py) = Figure(o)
Base.:(==)(f::Figure, g::Figure) = pyconvert(Bool, Py(f) == Py(g))
Base.isequal(f::Figure, g::Figure) = isequal(Py(f), Py(g))
Base.hash(f::Figure, h::UInt) = hash(Py(f), h)
Base.Docs.doc(f::Figure) = Base.Docs.Text(pyconvert(String, Py(f).__doc__))
# Note: using `Union{Symbol,String}` produces ambiguity.
Base.getproperty(f::Figure, s::Symbol) = getproperty(Py(f), s)
Base.getproperty(f::Figure, s::AbstractString) = getproperty(f, Symbol(s))
Base.setproperty!(f::Figure, s::Symbol, x) = setproperty!(Py(f), s, x)
Base.setproperty!(f::Figure, s::AbstractString, x) = setproperty!(f, Symbol(s), x)
Base.hasproperty(f::Figure, s::Symbol) = pyhasattr(Py(f), s)
Base.propertynames(f::Figure) = propertynames(Py(f))
for (mime,fmt) in aggformats
@eval _showable(::MIME{Symbol($mime)}, f::Figure) = !isempty(f) && haskey(PyDict{Any,Any}(f.canvas.get_supported_filetypes()), $fmt)
@eval function Base.show(io::IO, m::MIME{Symbol($mime)}, f::Figure)
if !_showable(m, f)
throw(MethodError(Base.show, (io, m, f)))
end
f.canvas.print_figure(io, format=$fmt, bbox_inches="tight")
end
if fmt != "svg"
@eval Base.showable(m::MIME{Symbol($mime)}, f::Figure) = _showable(m, f)
end
end
# disable SVG output by default, since displaying large SVGs (large datasets)
# in IJulia is slow, and browser SVG display is buggy. (Similar to IPython.)
const SVG = [false]
Base.showable(m::MIME"image/svg+xml", f::Figure) = SVG[1] && _showable(m, f)
svg() = SVG[1]
svg(b::Bool) = (SVG[1] = b)
###########################################################################
# In IJulia, we want to automatically display any figures
# at the end of cell execution, and then close them. However,
# we don't want to display/close figures being used in withfig,
# since the user is keeping track of these in some other way,
# e.g. for interactive widgets.
Base.isempty(f::Figure) = isempty(f.get_axes())
# We keep a set of figure numbers for the figures used in withfig, because
# for these figures we don't want to auto-display or auto-close them
# when the cell finishes executing. (We store figure numbers, rather
# than Figure objects, since the latter would prevent the figures from
# finalizing and hence closing.) Closing the figure removes it from this set.
const withfig_fignums = Set{Int}()
function display_figs() # called after IJulia cell executes
if isjulia_display[]
for manager in Gcf.get_all_fig_managers()
f = manager.canvas.figure
if pyconvert(Int, f.number) ∉ withfig_fignums
fig = Figure(f)
isempty(fig) || display(fig)
pyplot.close(f)
end
end
end
end
function close_figs() # called after error in IJulia cell
if isjulia_display[]
for manager in Gcf.get_all_fig_managers()
f = manager.canvas.figure
if pyconvert(Int, f.number) ∉ withfig_fignums
pyplot.close(f)
end
end
end
end
# hook to force new IJulia cells to create new figure objects
const gcf_isnew = [false] # true to force next gcf() to be new figure
force_new_fig() = gcf_isnew[1] = true
# wrap gcf() and figure(...) so that we can force the creation
# of new figures in new IJulia cells (e.g. after @manipulate commands
# that leave the figure from the previous cell open).
@doc LazyHelp(orig_figure) function figure(args...; kws...)
gcf_isnew[1] = false
Figure(pycall(orig_figure, args...; kws...))
end
@doc LazyHelp(orig_gcf) function gcf()
if isjulia_display[] && gcf_isnew[1]
return figure()
else
return Figure(orig_gcf())
end
end
###########################################################################
# export documented pyplot API (http://matplotlib.org/api/pyplot_api.html)
export acorr,annotate,arrow,autoscale,autumn,axhline,axhspan,axis,axline,axvline,axvspan,bar,barbs,barh,bone,box,boxplot,broken_barh,cla,clabel,clf,clim,cohere,colorbar,colors,contour,contourf,cool,copper,csd,delaxes,disconnect,draw,errorbar,eventplot,figaspect,figimage,figlegend,figtext,figure,fill_between,fill_betweenx,findobj,flag,gca,gcf,gci,get_current_fig_manager,get_figlabels,get_fignums,get_plot_commands,ginput,gray,grid,hexbin,hist2D,hlines,hold,hot,hsv,imread,imsave,imshow,ioff,ion,ishold,jet,legend,locator_params,loglog,margins,matshow,minorticks_off,minorticks_on,over,pause,pcolor,pcolormesh,pie,pink,plot,plot_date,plotfile,polar,prism,psd,quiver,quiverkey,rc,rc_context,rcdefaults,rgrids,savefig,sca,scatter,sci,semilogx,semilogy,set_cmap,setp,specgram,spectral,spring,spy,stackplot,stem,streamplot,subplot,subplot2grid,subplot_tool,subplots,subplots_adjust,summer,suptitle,table,text,thetagrids,tick_params,ticklabel_format,tight_layout,title,tricontour,tricontourf,tripcolor,triplot,twinx,twiny,vlines,waitforbuttonpress,winter,xkcd,xlabel,xlim,xscale,xticks,ylabel,ylim,yscale,yticks,hist
# The following pyplot functions must be handled specially since they
# overlap with standard Julia functions:
# close, fill, show, step
# … unlike PyPlot.jl, we'll avoid type piracy by renaming / not exporting.
const plt_funcs = (:acorr,:annotate,:arrow,:autoscale,:autumn,:axes,:axhline,:axhspan,:axis,:axline,:axvline,:axvspan,:bar,:barbs,:barh,:bone,:box,:boxplot,:broken_barh,:cla,:clabel,:clf,:clim,:cohere,:colorbar,:colors,:connect,:contour,:contourf,:cool,:copper,:csd,:delaxes,:disconnect,:draw,:errorbar,:eventplot,:figaspect,:figimage,:figlegend,:figtext,:fill_between,:fill_betweenx,:findobj,:flag,:gca,:gci,:get_current_fig_manager,:get_figlabels,:get_fignums,:get_plot_commands,:ginput,:gray,:grid,:hexbin,:hlines,:hold,:hot,:hsv,:imread,:imsave,:imshow,:ioff,:ion,:ishold,:jet,:legend,:locator_params,:loglog,:margins,:matshow,:minorticks_off,:minorticks_on,:over,:pause,:pcolor,:pcolormesh,:pie,:pink,:plot,:plot_date,:plotfile,:polar,:prism,:psd,:quiver,:quiverkey,:rc,:rc_context,:rcdefaults,:rgrids,:savefig,:sca,:scatter,:sci,:semilogx,:semilogy,:set_cmap,:setp,:specgram,:spectral,:spring,:spy,:stackplot,:stem,:streamplot,:subplot,:subplot2grid,:subplot_tool,:subplots,:subplots_adjust,:summer,:suptitle,:table,:text,:thetagrids,:tick_params,:ticklabel_format,:tight_layout,:title,:tricontour,:tricontourf,:tripcolor,:triplot,:twinx,:twiny,:vlines,:waitforbuttonpress,:winter,:xkcd,:xlabel,:xlim,:xscale,:xticks,:ylabel,:ylim,:yscale,:yticks,:hist,:xcorr,:isinteractive)
for f in plt_funcs
sf = string(f)
@eval @doc LazyHelp(pyplot,$sf) function $f(args...; kws...)
if !pyhasattr(pyplot, $sf)
error("matplotlib ", version, " does not have pyplot.", $sf)
end
return pycall(pyplot.$sf, args...; kws...)
end
end
# rename to avoid type piracy:
@doc LazyHelp(pyplot,"step") plotstep(x, y; kws...) = pycall(pyplot.step, x, y; kws...)
# rename to avoid type piracy:
@doc LazyHelp(pyplot,"show") plotshow(; kws...) = begin pycall(pyplot.show; kws...); nothing; end
Base.close(f::Figure) = plotclose(f)
# rename to avoid type piracy:
@doc LazyHelp(pyplot,"close") plotclose() = pyplot.close()
plotclose(f::Figure) = pyconvert(Union{Nothing,Int}, plotclose(pyconvert(Int, f.number)))
function plotclose(f::Integer)
pop!(withfig_fignums, f, f)
pyplot.close(f)
end
plotclose(f::AbstractString) = pyplot.close(f)
# rename to avoid type piracy:
@doc LazyHelp(pyplot,"fill") plotfill(x::AbstractArray,y::AbstractArray, args...; kws...) =
pycall(pyplot.fill, PyAny, x, y, args...; kws...)
# consistent capitalization with mplot3d
@doc LazyHelp(pyplot,"hist2d") hist2D(args...; kws...) = pycall(pyplot.hist2d, args...; kws...)
# allow them to be accessed via their original names foo
# as PythonPlot.foo … this also means that we must be careful
# to use them as Base.foo in this module as needed!
const close = plotclose
const fill = plotfill
const show = plotshow
const step = plotstep
include("colormaps.jl")
###########################################################################
# Support array of string labels in bar chart
function bar(x::AbstractVector{<:AbstractString}, y; kws_...)
kws = Dict{Any,Any}(kws_)
xi = 1:length(x)
if !any(==(:align), keys(kws))
kws[:align] = "center"
end
p = bar(xi, y; kws...)
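# put the category labels on the y-axis for horizontal bars, on the x-axis otherwise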
ax = any(kw -> kw[1] == :orientation && lowercase(kw[2]) == "horizontal",
pairs(kws)) ? gca().yaxis : gca().xaxis
ax.set_ticks(xi)
ax.set_ticklabels(x)
return p
end
bar(x::AbstractVector{T}, y; kws...) where {T<:Symbol} =
bar(map(string, x), y; kws...)
###########################################################################
# Allow plots with 2 independent variables (contour, surf, ...)
# to accept either 2 1d arrays or a row vector and a 1d array,
# to simplify construction of such plots via broadcasting operations.
# (Matplotlib is inconsistent about this.)
include("plot3d.jl")
for f in (:contour, :contourf)
@eval function $f(X::AbstractMatrix, Y::AbstractVector, args...; kws...)
if size(X,1) == 1 || size(X,2) == 1
$f(reshape(X, length(X)), Y, args...; kws...)
elseif size(X,1) > 1 && size(X,2) > 1 && isempty(args)
$f(X; levels=Y, kws...) # treat Y as contour levels
else
throw(ArgumentError("if 2nd arg is column vector, 1st arg must be row or column vector"))
end
end
end
for f in (:surf,:mesh,:plot_surface,:plot_wireframe,:contour3D,:contourf3D)
@eval begin
function $f(X::AbstractVector, Y::AbstractVector, Z::AbstractMatrix, args...; kws...)
m, n = length(X), length(Y)
$f(repeat(transpose(X),outer=(n,1)), repeat(Y,outer=(1,m)), Z, args...; kws...)
end
function $f(X::AbstractMatrix, Y::AbstractVector, Z::AbstractMatrix, args...; kws...)
if size(X,1) != 1 && size(X,2) != 1
throw(ArgumentError("if 2nd arg is column vector, 1st arg must be row or column vector"))
end
m, n = length(X), length(Y)
$f(repeat(transpose(X),outer=(n,1)), repeat(Y,outer=(1,m)), Z, args...; kws...)
end
end
end
# Already work: barbs, pcolor, pcolormesh, quiver
# Matplotlib pcolor* functions accept 1d arrays but not ranges
for f in (:pcolor, :pcolormesh)
@eval begin
$f(X::AbstractRange, Y::AbstractRange, args...; kws...) = $f([X...], [Y...], args...; kws...)
$f(X::AbstractRange, Y::AbstractArray, args...; kws...) = $f([X...], Y, args...; kws...)
$f(X::AbstractArray, Y::AbstractRange, args...; kws...) = $f(X, [Y...], args...; kws...)
end
end
###########################################################################
# a more pure functional style, that returns the figure but does *not*
# have any display side-effects. Mainly for use with @manipulate (Interact.jl)
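# usage sketch: withfig(fig) do; plot(rand(10)); end # draws into fig without auto-display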
function withfig(actions::Function, f::Figure; clear=true)
ax_save = gca()
push!(withfig_fignums, pyconvert(Int, f.number))
figure(f.number)
finalizer(plotclose, f)
try
if clear && !isempty(f)
clf()
end
actions()
catch
rethrow()
finally
try
sca(ax_save) # may fail if axes were overwritten
catch
end
isdefined(Main, :IJulia) && Main.IJulia.undisplay(f)
end
return f
end
###########################################################################
using LaTeXStrings
export LaTeXString, latexstring, @L_str, @L_mstr
end # module PythonPlot
| PythonPlot | https://github.com/JuliaPy/PythonPlot.jl.git |
|
[
"MIT"
] | 1.0.5 | 6ec334b52f9547f84fde749e512a97d0372ae395 | code | 7917 | # Conveniences for working with and displaying matplotlib colormaps,
# integrating with the Julia Colors package
using Colors
export ColorMap, get_cmap, register_cmap, get_cmaps
########################################################################
# Wrapper around colors.Colormap type:
mutable struct ColorMap
o::Py
end
PythonCall.Py(c::ColorMap) = getfield(c, :o)
PythonCall.pyconvert(::Type{ColorMap}, o::Py) = ColorMap(o)
Base.:(==)(c::ColorMap, g::ColorMap) = pyconvert(Bool, Py(c) == Py(g))
Base.isequal(c::ColorMap, g::ColorMap) = isequal(Py(c), Py(g))
Base.hash(c::ColorMap, h::UInt) = hash(Py(c), h)
PythonCall.pycall(c::ColorMap, args...; kws...) = pycall(Py(c), args...; kws...)
(c::ColorMap)(args...; kws...) = pycall(Py(c), args...; kws...)
Base.Docs.doc(c::ColorMap) = Base.Docs.Text(pyconvert(String, Py(c).__doc__))
# Note: using `Union{Symbol,String}` produces ambiguity.
Base.getproperty(c::ColorMap, s::Symbol) = getproperty(Py(c), s)
Base.getproperty(c::ColorMap, s::AbstractString) = getproperty(Py(c), Symbol(s))
Base.setproperty!(c::ColorMap, s::Symbol, x) = setproperty!(Py(c), s, x)
Base.setproperty!(c::ColorMap, s::AbstractString, x) = setproperty!(Py(c), Symbol(s), x)
Base.propertynames(c::ColorMap) = propertynames(Py(c))
Base.hasproperty(c::ColorMap, s::Union{Symbol,AbstractString}) = pyhasattr(Py(c), s)
function Base.show(io::IO, c::ColorMap)
print(io, "ColorMap \"$(pyconvert(String, c.name))\"")
end
# all Python dependencies must be initialized at runtime (not when precompiled)
const colorsm = PythonCall.pynew()
const cm = PythonCall.pynew()
const LinearSegmentedColormap = PythonCall.pynew()
const cm_get_cmap = PythonCall.pynew()
const cm_register_cmap = PythonCall.pynew()
const ScalarMappable = PythonCall.pynew()
const Normalize01 = PythonCall.pynew()
function init_colormaps()
PythonCall.pycopy!(colorsm, pyimport("matplotlib.colors"))
PythonCall.pycopy!(cm, pyimport("matplotlib.cm"))
# pytype_mapping(colorsm.Colormap, ColorMap)
PythonCall.pycopy!(LinearSegmentedColormap, colorsm.LinearSegmentedColormap)
PythonCall.pycopy!(cm_get_cmap, pyhasattr(pyplot, "get_cmap") ? pyplot.get_cmap : cm.get_cmap)
PythonCall.pycopy!(cm_register_cmap, pyhasattr(matplotlib, "colormaps") && pyhasattr(matplotlib.colormaps, "register") ? matplotlib.colormaps.register : cm.register_cmap)
PythonCall.pycopy!(ScalarMappable, cm.ScalarMappable)
PythonCall.pycopy!(Normalize01, pycall(colorsm.Normalize; vmin=0,vmax=1))
end
########################################################################
# ColorMap constructors via colors.LinearSegmentedColormap
# most general constructors using RGB arrays of triples, defined
# as for matplotlib.colors.LinearSegmentedColormap
ColorMap(name::Union{AbstractString,Symbol},
r::AbstractVector{Tuple{T,T,T}},
g::AbstractVector{Tuple{T,T,T}},
b::AbstractVector{Tuple{T,T,T}},
n=max(256,length(r),length(g),length(b)), gamma=1.0) where {T<:Real} =
ColorMap(name, r,g,b, Array{Tuple{T,T,T}}(undef, 0), n, gamma)
# as above, but also passing an alpha array
function ColorMap(name::Union{AbstractString,Symbol},
r::AbstractVector{Tuple{T,T,T}},
g::AbstractVector{Tuple{T,T,T}},
b::AbstractVector{Tuple{T,T,T}},
a::AbstractVector{Tuple{T,T,T}},
n=max(256,length(r),length(g),length(b),length(a)),
gamma=1.0) where T<:Real
segmentdata = Dict("red" => pylist(r), "green" => pylist(g), "blue" => pylist(b))
if !isempty(a)
segmentdata["alpha"] = pylist(a)
end
ColorMap(LinearSegmentedColormap(name, segmentdata, n, gamma))
end
# create from an array c, assuming linear mapping from [0,1] to c
function ColorMap(name::Union{AbstractString,Symbol},
c::AbstractVector{T}, n=max(256, length(c)), gamma=1.0) where T<:Colorant
nc = length(c)
if nc == 0
throw(ArgumentError("ColorMap requires a non-empty Colorant array"))
end
r = Array{Tuple{Float64,Float64,Float64}}(undef, nc)
g = similar(r)
b = similar(r)
a = T <: TransparentColor ?
similar(r) : Array{Tuple{Float64,Float64,Float64}}(undef, 0)
for i = 1:nc
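# x is the normalized position of color i in [0, 1], as LinearSegmentedColormap expects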
x = (i-1) / (nc-1)
if T <: TransparentColor
rgba = convert(RGBA{Float64}, c[i])
r[i] = (x, rgba.r, rgba.r)
b[i] = (x, rgba.b, rgba.b)
g[i] = (x, rgba.g, rgba.g)
a[i] = (x, rgba.alpha, rgba.alpha)
else
rgb = convert(RGB{Float64}, c[i])
r[i] = (x, rgb.r, rgb.r)
b[i] = (x, rgb.b, rgb.b)
g[i] = (x, rgb.g, rgb.g)
end
end
ColorMap(name, r,g,b,a, n, gamma)
end
ColorMap(c::AbstractVector{T},
n=max(256, length(c)), gamma=1.0) where {T<:Colorant} =
ColorMap(string("cm_", hash(c)), c, n, gamma)
function ColorMap(name::Union{AbstractString,Symbol}, c::AbstractMatrix{T},
n=max(256, size(c,1)), gamma=1.0) where T<:Real
if size(c,2) == 3
return ColorMap(name,
[RGB{T}(c[i,1],c[i,2],c[i,3]) for i in 1:size(c,1)],
n, gamma)
elseif size(c,2) == 4
return ColorMap(name,
[RGBA{T}(c[i,1],c[i,2],c[i,3],c[i,4])
for i in 1:size(c,1)],
n, gamma)
else
throw(ArgumentError("color matrix must have 3 or 4 columns"))
end
end
ColorMap(c::AbstractMatrix{T}, n=max(256, size(c,1)), gamma=1.0) where {T<:Real} =
ColorMap(string("cm_", hash(c)), c, n, gamma)
########################################################################
@doc LazyHelp(cm_get_cmap) get_cmap() = ColorMap(cm_get_cmap())
get_cmap(name::AbstractString) = ColorMap(cm_get_cmap(name))
get_cmap(name::AbstractString, lut::Integer) = ColorMap(cm_get_cmap(name, lut))
get_cmap(c::ColorMap) = c
ColorMap(name::AbstractString) = get_cmap(name)
@doc LazyHelp(cm_register_cmap) register_cmap(c::ColorMap) = cm_register_cmap(c)
register_cmap(n::AbstractString, c::ColorMap) = cm_register_cmap(n,c)
# convenience function to get array of registered colormaps
get_cmaps() =
ColorMap[get_cmap(c) for c in
sort(filter!(c -> !endswith(c, "_r"),
[pyconvert(String, c) for c in cm.datad]),
by=lowercase)]
########################################################################
# display of ColorMaps as a horizontal color bar in SVG
function Base.show(io::IO, ::MIME"image/svg+xml", cs::AbstractVector{ColorMap})
n = 256
nc = length(cs)
a = range(0; stop=1, length=n)
namelen = mapreduce(c -> length(c.name), max, cs)
width = 0.5
height = 5
pad = 0.5
write(io,
"""
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" version="1.1"
width="$(n*width+1+namelen*4)mm" height="$((height+pad)*nc)mm"
shape-rendering="crispEdges">
""")
for j = 1:nc
c = cs[j]
y = (j-1) * (height+pad)
write(io, """<text x="$(n*width+1)mm" y="$(y+3.8)mm" font-size="3mm">$(c.name)</text>""")
rgba = PyArray(pycall(ScalarMappable; cmap=c, norm=Normalize01).to_rgba(a))
for i = 1:n
write(io, """<rect x="$((i-1)*width)mm" y="$(y)mm" width="$(width)mm" height="$(height)mm" fill="#$(hex(RGB(rgba[i,1],rgba[i,2],rgba[i,3])))" stroke="none" />""")
end
end
write(io, "</svg>")
end
function Base.show(io::IO, m::MIME"image/svg+xml", c::ColorMap)
Base.show(io, m, [c])
end
function Base.show(io::IO, m::MIME"text/html", c::ColorMap)
Base.show(io, m, Py(c))
end
########################################################################
| PythonPlot | https://github.com/JuliaPy/PythonPlot.jl.git |
|
[
"MIT"
] | 1.0.5 | 6ec334b52f9547f84fde749e512a97d0372ae395 | code | 7990 | # PythonPlot initialization — the hardest part is finding a working backend.
using VersionParsing
###########################################################################
# global PyObject constants that get initialized at runtime. We
# initialize them here (rather than via "global foo = ..." in __init__)
# so that their type is known at compile-time.
const matplotlib = PythonCall.pynew()
const pyplot = PythonCall.pynew()
const Gcf = PythonCall.pynew()
const orig_draw = PythonCall.pynew()
const orig_gcf = PythonCall.pynew()
const orig_figure = PythonCall.pynew()
const orig_show = PythonCall.pynew()
###########################################################################
# file formats supported by Agg backend, from MIME types
const aggformats = Dict("application/eps" => "eps",
"image/eps" => "eps",
"application/pdf" => "pdf",
"image/png" => "png",
"application/postscript" => "ps",
"image/svg+xml" => "svg")
# In 0.6, TextDisplay can show e.g. image/svg+xml as text (#281).
# Any "real" graphical display should support PNG, I hope.
isdisplayok() = displayable(MIME("image/png"))
###########################################################################
# We allow the user to turn on or off the Python gui interactively via
# pygui(true/false). This is done by loading pyplot with a GUI backend
# if possible, then switching to a Julia-display backend (if available)
# like get(dict, key, default), but treats a value of "nothing" as a missing key
function getnone(dict, key, default)
ret = get(dict, key, default)
return ret === nothing ? default : ret
end
# return (backend,gui) tuple
function find_backend(matplotlib::Py)
gui2matplotlib = Dict(:wx=>"WXAgg",:gtk=>"GTKAgg",:gtk3=>"GTK3Agg",
:qt_pyqt4=>"Qt4Agg", :qt_pyqt5=>"Qt5Agg",
:qt_pyside=>"Qt4Agg", :qt4=>"Qt4Agg",
:qt5=>"Qt5Agg", :qt=>"Qt4Agg",:tk=>"TkAgg")
if Sys.islinux()
guis = [:tk, :gtk3, :gtk, :qt5, :qt4, :wx]
else
guis = [:tk, :qt5, :qt4, :wx, :gtk, :gtk3]
end
options = [(g,gui2matplotlib[g]) for g in guis]
matplotlib2gui = Dict("wx"=>:wx, "wxagg"=>:wx,
"gtkagg"=>:gtk, "gtk"=>:gtk,"gtkcairo"=>:gtk,
"gtk3agg"=>:gtk3, "gtk3"=>:gtk3,"gtk3cairo"=>:gtk3,
"qt5agg"=>:qt5, "qt4agg"=>:qt4, "tkagg"=>:tk,
"agg"=>:none,"ps"=>:none,"pdf"=>:none,
"svg"=>:none,"cairo"=>:none,"gdk"=>:none,
"module://gr.matplotlib.backend_gr"=>:gr)
qt2gui = Dict("pyqt5"=>:qt_pyqt5, "pyqt4"=>:qt_pyqt4, "pyside"=>:qt_pyside)
rcParams = PyDict{Any,Any}(matplotlib.rcParams)
default = lowercase(get(ENV, "MPLBACKEND",
getnone(rcParams, "backend", "none")))
if haskey(matplotlib2gui,default)
defaultgui = matplotlib2gui[default]
insert!(options, 1, (defaultgui,default))
end
try
# We will get an exception when we import pyplot below (on
# Unix) if an X server is not available, even though
# pygui_works and matplotlib.use(backend) succeed, at
# which point it will be too late to switch backends. So,
# throw exception (drop to catch block below) if DISPLAY
# is not set. [Might be more reliable to test
# success(`xdpyinfo`), but only if xdpyinfo is installed.]
if options[1][1] != :none && Sys.isunix() && !Sys.isapple()
ENV["DISPLAY"]
end
global gui
if gui === :default
# try to ensure that GUI both exists and has a matplotlib backend
for (g,b) in options
if g == :none # Matplotlib is configured to be non-interactive
pygui(:default)
matplotlib.use(b)
matplotlib.interactive(false)
return (b, g)
elseif g == :gr
return (b, g)
elseif pygui_works(g)
# must call matplotlib.use *before* loading backends module
matplotlib.use(b)
if g == :qt || g == :qt4
g = qt2gui[lowercase(rcParams["backend.qt4"])]
if !pyexists("PyQt5") && !pyexists("PyQt4")
# both Matplotlib and PyCall default to PyQt4
# if it is available, but we need to tell
# Matplotlib to use PySide otherwise.
rcParams["backend.qt4"] = "PySide"
end
end
if pyexists("matplotlib.backends.backend_" * lowercase(b))
isjulia_display[] || pygui_start(g)
matplotlib.interactive(!isjulia_display[] && Base.isinteractive())
return (b, g)
end
end
end
error("no gui found") # go to catch clause below
else # the user specified a desired backend via pygui(gui)
gui = pygui()
matplotlib.use(gui2matplotlib[gui])
if (gui==:qt && !pyexists("PyQt5") && !pyexists("PyQt4")) || gui==:qt_pyside
rcParams["backend.qt4"] = "PySide"
end
isjulia_display[] || pygui_start(gui)
matplotlib.interactive(!isjulia_display[] && Base.isinteractive())
return (gui2matplotlib[gui], gui)
end
catch e
@warn(e) # provide information for debugging why a gui was not found
if !isjulia_display[]
@warn("No working GUI backend found for matplotlib")
isjulia_display[] = true
end
pygui(:default)
matplotlib.use("Agg") # GUI not available
matplotlib.interactive(false)
return ("Agg", :none)
end
end
# declare more globals created in __init__
const isjulia_display = Ref(true)
version = v"0.0.0"
backend = "Agg"
gui = :default
# initialization -- anything that depends on Python has to go here,
# so that it occurs at runtime (while the rest of PythonPlot can be precompiled).
function __init__()
ccall(:jl_generating_output, Cint, ()) == 1 && return nothing
isjulia_display[] = isdisplayok()
PythonCall.pycopy!(matplotlib, pyimport("matplotlib"))
mvers = pyconvert(String, matplotlib.__version__)
global version = try
vparse(mvers)
catch
v"0.0.0" # fallback
end
backend_gui = find_backend(matplotlib)
# workaround JuliaLang/julia#8925
global backend = backend_gui[1]
global gui = backend_gui[2]
PythonCall.pycopy!(pyplot, pyimport("matplotlib.pyplot")) # raw Python module
PythonCall.pycopy!(Gcf, pyimport("matplotlib._pylab_helpers").Gcf)
PythonCall.pycopy!(orig_gcf, pyplot.gcf)
PythonCall.pycopy!(orig_figure, pyplot.figure)
pyplot.gcf = gcf
pyplot.figure = figure
if isdefined(Main, :IJulia) && Main.IJulia.inited
Main.IJulia.push_preexecute_hook(force_new_fig)
Main.IJulia.push_postexecute_hook(display_figs)
Main.IJulia.push_posterror_hook(close_figs)
end
if isjulia_display[] && gui != :gr && backend != "Agg"
pyplot.switch_backend("Agg")
pyplot.ioff()
end
init_colormaps()
end
function pygui(b::Bool)
    # do nothing unless the requested GUI state differs from the current one
    if !b != isjulia_display[]
if backend != "Agg"
pyplot.switch_backend(b ? backend : "Agg")
if b
pygui_start(gui) # make sure event loop is started
Base.isinteractive() && pyplot.ion()
else
pyplot.ioff()
end
elseif b
error("No working GUI backend found for matplotlib.")
end
isjulia_display[] = !b
end
return b
end
| PythonPlot | https://github.com/JuliaPy/PythonPlot.jl.git |
|
[
"MIT"
] | 1.0.5 | 6ec334b52f9547f84fde749e512a97d0372ae395 | code | 3630 | ###########################################################################
# Lazy wrapper around a Py to load a module on demand.
mutable struct LazyPyModule
name::String
o::Py
LazyPyModule(n::AbstractString) = new(n, PythonCall.pynew())
end
_ispynull(x::Py) = PythonCall.getptr(x) == PythonCall.C.PyNULL
PythonCall.Py(m::LazyPyModule) = _ispynull(getfield(m, :o)) ? PythonCall.pycopy!(getfield(m, :o), pyimport(getfield(m, :name))) : getfield(m, :o)
Base.Docs.doc(m::LazyPyModule) = Base.Docs.Text(pyconvert(String, Py(m).__doc__))
Base.getproperty(m::LazyPyModule, x::Symbol) = getproperty(Py(m), x)
Base.setproperty!(m::LazyPyModule, x::Symbol, v) = setproperty!(Py(m), x, v)
Base.hasproperty(m::LazyPyModule, x::Symbol) = pyhasattr(Py(m), x)
Base.propertynames(m::LazyPyModule) = propertynames(Py(m))
###########################################################################
# Lazily load mplot3d modules. This (slightly) improves load time of PythonPlot,
# and it also allows PythonPlot to load on systems where mplot3d is not installed.
const axes3D = LazyPyModule("mpl_toolkits.mplot3d.axes3d")
const art3D = LazyPyModule("mpl_toolkits.mplot3d.art3d")
"""
using3D()
This function ensures that the `mplot3d` module is loaded for 3d
plotting. This occurs automatically if you call any of the
3d plotting functions like `plot3D` or `surf`, but it may be
necessary to call this function manually if you are passing
`projection="3d"` explicitly to axes or subplot objects.
"""
using3D() = (Py(axes3D); nothing)
###########################################################################
# 3d plotting functions from mplot3d
export art3D, Axes3D, using3D, surf, mesh, bar3D, contour3D, contourf3D, plot3D, plot_surface, plot_trisurf, plot_wireframe, scatter3D, text2D, text3D, zlabel, zlim, zscale, zticks
const mplot3d_funcs = (:bar3d, :contour3D, :contourf3D, :plot3D, :plot_surface,
:plot_trisurf, :plot_wireframe, :scatter3D,
:text2D, :text3D, :view_init, :voxels)
function gca3d()
using3D() # make sure mplot3d is loaded
return version <= v"3.4" ? gca(projection="3d") : pyplot.subplot(gca().get_subplotspec(), projection="3d")
end
for f in mplot3d_funcs
fs = string(f)
@eval @doc LazyHelp(axes3D,"Axes3D", $fs) function $f(args...; kws...)
pycall(gca3d().$fs, args...; kws...)
end
end
@doc LazyHelp(axes3D,"Axes3D") Axes3D(args...; kws...) = pycall(axes3D.Axes3D, args...; kws...)
# correct for annoying mplot3d inconsistency
@doc LazyHelp(axes3D,"Axes3D", "bar3d") bar3D(args...; kws...) = bar3d(args...; kws...)
# it's annoying to have xlabel etc. but not zlabel
const zlabel_funcs = (:zlabel, :zlim, :zscale, :zticks)
for f in zlabel_funcs
fs = string("set_", f)
@eval @doc LazyHelp(axes3D,"Axes3D", $fs) function $f(args...; kws...)
pycall(gca3d().$fs, args...; kws...)
end
end
# export Matlab-like names
function surf(Z::AbstractMatrix; kws...)
plot_surface([1:size(Z,1);]*ones(1,size(Z,2)),
ones(size(Z,1))*[1:size(Z,2);]', Z; kws...)
end
@doc LazyHelp(axes3D,"Axes3D", "plot_surface") function surf(X, Y, Z::AbstractMatrix, args...; kws...)
plot_surface(X, Y, Z, args...; kws...)
end
function surf(X, Y, Z::AbstractVector, args...; kws...)
plot_trisurf(X, Y, Z, args...; kws...)
end
@doc LazyHelp(axes3D,"Axes3D", "plot_wireframe") mesh(args...; kws...) = plot_wireframe(args...; kws...)
function mesh(Z::AbstractMatrix; kws...)
plot_wireframe([1:size(Z,1);]*ones(1,size(Z,2)),
ones(size(Z,1))*[1:size(Z,2);]', Z; kws...)
end
| PythonPlot | https://github.com/JuliaPy/PythonPlot.jl.git |
|
[
"MIT"
] | 1.0.5 | 6ec334b52f9547f84fde749e512a97d0372ae395 | code | 9607 | # replacement for PyCall pygui stuff
# GUI event loops and toolkit integration for Python, most importantly
# to support plotting in a window without blocking. Currently, we
# support wxWidgets, Qt5, Qt4, and GTK+.
############################################################################
# global variable to specify default GUI toolkit to use
gui = :default # one of :default, :wx, :gtk, :gtk3, :tk, or a :qt variant
# for some reason, `util` is not always imported as an attribute to `importlib`:
#PythonCall.PyException(<py AttributeError("module 'importlib' has no attribute 'util'")>)
pyexists(mod::AbstractString) = pyconvert(Bool, pyimport("importlib.util").find_spec(mod) != Py(nothing))
pygui_works(gui::Symbol) = gui == :default ||
((gui == :wx && pyexists("wx")) ||
(gui == :gtk && pyexists("gtk")) ||
(gui == :gtk3 && pyexists("gi")) ||
(gui == :tk && pyexists("tkinter")) ||
(gui == :qt_pyqt4 && pyexists("PyQt4")) ||
(gui == :qt_pyqt5 && pyexists("PyQt5")) ||
(gui == :qt_pyside && pyexists("PySide")) ||
(gui == :qt_pyside2 && pyexists("PySide2")) ||
(gui == :qt4 && (pyexists("PyQt4") || pyexists("PySide"))) ||
(gui == :qt5 && (pyexists("PyQt5") || pyexists("PySide2"))) ||
(gui == :qt && (pyexists("PyQt5") || pyexists("PyQt4") || pyexists("PySide") || pyexists("PySide2"))))
# get or set the default GUI; doesn't affect running GUI
"""
pygui()
Return the current GUI toolkit as a symbol.
"""
function pygui()
global gui
if gui::Symbol != :default && pygui_works(gui::Symbol)
return gui::Symbol
else
for g in (:tk, :qt, :wx, :gtk, :gtk3)
if pygui_works(g)
gui = g
return gui::Symbol
end
end
error("No supported Python GUI toolkit is installed.")
end
end
function pygui(g::Symbol)
global gui
if g != gui::Symbol
if !(g in (:wx,:gtk,:gtk3,:tk,:qt,:qt4,:qt5,:qt_pyqt5,:qt_pyqt4,:qt_pyside,:qt_pyside2,:default))
throw(ArgumentError("invalid gui $g"))
elseif !pygui_works(g)
error("Python GUI toolkit for $g is not installed.")
end
gui = g
end
return g::Symbol
end
############################################################################
# Event loops for various toolkits.
# call doevent(status) every sec seconds
function install_doevent(doevent, sec::Real)
return Base.Timer(doevent, sec, interval=sec)
end
# For PyPlot issue #181: recent pygobject releases emit a warning
# if we don't specify which version we want:
function gtk_requireversion(gtkmodule::AbstractString, vers::VersionNumber=v"3.0")
if startswith(gtkmodule, "gi.")
gi = pyimport("gi")
if pyconvert(Bool, gi.get_required_version("Gtk") == Py(nothing))
gi.require_version("Gtk", string(vers))
end
end
end
# GTK (either GTK2 or GTK3, depending on gtkmodule):
function gtk_eventloop(gtkmodule::AbstractString, sec::Real=50e-3)
gtk_requireversion(gtkmodule)
gtk = pyimport(gtkmodule)
events_pending = gtk.events_pending
main_iteration = gtk.main_iteration
install_doevent(sec) do async
# handle all pending
while pyconvert(Bool, events_pending())
main_iteration()
end
end
end
# As discussed in PyPlot.jl#278, Qt looks for a file qt.conf in
# the same path as the running executable, which tells it where
# to find plugins etcetera. For Python's Qt, however, this will
# be in the path of the python executable, not julia. Furthermore,
# we can't copy it to the location of julia (even if that location
# is writable) because that would assume that all julia programs
# use the same version of Qt (e.g. via the same Python), which
# is not necessarily the case, and even then it wouldn't work
# for other programs linking libjulia. Unfortunately, there
# seems to be no way to change this. However, we can at least
# use set QT_PLUGIN_PATH by parsing qt.conf ourselves, and
# this seems to fix some of the path-related problems on Windows.
# ... unfortunately, it seems fixqtpath has to be called before
# the Qt library is loaded.
function fixqtpath(qtconf=joinpath(dirname(pyprogramname),"qt.conf"))
haskey(ENV, "QT_PLUGIN_PATH") && return false
if isfile(qtconf)
for line in eachline(qtconf)
m = match(r"^\s*prefix\s*=(.*)$"i, line)
if m !== nothing
dir = strip(m.captures[1])
if startswith(dir, '"') && endswith(dir, '"')
dir = dir[2:end-1]
end
plugin_path = joinpath(dir, "plugins")
if isdir(plugin_path)
# for some reason I don't understand,
# if libpython has already been loaded, then
# we need to use Python's setenv rather than Julia's
pyimport("os").environ["QT_PLUGIN_PATH"] = realpath(plugin_path)
return true
end
end
end
end
return false
end
# Qt: (PyQt5, PyQt4, or PySide module)
function qt_eventloop(QtCore::Py, sec::Real=50e-3)
# `fixqtpath()` seems to not be working,
# https://github.com/JuliaPy/PythonPlot.jl/issues/17
#fixqtpath()
instance = QtCore.QCoreApplication.instance
AllEvents = QtCore.QEventLoop.AllEvents
processEvents = QtCore.QCoreApplication.processEvents
pop!(ENV, "QT_PLUGIN_PATH", "") # clean up environment
maxtime = Py(50)
pynothing = Py(nothing)
install_doevent(sec) do async
app = instance()
if pyconvert(Bool, app != pynothing)
app._in_event_loop = true
processEvents(AllEvents, maxtime)
end
end
end
qt_eventloop(QtModule::AbstractString, sec::Real=50e-3) =
qt_eventloop(pyimport("$QtModule.QtCore"), sec)
function qt_eventloop(sec::Real=50e-3)
for QtModule in ("PyQt5", "PyQt4", "PySide", "PySide2")
try
return qt_eventloop(QtModule, sec)
catch
end
end
error("no Qt module found")
end
# wx: (based on IPython/lib/inputhookwx.py, which is 3-clause BSD-licensed)
function wx_eventloop(sec::Real=50e-3)
wx = pyimport("wx")
GetApp = wx.GetApp
EventLoop = wx.EventLoop
EventLoopActivator = wx.EventLoopActivator
pynothing = Py(nothing)
install_doevent(sec) do async
app = GetApp()
if pyconvert(Bool, app != pynothing)
app._in_event_loop = true
evtloop = EventLoop()
ea = EventLoopActivator(evtloop)
Pending = evtloop.Pending
Dispatch = evtloop.Dispatch
while pyconvert(Bool, Pending())
Dispatch()
end
finalize(ea) # deactivate event loop
app.ProcessIdle()
end
end
end
# Tk: (Tkinter/tkinter module)
# based on https://github.com/ipython/ipython/blob/7.0.1/IPython/terminal/pt_inputhooks/tk.py
function Tk_eventloop(sec::Real=50e-3)
Tk = pyimport("tkinter")
_tkinter = pyimport("_tkinter")
flag = _tkinter.ALL_EVENTS | _tkinter.DONT_WAIT
root = pynothing = Py(nothing)
install_doevent(sec) do async
new_root = Tk._default_root
if pyconvert(Bool, new_root != pynothing)
root = new_root
end
if pyconvert(Bool, root != pynothing)
while pyconvert(Bool, root.dooneevent(flag))
end
end
end
end
# cache running event loops (so that we don't start any more than once)
const eventloops = Dict{Symbol,Timer}()
"""
pygui_start(gui::Symbol = pygui())
Start the event loop of a certain toolkit.
The argument `gui` defaults to the current default GUI, but it could be `:wx`, `:gtk`, `:gtk3`, `:tk`, or `:qt`.
"""
function pygui_start(gui::Symbol=pygui(), sec::Real=50e-3)
pygui(gui)
if !haskey(eventloops, gui)
if gui == :wx
eventloops[gui] = wx_eventloop(sec)
elseif gui == :gtk
eventloops[gui] = gtk_eventloop("gtk", sec)
elseif gui == :gtk3
eventloops[gui] = gtk_eventloop("gi.repository.Gtk", sec)
elseif gui == :tk
eventloops[gui] = Tk_eventloop(sec)
elseif gui == :qt_pyqt4
eventloops[gui] = qt_eventloop("PyQt4", sec)
elseif gui == :qt_pyqt5
eventloops[gui] = qt_eventloop("PyQt5", sec)
elseif gui == :qt_pyside
eventloops[gui] = qt_eventloop("PySide", sec)
elseif gui == :qt_pyside2
eventloops[gui] = qt_eventloop("PySide2", sec)
elseif gui == :qt4
try
eventloops[gui] = qt_eventloop("PyQt4", sec)
catch
eventloops[gui] = qt_eventloop("PySide", sec)
end
elseif gui == :qt5
try
eventloops[gui] = qt_eventloop("PyQt5", sec)
catch
eventloops[gui] = qt_eventloop("PySide2", sec)
end
elseif gui == :qt
eventloops[gui] = qt_eventloop(sec)
else
throw(ArgumentError("unsupported GUI type $gui"))
end
end
gui
end
"""
pygui_stop(gui::Symbol = pygui())
Stop any running event loop for gui. The `gui` argument defaults to current default GUI.
"""
function pygui_stop(gui::Symbol=pygui())
if haskey(eventloops, gui)
Base.close(pop!(eventloops, gui))
true
else
false
end
end
pygui_stop_all() = for gui in keys(eventloops); pygui_stop(gui); end
############################################################################
| PythonPlot | https://github.com/JuliaPy/PythonPlot.jl.git |
|
[
"MIT"
] | 1.0.5 | 6ec334b52f9547f84fde749e512a97d0372ae395 | code | 1771 | ENV["MPLBACKEND"]="agg" # no GUI
using PythonPlot, PythonCall, Test
pyversion = pyconvert(String, pyimport("sys").version)
@info("PythonPlot is using Matplotlib $(PythonPlot.version) with Python $pyversion")
plot(1:5, 2:6, "ro-")
line = gca().lines[0]
@test PyArray(line.get_xdata()) == 1:5
@test PyArray(line.get_ydata()) == 2:6
fig = gcf()
@test isa(fig, PythonPlot.Figure)
if PythonPlot.version >= v"2"
@test PyArray(fig.get_size_inches()) ≈ [6.4, 4.8]
else # matplotlib 1.3
@test PyArray(fig.get_size_inches()) ≈ [8, 6]
end
# with Matplotlib 1.3, I get "UserWarning: bbox_inches option for ps backend is not implemented yet"
if PythonPlot.version >= v"2"
s = sprint(show, "application/postscript", fig);
# m = match(r"%%BoundingBox: *([0-9]+) +([0-9]+) +([0-9]+) +([0-9]+)", s)
m = match(r"%%BoundingBox: *([0-9]+\.?[0-9]*) +([0-9]+\.?[0-9]*) +([0-9]+\.?[0-9]*) +([0-9]+\.?[0-9]*)", s)
@test m !== nothing
boundingbox = map(s -> parse(Float64, s), m.captures)
@info("got plot bounding box ", boundingbox)
@test all([300, 200] .< boundingbox[3:4] - boundingbox[1:2] .< [450,350])
end
c = get_cmap("RdBu")
a = 0.0:0.25:1.0
rgba = PyArray(pycall(PythonPlot.ScalarMappable; cmap=c, norm=PythonPlot.Normalize01).to_rgba(a))
@test rgba ≈ [ 0.403921568627451 0.0 0.12156862745098039 1.0
0.8991926182237601 0.5144175317185697 0.4079200307574009 1.0
0.9657054978854287 0.9672433679354094 0.9680891964628989 1.0
0.4085351787773935 0.6687427912341408 0.8145328719723184 1.0
0.0196078431372549 0.18823529411764706 0.3803921568627451 1.0 ]
@testset "close figure: issue 22" begin
f = figure()
@test close(f) === nothing
end
| PythonPlot | https://github.com/JuliaPy/PythonPlot.jl.git |
|
[
"MIT"
] | 1.0.5 | 6ec334b52f9547f84fde749e512a97d0372ae395 | docs | 14042 | # The PythonPlot module for Julia
[](https://github.com/JuliaPy/PythonPlot.jl/actions?query=workflow%3ACI)
This module provides a Julia interface to the
[Matplotlib](http://matplotlib.org/) plotting library from Python, and
specifically to the `matplotlib.pyplot` module.
PythonPlot uses the Julia [PythonCall.jl](https://github.com/cjdoris/PythonCall.jl)
package to call Matplotlib directly from Julia with little or no
overhead (arrays are passed without making a copy). It is based on a fork of the [PyPlot.jl](https://github.com/JuliaPy/PyPlot.jl) package, which uses the older [PyCall.jl](https://github.com/JuliaPy/PyCall.jl) interface to Python, and is intended to function as a mostly drop-in replacement for PyPlot.jl.
This package takes advantage of Julia's [multimedia
I/O](https://docs.julialang.org/en/latest/base/io-network/#Multimedia-I/O-1)
API to display plots in any Julia graphical backend, including as
inline graphics in [IJulia](https://github.com/JuliaLang/IJulia.jl).
Alternatively, you can use a Python-based graphical Matplotlib
backend to support interactive plot zooming etcetera.
## Installation
The PythonPlot package uses the [CondaPkg.jl](https://github.com/cjdoris/CondaPkg.jl) package to automatically install Matplotlib as needed.
(If you configure PythonCall to use some custom Python installation, you will need to install Matplotlib yourself.)
You can either
do inline plotting with [IJulia](https://github.com/JuliaLang/IJulia.jl),
which doesn't require a GUI backend, or use the Qt, wx, or GTK+ backends
of Matplotlib as described below.
## Basic usage
Once Matplotlib and PythonPlot are installed, and you are using a
graphics-capable Julia environment such as IJulia, you can simply type
`using PythonPlot` and begin calling functions in the
[matplotlib.pyplot](http://matplotlib.org/api/pyplot_api.html) API.
For example:
```
using PythonPlot
x = range(0; stop=2*pi, length=1000); y = sin.(3 * x + 4 * cos.(2 * x));
plot(x, y, color="red", linewidth=2.0, linestyle="--")
title("A sinusoidally modulated sinusoid")
```
In general, all of the arguments, including keyword arguments, are
exactly the same as in Python. (With minor translations, of course,
e.g. Julia uses `true` and `nothing` instead of Python's `True` and
`None`.)
The full `matplotlib.pyplot` API is far too extensive to describe here;
see the [matplotlib.pyplot documentation for more
information](http://matplotlib.org/api/pyplot_api.html). The Matplotlib
version number is returned by `PythonPlot.version`.
### Differences from PyPlot.jl
Compared to the PyPlot.jl package, there are a few differences in the API.
* To avoid type piracy, the functions `show`, `close`, `step`, and `fill` are renamed to `plotshow`, `plotclose`, `plotstep`, and `plotfill`, respectively. (You can also access them as `PythonPlot.show` etcetera.)
* The `matplotlib.pyplot` module is exported as `pyplot` rather than as `plt`.
* The PythonCall package performs many fewer automatic conversions from Python types to Julia types (in comparison to PyCall). If you need to convert Matplotlib return values to native Julia objects, you'll need to do `using PythonCall` and call its `pyconvert(T, o)` or other conversion functions.
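For example, to pull a line's data back into Julia (a sketch; `PyArray`
gives a no-copy wrapper around a NumPy array, while `pyconvert` copies
into a native Julia array):
```
using PythonCall
lines = plot(rand(5))  # a Python list of Line2D objects
ydata = PyArray(lines[0].get_ydata())                     # no-copy view
yvec = pyconvert(Vector{Float64}, lines[0].get_ydata())   # native Julia copy
```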
### Exported functions
Only the currently documented `matplotlib.pyplot` API is exported. To use
other functions in the module, you can also call `matplotlib.pyplot.foo(...)`
as `pyplot.foo(...)`. For example, `pyplot.plot(x, y)` also works. (And
the raw `Py` object for the `matplotlib` module itself is also accessible
as `PythonPlot.matplotlib`.)
Matplotlib is somewhat inconsistent about capitalization: it has
`contour3D` but `bar3d`, etcetera. PythonPlot renames all such functions
to use a capital *D* (e.g. it has `hist2D`, `bar3D`, and so on).
You must also explicitly qualify some functions that conflict with
built-in Julia functions. In particular, `PythonPlot.xcorr`,
`PythonPlot.axes`, and `PythonPlot.isinteractive`
must be used to access `matplotlib.pyplot.xcorr`,
`matplotlib.pyplot.axes`, and `matplotlib.pyplot.isinteractive`.
If you wish to access *all* of the PythonPlot functions exclusively
through `pyplot.somefunction(...)`, as is conventional in Python, you can
do `import PythonPlot as pyplot` instead of `using PythonPlot`.
### Figure objects
You can get the current figure as a `Figure` object (a wrapper
around `matplotlib.pyplot.Figure`) by calling `gcf()`.
The `Figure` type supports Julia's [multimedia I/O
API](https://docs.julialang.org/en/latest/base/io-network/#Multimedia-I/O-1),
so you can use `display(fig)` to show a `fig::Figure` and
`show(io, mime, fig)` to write it to a given `mime` type string
(e.g. `"image/png"` or `"application/pdf"`) that is supported by the
Matplotlib backend.
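For example, to write the current figure to a file through this API (a
minimal sketch; Matplotlib's `savefig` is usually more convenient):
```
fig = gcf()
open("myfigure.png", "w") do io
    show(io, MIME("image/png"), fig)
end
```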
## Non-interactive plotting
If you use PythonPlot from an interactive Julia prompt, such as the Julia
[command-line prompt](http://docs.julialang.org/en/latest/manual/interacting-with-julia/)
or an IJulia notebook, then plots appear immediately after a plotting
function (`plot` etc.) is evaluated.
However, if you use PythonPlot from a Julia script that is run non-interactively
(e.g. `julia myscript.jl`), then Matplotlib is executed in
[non-interactive mode](http://matplotlib.org/faq/usage_faq.html#what-is-interactive-mode):
a plot window is not opened until you run `plotshow()` (equivalent to `pyplot.show()`
in the Python examples).
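A minimal non-interactive script might therefore look like this (a
sketch; the call typically blocks until the window is closed):
```
# myscript.jl -- run with `julia myscript.jl`
using PythonPlot
plot(1:10, (1:10).^2)
title("Non-interactive example")
plotshow()  # no plot window appears before this call
```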
## Interactive versus Julia graphics
PythonPlot can use any Julia graphics backend capable of displaying PNG,
SVG, or PDF images, such as the IJulia environment. To use a
different backend, simply call `pushdisplay` with the desired
`Display`; see the [Julia multimedia display
API](https://docs.julialang.org/en/latest/base/io-network/#Multimedia-I/O-1)
for more detail.
On the other hand, you may wish to use one of the Python Matplotlib
backends to open an interactive window for each plot (for interactive
zooming, panning, etcetera). You can do this at any time by running:
```
pygui(true)
```
to turn on the Python-based GUI (if possible) for subsequent plots,
while `pygui(false)` will return to the Julia backend. Even when a
Python GUI is running, you can display the current figure with the
Julia backend by running `display(gcf())`.
If no Julia graphics backend is available when PythonPlot is imported, then
`pygui(true)` is the default.
### Choosing a Python GUI toolkit
Only the [Tk](http://www.tcl.tk/), [wxWidgets](http://www.wxwidgets.org/),
[GTK+](http://www.gtk.org/) (version 2 or 3), and [Qt](http://qt-project.org/) (version 4 or 5; via PyQt5,
[PyQt4](http://wiki.python.org/moin/PyQt4),
[PySide](http://qt-project.org/wiki/PySide), or PySide2) Python GUI backends are
supported by PythonPlot. (Obviously, you must have installed one of these
toolkits for Python first.) By default, PythonPlot picks one of these
when it starts up (based on what you have installed), but you can
force a specific toolkit to be chosen by setting the `MPLBACKEND`
environment variable to the corresponding Matplotlib backend *before*
importing PythonPlot (PythonPlot uses PythonCall rather than PyCall,
so selecting a backend via PyCall's `pygui` has no effect on it):
```
ENV["MPLBACKEND"] = "qt5agg"  # or e.g. "tkagg", "gtk3agg", "wxagg"
using PythonPlot
```
Once PythonPlot is loaded, its own `pygui(gui)` function sets the preferred toolkit for the Python event loop, where `gui` can currently be one of `:tk`, `:gtk3`, `:gtk`, `:qt5`, `:qt4`, `:qt`, or `:wx`. You can
also set a default via the Matplotlib `rcParams['backend']` parameter in your
[matplotlibrc](http://matplotlib.org/users/customizing.html) file.
## Color maps
The PythonPlot module also exports some functions and types based on the
[matplotlib.colors](http://matplotlib.org/api/colors_api.html) and
[matplotlib.cm](http://matplotlib.org/api/cm_api.html) modules to
simplify management of color maps (which are used to assign values to
colors in various plot types). In particular:
* `ColorMap`: a wrapper around the [matplotlib.colors.Colormap](http://matplotlib.org/api/colors_api.html#matplotlib.colors.Colormap) type. The following constructors are provided:
* `ColorMap{T<:Colorant}(name::String, c::AbstractVector{T}, n=256, gamma=1.0)` constructs an `n`-component colormap by [linearly interpolating](http://matplotlib.org/api/colors_api.html#matplotlib.colors.LinearSegmentedColormap) the colors in the array `c` of `Colorant`s (from the [ColorTypes.jl](https://github.com/JuliaGraphics/ColorTypes.jl) package). If you want a `name` to be constructed automatically, call `ColorMap(c, n=256, gamma=1.0)` instead. Alternatively, instead of passing an array of colors, you can pass a 3- or 4-column matrix of RGB or RGBA components, respectively (similar to [ListedColorMap](http://matplotlib.org/api/colors_api.html#matplotlib.colors.ListedColormap) in Matplotlib).
* Even more general color maps may be defined by passing arrays of (x,y0,y1) tuples for the red, green, blue, and (optionally) alpha components, as defined by the [matplotlib.colors.LinearSegmentedColormap](http://matplotlib.org/api/colors_api.html#matplotlib.colors.LinearSegmentedColormap) constructor, via: `ColorMap{T<:Real}(name::String, r::AbstractVector{(T,T,T)}, g::AbstractVector{(T,T,T)}, b::AbstractVector{(T,T,T)}, n=256, gamma=1.0)` or `ColorMap{T<:Real}(name::String, r::AbstractVector{(T,T,T)}, g::AbstractVector{(T,T,T)}, b::AbstractVector{(T,T,T)}, alpha::AbstractVector{(T,T,T)}, n=256, gamma=1.0)`
* `ColorMap(name::String)` returns an existing (registered) colormap, equivalent to [matplotlib.pyplot.get_cmap](http://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.get_cmap.html#matplotlib-pyplot-get-cmap)(`name`).
* `matplotlib.colors.Colormap` objects returned by Python functions are automatically converted to the `ColorMap` type.
* `get_cmap(name::String)` or `get_cmap(name::String, lut::Integer)` call the [matplotlib.pyplot.get_cmap](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.get_cmap.html#matplotlib-pyplot-get-cmap) function.
* `register_cmap(c::ColorMap)` or `register_cmap(name::String, c::ColorMap)` call the [matplotlib.colormaps.register](https://matplotlib.org/stable/api/cm_api.html#matplotlib.cm.ColormapRegistry.register) function.
* `get_cmaps()` returns a `Vector{ColorMap}` of the currently
registered colormaps.
Note that, given an SVG-supporting display environment like IJulia,
`ColorMap` and `Vector{ColorMap}` objects are displayed graphically;
try `get_cmaps()`!
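For example, a custom two-color map can be constructed from `Colorant`s
and passed to a plotting command (a sketch; the name and data are
arbitrary):
```
using PythonPlot, Colors
cmap = ColorMap("redblue", [RGB(1.0, 0.0, 0.0), RGB(0.0, 0.0, 1.0)])
pcolormesh(rand(20, 20), cmap=cmap)
colorbar()
```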
## 3d Plotting
The PythonPlot package also imports functions from Matplotlib's
[mplot3d](http://matplotlib.org/mpl_toolkits/mplot3d/) toolkit.
Unlike Matplotlib, however, you can create 3d plots directly without
first creating an
[Axes3d](http://matplotlib.org/mpl_toolkits/mplot3d/api.html#axes3d)
object, simply by calling one of: `bar3D`, `contour3D`, `contourf3D`,
`plot3D`, `plot_surface`, `plot_trisurf`, `plot_wireframe`, or
`scatter3D` (as well as `text2D`, `text3D`), exactly like the
correspondingly named methods of
[Axes3d](http://matplotlib.org/mpl_toolkits/mplot3d/api.html#axes3d).
We also export the Matlab-like synonyms `surf` for `plot_surface` (or
`plot_trisurf` for 1d-array arguments) and `mesh` for
`plot_wireframe`. For example, you can do:
```
surf(rand(30,40))
```
to plot a random 30×40 surface mesh.
You can also explicitly create a subplot with 3d axes via, for
example, `subplot(111, projection="3d")`, exactly as in Matplotlib,
but you must first call the `using3D()` function to ensure that
mplot3d is loaded (this happens automatically for `plot3D` etc.).
The `Axes3D` constructor and the
[art3D](http://matplotlib.org/mpl_toolkits/mplot3d/api.html#art3d)
module are also exported.
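For example (a sketch combining `using3D` with an explicit 3d subplot):
```
using PythonPlot
using3D()  # load mplot3d before requesting a 3d projection
subplot(111, projection="3d")
t = range(0, 4π; length=200)
plot3D(cos.(t), sin.(t), collect(t))  # a simple helix
zlabel("z")
```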
## LaTeX plot labels
Matplotlib allows you to [use LaTeX equations in plot
labels](http://matplotlib.org/users/mathtext.html), titles, and so on
simply by enclosing the equations in dollar signs (`$ ... $`) within
the string. However, typing LaTeX equations in Julia string literals
is awkward because escaping is necessary to prevent Julia from
interpreting the dollar signs and backslashes itself; for example, the
LaTeX equation `$\alpha + \beta$` would be the literal string
`"\$\\alpha + \\beta\$"` in Julia.
To simplify this, PythonPlot uses the [LaTeXStrings package](https://github.com/stevengj/LaTeXStrings.jl) to provide a new `LaTeXString` type that
can be constructed via `L"...."` without escaping backslashes or dollar
signs. For example, one can simply write `L"$\alpha + \beta$"` for the
abovementioned equation, and thus you can do things like:
```
title(L"Plot of $\Gamma_3(x)$")
```
If your string contains *only* equations, you can omit the dollar
signs, e.g. `L"\alpha + \beta"`, and they will be added automatically.
As an added benefit, a `LaTeXString` is automatically displayed as a
rendered equation in IJulia. See the LaTeXStrings package for more
information.
## SVG output in IJulia
By default, plots in IJulia are sent to the notebook as PNG images.
Optionally, you can tell PythonPlot to display plots in the browser as
[SVG](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics) images,
which have the advantage of being resolution-independent (so that they
display without pixellation at high-resolutions, for example if you
convert an IJulia notebook to PDF), by running:
```
PythonPlot.svg(true)
```
This is not the default because SVG plots in the browser are much
slower to display (especially for complex plots) and may display
inaccurately in some browsers with buggy SVG support. The `PythonPlot.svg()`
method returns whether SVG display is currently enabled.
Note that this is entirely separate from manually exporting plots to SVG
or any other format. Regardless of whether PythonPlot uses SVG for
browser display, you can export a plot to SVG at any time by using the
Matplotlib
[savefig](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig)
command, e.g. `savefig("plot.svg")`.
## Modifying matplotlib.rcParams
You can mutate the `rcParams` dictionary that Matplotlib uses for global parameters following this example:
```jl
PythonPlot.matplotlib.rcParams["font.size"] = 15
```
## Author
This module was written by [Steven G. Johnson](http://math.mit.edu/~stevenj/).
| PythonPlot | https://github.com/JuliaPy/PythonPlot.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 783 | module CalibrationTests
using Reexport
@reexport using CalibrationErrors
using ConsistencyResampling: ConsistencyResampling
using HypothesisTests: HypothesisTests
@reexport using KernelFunctions
using LinearAlgebra: LinearAlgebra
using Random: Random
using Statistics: Statistics
using StatsFuns: StatsFuns
using StructArrays: StructArrays
using CalibrationErrors: CalibrationErrorEstimator, SKCE, unsafe_skce_eval
export ConsistencyTest
export DistributionFreeSKCETest, AsymptoticBlockSKCETest, AsymptoticSKCETest
export AsymptoticCMETest
# re-export
using HypothesisTests: pvalue, confint
export pvalue, confint
include("consistency.jl")
include("skce/asymptotic.jl")
include("skce/asymptotic_block.jl")
include("skce/distribution_free.jl")
include("cme.jl")
end # module
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 2537 | struct AsymptoticCMETest{K<:Kernel,V,M,S} <: HypothesisTests.HypothesisTest
"""Kernel."""
kernel::K
"""Number of observations."""
nsamples::Int
"""Number of test locations."""
ntestsamples::Int
"""UCME estimate."""
estimate::V
"""Mean deviation for each test location."""
mean_deviations::M
"""Test statistic."""
statistic::S
end
function AsymptoticCMETest(
estimator::UCME, predictions::AbstractVector, targets::AbstractVector
)
# determine number of observations and test locations
nsamples = length(predictions)
testpredictions = estimator.testpredictions
ntestsamples = length(testpredictions)
# create matrix of deviations (observations × test locations)
kernel = estimator.kernel
deviations =
CalibrationErrors.unsafe_ucme_eval.(
(kernel,),
predictions,
targets,
permutedims(testpredictions),
permutedims(estimator.testtargets),
)
# compute UCME estimate
mean_deviations = Statistics.mean(deviations; dims=1)
estimate = sum(abs2, mean_deviations) / ntestsamples
# compute test statistic
C = LinearAlgebra.Symmetric(Statistics.covm(deviations, mean_deviations))
x = vec(mean_deviations)
statistic = nsamples * LinearAlgebra.dot(x, C \ x)
return AsymptoticCMETest(kernel, nsamples, ntestsamples, estimate, x, statistic)
end
# HypothesisTests interface
HypothesisTests.default_tail(::AsymptoticCMETest) = :right
## have to specify and check keyword arguments in `pvalue` and `confint` to
## force `tail = :right` due to the default implementation in HypothesisTests
function HypothesisTests.pvalue(test::AsymptoticCMETest; tail=:right)
if tail === :right
StatsFuns.chisqccdf(test.ntestsamples, test.statistic)
else
throw(ArgumentError("tail=$(tail) is invalid"))
end
end
HypothesisTests.testname(::AsymptoticCMETest) = "Asymptotic CME test"
# parameter of interest: name, value under H0, point estimate
function HypothesisTests.population_param_of_interest(test::AsymptoticCMETest)
return "Mean vector", zero(test.mean_deviations), test.mean_deviations
end
function HypothesisTests.show_params(io::IO, test::AsymptoticCMETest, ident="")
println(io, ident, "number of observations: ", test.nsamples)
println(io, ident, "number of test locations: ", test.ntestsamples)
println(io, ident, "UCME estimate: ", test.estimate)
return println(io, ident, "test statistic: ", test.statistic)
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 2091 | struct ConsistencyTest{E<:CalibrationErrorEstimator,P,T,V} <: HypothesisTests.HypothesisTest
"""Calibration estimator."""
estimator::E
"""Predictions."""
predictions::P
"""Targets."""
targets::T
"""Calibration estimate."""
estimate::V
end
function ConsistencyTest(
estimator::CalibrationErrorEstimator,
predictions::AbstractVector,
targets::AbstractVector,
)
estimate = estimator(predictions, targets)
return ConsistencyTest(estimator, predictions, targets, estimate)
end
# HypothesisTests interface
HypothesisTests.default_tail(::ConsistencyTest) = :right
HypothesisTests.testname(::ConsistencyTest) = "Consistency resampling test"
function HypothesisTests.pvalue(test::ConsistencyTest; kwargs...)
return pvalue(Random.GLOBAL_RNG, test; kwargs...)
end
function HypothesisTests.pvalue(
rng::Random.AbstractRNG, test::ConsistencyTest; bootstrap_iters::Int=1_000
)
bootstrap_iters > 0 || error("number of bootstrap samples must be positive")
predictions = test.predictions
sampledpredictions = similar(predictions)
sampledtargets = similar(test.targets)
samples = StructArrays.StructArray((sampledpredictions, sampledtargets))
estimate = test.estimate
estimator = test.estimator
sampler = Random.Sampler(rng, ConsistencyResampling.Consistent(predictions))
n = 0
for _ in 1:bootstrap_iters
# perform consistency resampling
Random.rand!(rng, samples, sampler)
# evaluate the calibration error
sampledestimate = estimator(sampledpredictions, sampledtargets)
# check if the estimate for the resampled data is ≥ the original estimate
if sampledestimate ≥ estimate
n += 1
end
end
return n / bootstrap_iters
end
# parameter of interest: name, value under H0, point estimate
function HypothesisTests.population_param_of_interest(test::ConsistencyTest)
return nameof(typeof(test.estimator)), zero(test.estimate), test.estimate
end
HypothesisTests.show_params(io::IO, test::ConsistencyTest, ident="") = nothing
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 10383 | @doc raw"""
AsymptoticSKCETest(kernel::Kernel, predictions, targets)
Calibration hypothesis test based on the unbiased estimator of the squared kernel
calibration error (SKCE) with quadratic sample complexity.
# Details
Let ``\mathcal{D} = (P_{X_i}, Y_i)_{i=1,\ldots,n}`` be a data set of predictions and
corresponding targets. Denote the null hypothesis "the predictive probabilistic model is
calibrated" with ``H_0``.
The hypothesis test approximates the p-value ``ℙ(\mathrm{SKCE}_{uq} > c \,|\, H_0)``, where
``\mathrm{SKCE}_{uq}`` is the unbiased estimator of the SKCE, defined as
```math
\frac{2}{n(n-1)} \sum_{1 \leq i < j \leq n} h_k\big((P_{X_i}, Y_i), (P_{X_j}, Y_j)\big),
```
where
```math
\begin{aligned}
h_k\big((μ, y), (μ', y')\big) ={}& k\big((μ, y), (μ', y')\big)
- 𝔼_{Z ∼ μ} k\big((μ, Z), (μ', y')\big) \\
& - 𝔼_{Z' ∼ μ'} k\big((μ, y), (μ', Z')\big)
+ 𝔼_{Z ∼ μ, Z' ∼ μ'} k\big((μ, Z), (μ', Z')\big).
\end{aligned}
```
The p-value is estimated based on the asymptotically valid approximation
```math
ℙ(n\mathrm{SKCE}_{uq} > c \,|\, H_0) \approx ℙ(T > c \,|\, \mathcal{D}),
```
where ``T`` is the bootstrap statistic
```math
T = \frac{2}{n} \sum_{1 \leq i < j \leq n} \bigg(h_k\big((P^*_{X_i}, Y^*_i), (P^*_{X_j}, Y^*_j)\big)
- \frac{1}{n} \sum_{r = 1}^n h_k\big((P^*_{X_i}, Y^*_i), (P_{X_r}, Y_r)\big)
- \frac{1}{n} \sum_{r = 1}^n h_k\big((P_{X_r}, Y_r), (P^*_{X_j}, Y^*_j)\big)
+ \frac{1}{n^2} \sum_{r, s = 1}^n h_k\big((P_{X_r}, Y_r), (P_{X_s}, Y_s)\big)\bigg)
```
for bootstrap samples ``(P^*_{X_i}, Y^*_i)_{i=1,\ldots,n}`` of ``\mathcal{D}``.
This can be reformulated to the approximation
```math
ℙ(n\mathrm{SKCE}_{uq}/(n - 1) - \mathrm{SKCE}_b > c \,|\, H_0) \approx ℙ(T' > c \,|\, \mathcal{D}),
```
where
```math
\mathrm{SKCE}_b = \frac{1}{n^2} \sum_{i, j = 1}^n h_k\big((P_{X_i}, Y_i), (P_{X_j}, Y_j)\big)
```
and
```math
T' = \frac{2}{n(n - 1)} \sum_{1 \leq i < j \leq n} h_k\big((P^*_{X_i}, Y^*_i), (P^*_{X_j}, Y^*_j)\big)
- \frac{2}{n^2} \sum_{i, r=1}^n h_k\big((P^*_{X_i}, Y^*_i), (P_{X_r}, Y_r)\big).
```
# References
Widmann, D., Lindsten, F., & Zachariah, D. (2019). [Calibration tests in multi-class
classification: A unifying framework](https://proceedings.neurips.cc/paper/2019/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html).
In: Advances in Neural Information Processing Systems (NeurIPS 2019) (pp. 12257–12267).
Widmann, D., Lindsten, F., & Zachariah, D. (2021). [Calibration tests beyond
classification](https://openreview.net/forum?id=-bxf89v3Nx).
"""
struct AsymptoticSKCETest{K<:Kernel,E,V,M} <: HypothesisTests.HypothesisTest
"""Kernel."""
kernel::K
"""Calibration error estimate."""
estimate::E
"""Test statistic."""
statistic::V
"""Symmetric kernel matrix, consisting of pairwise evaluations of ``h_{ij}``."""
kernelmatrix::M
end
function AsymptoticSKCETest(
kernel::Kernel, predictions::AbstractVector, targets::AbstractVector
)
# compute the calibration error estimate, the test statistic, and the kernel matrix
estimate, statistic, kernelmatrix = estimate_statistic_kernelmatrix(
kernel, predictions, targets
)
return AsymptoticSKCETest(kernel, estimate, statistic, kernelmatrix)
end
# HypothesisTests interface
HypothesisTests.default_tail(::AsymptoticSKCETest) = :right
function HypothesisTests.pvalue(test::AsymptoticSKCETest; kwargs...)
return pvalue(Random.GLOBAL_RNG, test; kwargs...)
end
function HypothesisTests.pvalue(
rng::Random.AbstractRNG, test::AsymptoticSKCETest; bootstrap_iters::Int=1_000
)
return bootstrap_ccdf(rng, test.statistic, test.kernelmatrix, bootstrap_iters)
end
HypothesisTests.testname(::AsymptoticSKCETest) = "Asymptotic SKCE test"
# parameter of interest: name, value under H0, point estimate
function HypothesisTests.population_param_of_interest(test::AsymptoticSKCETest)
return "SKCE", zero(test.estimate), test.estimate
end
function HypothesisTests.show_params(io::IO, test::AsymptoticSKCETest, ident="")
return println(io, ident, "test statistic: $(test.statistic)")
end
@doc raw"""
estimate_statistic_kernelmatrix(kernel, predictions, targets)
Compute the estimate of the SKCE, the test statistic, and the matrix of the evaluations of
the kernel function.
# Details
Let ``\mathcal{D} = (P_{X_i}, Y_i)_{i=1,\ldots,n}`` be a data set of predictions and
corresponding targets.
The unbiased estimator ``\mathrm{SKCE}_{uq}`` of the SKCE is defined as
```math
\frac{2}{n(n-1)} \sum_{1 \leq i < j \leq n} h_k\big((P_{X_i}, Y_i), (P_{X_j}, Y_j)\big),
```
where
```math
\begin{aligned}
h_k\big((μ, y), (μ', y')\big) ={}& k\big((μ, y), (μ', y')\big)
- 𝔼_{Z ∼ μ} k\big((μ, Z), (μ', y')\big) \\
& - 𝔼_{Z' ∼ μ'} k\big((μ, y), (μ', Z')\big)
+ 𝔼_{Z ∼ μ, Z' ∼ μ'} k\big((μ, Z), (μ', Z')\big).
\end{aligned}
```
The test statistic is defined as
```math
\frac{n}{n-1} \mathrm{SKCE}_{uq} - \mathrm{SKCE}_b,
```
where
```math
\mathrm{SKCE}_b = \frac{1}{n^2} \sum_{i, j = 1}^n h_k\big((P_{X_i}, Y_i), (P_{X_j}, Y_j)\big)
```
(see [`AsymptoticSKCETest`](@ref)). This is equivalent to
```math
\frac{1}{n^2} \sum_{i, j = 1}^n h_k\big((P_{X_i}, Y_i), (P_{X_j}, Y_j)\big) \bigg(\frac{n^2}{(n - 1)^2} 1(i \neq j) - 1\bigg).
```
The kernelmatrix ``K \in \mathbb{R}^{n \times n}`` is defined as
```math
K_{ij} = h_k\big((P_{X_i}, Y_i), (P_{X_j}, Y_j)\big)
```
for ``i, j \in \{1, \ldots, n\}``.
"""
function estimate_statistic_kernelmatrix(kernel, predictions, targets)
# obtain number of samples
nsamples = length(predictions)
nsamples > 1 || error("there must be at least two samples")
# pre-computations
α = (2 * nsamples - 1) / (nsamples - 1)^2
@inbounds begin
# evaluate the kernel function for the first pair of samples
prediction = predictions[1]
target = targets[1]
# initialize the kernel matrix
hij = unsafe_skce_eval(kernel, prediction, target, prediction, target)
kernelmatrix = Matrix{typeof(hij)}(undef, nsamples, nsamples)
kernelmatrix[1, 1] = hij
# initialize the test statistic and the unbiased estimate of the SKCE
statistic = -hij / 1
estimate = zero(statistic)
# add evaluations of all other pairs of samples
nstatistic = 1
nestimate = 0
for i in 2:nsamples
predictioni = predictions[i]
targeti = targets[i]
for j in 1:(i - 1)
predictionj = predictions[j]
targetj = targets[j]
# evaluate the kernel function
hij = unsafe_skce_eval(kernel, predictioni, targeti, predictionj, targetj)
# update the kernel matrix
kernelmatrix[j, i] = hij
# update the estimate and the test statistic
nstatistic += 2
statistic += 2 * (α * hij - statistic) / nstatistic
nestimate += 1
estimate += (hij - estimate) / nestimate
end
# evaluate the kernel function for the `i`th sample
hij = unsafe_skce_eval(kernel, predictioni, targeti, predictioni, targeti)
# update the kernel matrix
kernelmatrix[i, i] = hij
# update the test statistic
nstatistic += 1
statistic -= (statistic + hij) / nstatistic
end
end
# add lower triangle of the kernel matrix
LinearAlgebra.copytri!(kernelmatrix, 'U')
return estimate, statistic, kernelmatrix
end
@doc raw"""
bootstrap_ccdf(rng::AbstractRNG, statistic, kernelmatrix, bootstrap_iters::Int)
Estimate the value of the inverse CDF of the test statistic under the calibration null
hypothesis by bootstrapping.
# Details
Let ``\mathcal{D} = (P_{X_i}, Y_i)_{i=1,\ldots,n}`` be a data set of predictions and
corresponding targets. Denote the null hypothesis "the predictive probabilistic model is
calibrated" with ``H_0``, and the test statistic with ``T``.
The value of the inverse CDF under the null hypothesis is estimated based on the
asymptotically valid approximation
```math
ℙ(T > c \,|\, H_0) \approx ℙ(T' > c \,|\, \mathcal{D}),
```
where the bootstrap statistic ``T'`` is defined as
```math
T' = \frac{2}{n(n - 1)} \sum_{1 \leq i < j \leq n} h_k\big((P^*_{X_i}, Y^*_i), (P^*_{X_j}, Y^*_j)\big)
- \frac{2}{n^2} \sum_{i, r=1}^n h_k\big((P^*_{X_i}, Y^*_i), (P_{X_r}, Y_r)\big)
```
for bootstrap samples ``(P^*_{X_i}, Y^*_i)_{i=1,\ldots,n}`` of ``\mathcal{D}``
(see [`AsymptoticSKCETest`](@ref)).
Let ``C_i`` be the number of times that data pair ``(P_{X_i}, Y_i)`` was resampled.
Then we obtain
```math
T' = \frac{1}{n^2} \sum_{i=1}^n C_i \sum_{j=1}^n \bigg(\frac{n}{n-1} (C_j - \delta_{i,j}) - 2\bigg) h_k\big((P_{X_i}, Y_i), (P_{X_j}, Y_j)\big).
```
"""
function bootstrap_ccdf(
rng::Random.AbstractRNG, statistic, kernelmatrix, bootstrap_iters::Int
)
# initialize array of counts of resampled indices
nsamples = LinearAlgebra.checksquare(kernelmatrix)
resampling_counts = Vector{Int}(undef, nsamples)
# for each bootstrap sample
α = nsamples / (nsamples - 1)
extreme_count = 0
sampler = Random.Sampler(rng, 1:nsamples)
for _ in 1:bootstrap_iters
# resample data set
fill!(resampling_counts, 0)
for _ in 1:nsamples
idx = rand(rng, sampler)
@inbounds resampling_counts[idx] += 1
end
# evaluate the bootstrap statistic
z = zero(statistic)
n = 0
for i in 1:nsamples
# check if the `i`th data pair was sampled
ci = resampling_counts[i]
iszero(ci) && continue
zi = Statistics.mean(enumerate(resampling_counts)) do (j, cj)
# obtain evaluation of the kernel function
@inbounds hij = kernelmatrix[j, i]
return ((cj - (i == j)) * α - 2) * hij
end
# update bootstrap statistic
n += ci
z += ci * (zi - z) / n
end
# check if the bootstrap statistic is ≥ the original statistic
if z ≥ statistic
extreme_count += 1
end
end
return extreme_count / bootstrap_iters
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 3288 | struct AsymptoticBlockSKCETest{K<:Kernel,E,S,Z} <: HypothesisTests.ZTest
"""Kernel."""
kernel::K
"""Number of observations per block."""
blocksize::Int
"""Number of blocks of observations."""
nblocks::Int
"""Calibration error estimate (average of evaluations of blocks of observations)."""
estimate::E
"""Standard error of evaluations of blocks of observations."""
stderr::S
"""z-statistic."""
z::Z
end
function AsymptoticBlockSKCETest(
kernel::Kernel, blocksize::Int, predictions::AbstractVector, targets::AbstractVector
)
# obtain number of samples
nsamples = length(predictions)
nsamples ≥ blocksize || error("there must be at least ", blocksize, " samples")
# compute number of blocks
nblocks = nsamples ÷ blocksize
nblocks ≥ 2 || error("there must be at least 2 blocks")
# evaluate U-statistic of the first block
istart = 1
iend = blocksize
x = SKCE(kernel)(view(predictions, istart:iend), view(targets, istart:iend))
# initialize the estimate and the sum of squares
estimate = x / 1
S = zero(estimate)^2
# for all other blocks
for b in 2:nblocks
# evaluate U-statistic
istart += blocksize
iend += blocksize
x = SKCE(kernel)(view(predictions, istart:iend), view(targets, istart:iend))
# update the estimate
Δestimate = x - estimate
estimate += Δestimate / b
S += Δestimate * (x - estimate)
end
# compute standard error and z-statistic
stderr = sqrt(S / (nblocks * (nblocks - 1)))
z = estimate / stderr
return AsymptoticBlockSKCETest(kernel, blocksize, nblocks, estimate, stderr, z)
end
# HypothesisTests interface
HypothesisTests.default_tail(::AsymptoticBlockSKCETest) = :right
## have to specify and check keyword arguments in `pvalue` and `confint` to
## force `tail = :right` due to the default implementation in HypothesisTests
function HypothesisTests.pvalue(test::AsymptoticBlockSKCETest; tail=:right)
if tail === :right
StatsFuns.normccdf(test.z)
else
throw(ArgumentError("tail=$(tail) is invalid"))
end
end
# confidence interval by inversion
function HypothesisTests.confint(test::AsymptoticBlockSKCETest; level=0.95, tail=:right)
HypothesisTests.check_level(level)
if tail === :right
q = StatsFuns.norminvcdf(level)
lowerbound = test.estimate - q * test.stderr
(max(zero(lowerbound), lowerbound), oftype(lowerbound, Inf))
else
throw(ArgumentError("tail = $(tail) is invalid"))
end
end
HypothesisTests.testname(test::AsymptoticBlockSKCETest) = "Asymptotic block SKCE test"
# parameter of interest: name, value under H0, point estimate
function HypothesisTests.population_param_of_interest(test::AsymptoticBlockSKCETest)
return "SKCE", zero(test.estimate), test.estimate
end
function HypothesisTests.show_params(io::IO, test::AsymptoticBlockSKCETest, ident="")
println(io, ident, "number of observations per block: ", test.blocksize)
println(io, ident, "number of blocks of observations: ", test.nblocks)
println(io, ident, "z-statistic: ", test.z)
return println(
io, ident, "standard error of evaluations of pairs of observations: ", test.stderr
)
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 2565 | struct DistributionFreeSKCETest{E<:SKCE,B,V} <: HypothesisTests.HypothesisTest
"""Calibration estimator."""
estimator::E
"""Uniform upper bound of the terms of the estimator."""
bound::B
"""Number of observations."""
n::Int
"""Calibration error estimate."""
estimate::V
end
function DistributionFreeSKCETest(
estimator::SKCE,
predictions::AbstractVector,
targets::AbstractVector;
bound=uniformbound(estimator),
)
estimate = estimator(predictions, targets)
return DistributionFreeSKCETest(estimator, bound, length(predictions), estimate)
end
# HypothesisTests interface
HypothesisTests.default_tail(::DistributionFreeSKCETest) = :right
function HypothesisTests.pvalue(test::DistributionFreeSKCETest)
estimator = test.estimator
if estimator.unbiased &&
(estimator.blocksize === identity || estimator.blocksize isa Integer)
return exp(-div(test.n, 2) * (test.estimate / test.bound)^2 / 2)
elseif !estimator.unbiased && estimator.blocksize === identity
s = sqrt(test.n * test.estimate / test.bound) - 1
return exp(-max(s, zero(s))^2 / 2)
else
error("estimator is not supported")
end
end
HypothesisTests.testname(::DistributionFreeSKCETest) = "Distribution-free SKCE test"
# parameter of interest: name, value under H0, point estimate
function HypothesisTests.population_param_of_interest(test::DistributionFreeSKCETest)
return nameof(typeof(test.estimator)), zero(test.estimate), test.estimate
end
function HypothesisTests.show_params(io::IO, test::DistributionFreeSKCETest, ident="")
println(io, ident, "number of observations: $(test.n)")
return println(io, ident, "uniform bound of the terms of the estimator: $(test.bound)")
end
# uniform bound `B_{p;q}` of the absolute value of the terms in the estimators
uniformbound(kce::SKCE) = 2 * uniformbound(kce.kernel)
# uniform bounds of the norm of base kernels
uniformbound(kernel::ExponentialKernel) = 1
uniformbound(kernel::SqExponentialKernel) = 1
uniformbound(kernel::WhiteKernel) = 1
# uniform bound of the norm of a scaled kernel
uniformbound(kernel::ScaledKernel) = first(kernel.σ²) * uniformbound(kernel.kernel)
# uniform bound of the norm of a kernel with input transformations
# assume transform is bijective (i.e., transform does not affect the bound) as default
uniformbound(kernel::TransformedKernel) = uniformbound(kernel.kernel)
# uniform bounds of the norm of tensor product kernels
uniformbound(kernel::KernelTensorProduct) = prod(uniformbound, kernel.kernels)
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 496 | @testset "Aqua" begin
# Test ambiguities separately without Base and Core
# Ref: https://github.com/JuliaTesting/Aqua.jl/issues/77
# Only test Project.toml formatting on Julia > 1.6 when running Github action
# Ref: https://github.com/JuliaTesting/Aqua.jl/issues/105
Aqua.test_all(
CalibrationTests;
ambiguities=false,
project_toml_formatting=VERSION >= v"1.7" || !haskey(ENV, "GITHUB_ACTIONS"),
)
Aqua.test_ambiguities([CalibrationTests])
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 3763 | @testset "binary_trend" begin
# sample data
function generate_binary_data(rng::Random.AbstractRNG, nsamples::Int)
# generate predictions
dist = Dirichlet(2, 1)
predictions = [rand(rng, dist) for _ in 1:nsamples]
# generate targets
targets_consistent = [rand(rng) < predictions[i][1] ? 1 : 2 for i in 1:nsamples]
targets_onlytwo = fill(2, nsamples)
return (predictions, targets_consistent), (predictions, targets_onlytwo)
end
data_consistent, data_only_two = generate_binary_data(StableRNG(18732), 500)
# define tensor product kernel (using the mean total variation distance as bandwidth)
kernel = (ExponentialKernel() ∘ ScaleTransform(3)) ⊗ WhiteKernel()
@testset "Consistency test" begin
# define estimators
estimators = (
SKCE(kernel),
SKCE(kernel; unbiased=false),
(SKCE(kernel; blocksize=b) for b in (2, 10, 50, 100))...,
)
for estimator in estimators
test_consistent = @inferred(ConsistencyTest(estimator, data_consistent...))
@test @inferred(pvalue(test_consistent)) > 0.1
print(test_consistent)
test_only_two = @inferred(ConsistencyTest(estimator, data_only_two...))
@test @inferred(pvalue(test_only_two)) < 1e-6
print(test_only_two)
end
end
@testset "Distribution-free tests" begin
# define estimators
estimators = (
SKCE(kernel),
SKCE(kernel; unbiased=false),
(SKCE(kernel; blocksize=b) for b in (2, 10, 50, 100))...,
)
for estimator in estimators
test_consistent = @inferred(
DistributionFreeSKCETest(estimator, data_consistent...)
)
@test @inferred(pvalue(test_consistent)) > 0.7
println(test_consistent)
test_only_two = @inferred(DistributionFreeSKCETest(estimator, data_only_two...))
@test @inferred(pvalue(test_only_two)) < (estimator.unbiased ? 0.4 : 1e-6)
println(test_only_two)
end
end
@testset "Asymptotic block SKCE test" begin
for blocksize in (2, 10, 50, 100)
test_consistent = @inferred(
AsymptoticBlockSKCETest(kernel, blocksize, data_consistent...)
)
@test @inferred(pvalue(test_consistent)) > 0.2
println(test_consistent)
test_only_two = @inferred(
AsymptoticBlockSKCETest(kernel, blocksize, data_only_two...)
)
@test @inferred(pvalue(test_only_two)) < 1e-6
println(test_only_two)
end
end
@testset "Asymptotic SKCE test" begin
test_consistent = @inferred(AsymptoticSKCETest(kernel, data_consistent...))
@test @inferred(pvalue(test_consistent)) > 0.3
println(test_consistent)
test_only_two = @inferred(AsymptoticSKCETest(kernel, data_only_two...))
@test @inferred(pvalue(test_only_two)) < 1e-6
println(test_only_two)
end
@testset "Asymptotic CME test" begin
# define estimator (uniformly distributed test locations)
rng = StableRNG(6789)
testpredictions = [rand(rng, Dirichlet(2, 1)) for _ in 1:10]
testtargets = rand(rng, 1:2, 10)
estimator = @inferred(UCME(kernel, testpredictions, testtargets))
test_consistent = @inferred(AsymptoticCMETest(estimator, data_consistent...))
@test @inferred(pvalue(test_consistent)) > 0.1
println(test_consistent)
test_only_two = @inferred(AsymptoticCMETest(estimator, data_only_two...))
@test @inferred(pvalue(test_only_two)) < 1e-6
println(test_only_two)
end
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 3864 | @testset "cme.jl" begin
@testset "estimate and statistic" begin
kernel = (ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel()
function deviation(testprediction, testtarget, prediction, target)
testlocation = (testprediction, testtarget)
return mapreduce(+, prediction, 1:length(prediction)) do p, t
((t == target) - p) * kernel(testlocation, (prediction, t))
end
end
for nclasses in (2, 10, 100), nsamples in (10, 50, 100)
# define estimator (sample test locations uniformly)
dist = Dirichlet(nclasses, 1)
testpredictions = [rand(dist) for _ in 1:(nsamples ÷ 10)]
testtargets = rand(1:nclasses, nsamples ÷ 10)
estimator = UCME(kernel, testpredictions, testtargets)
# sample predictions and targets
predictions = [rand(dist) for _ in 1:nsamples]
targets_consistent = [
rand(Categorical(prediction)) for prediction in predictions
]
targets_onlyone = ones(Int, length(predictions))
# compute calibration error estimate and test statistic
for targets in (targets_consistent, targets_onlyone)
test = AsymptoticCMETest(estimator, predictions, targets)
@test test.kernel == kernel
@test test.nsamples == nsamples
@test test.ntestsamples == nsamples ÷ 10
@test test.estimate ≈ estimator(predictions, targets)
deviations =
deviation.(testpredictions', testtargets', predictions, targets)
mean_deviations = vec(mean(deviations; dims=1))
@test test.mean_deviations ≈ mean_deviations
# use of `inv` can lead to slightly different results
S = cov(deviations; dims=1)
statistic = nsamples * mean_deviations' * inv(S) * mean_deviations
@test test.statistic ≈ statistic rtol = 1e-4
end
end
end
@testset "consistency" begin
kernel = (ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel()
αs = 0.05:0.1:0.95
nsamples = 100
pvalues_consistent = Vector{Float64}(undef, 100)
pvalues_onlyone = similar(pvalues_consistent)
for nclasses in (2, 10)
# create estimator (sample test locations uniformly)
rng = StableRNG(7434)
dist = Dirichlet(nclasses, 1)
testpredictions = [rand(rng, dist) for _ in 1:5]
testtargets = rand(rng, 1:nclasses, 5)
estimator = UCME(kernel, testpredictions, testtargets)
predictions = [Vector{Float64}(undef, nclasses) for _ in 1:nsamples]
targets_consistent = Vector{Int}(undef, nsamples)
targets_onlyone = ones(Int, nsamples)
for i in eachindex(pvalues_consistent)
# sample predictions and targets
for j in 1:nsamples
rand!(rng, dist, predictions[j])
targets_consistent[j] = rand(rng, Categorical(predictions[j]))
end
# define test
test_consistent = AsymptoticCMETest(
estimator, predictions, targets_consistent
)
test_onlyone = AsymptoticCMETest(estimator, predictions, targets_onlyone)
# estimate pvalues
pvalues_consistent[i] = pvalue(test_consistent)
pvalues_onlyone[i] = pvalue(test_onlyone)
end
# compute empirical test errors
ecdf_consistent = ecdf(pvalues_consistent)
@test all(ecdf_consistent(α) < α + 0.15 for α in αs)
@test all(p < 0.05 for p in pvalues_onlyone)
end
end
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 3389 | @testset "consistency.jl" begin
@testset "ECE" begin
ce = ECE(UniformBinning(10))
N = 1_000
for nclasses in (2, 5, 10)
println("Consistency test with ECE ($nclasses classes)")
# sample predictions and targets
rng = StableRNG(3881)
dist = Dirichlet(nclasses, 1)
predictions = [rand(rng, dist) for _ in 1:10]
targets_consistent = [
rand(rng, Categorical(prediction)) for prediction in predictions
]
targets_onlyone = ones(Int, length(predictions))
# define consistency resampling tests
test_consistent = ConsistencyTest(ce, predictions, targets_consistent)
test_onlyone = ConsistencyTest(ce, predictions, targets_onlyone)
        # compute pvalues for both the consistent and the degenerate targets
pvalues = [pvalue(test_consistent) for _ in 1:N]
if nclasses == 2
@test mean(pvalues) ≈ 0.02 atol = 1e-2
elseif nclasses == 5
@test mean(pvalues) ≈ 0.03 atol = 1e-2
elseif nclasses == 10
@test mean(pvalues) ≈ 0.03 atol = 1e-2
end
pvalues = [pvalue(test_onlyone) for _ in 1:N]
@test mean(pvalues) < 1e-3
end
end
@testset "Block SKCE" begin
nsamples = 10
N = 1_000
for blocksize in (2, 5)
ce = SKCE(
(ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel();
blocksize=blocksize,
)
for nclasses in (2, 5, 10)
                println(
                    "Consistency test with the block SKCE estimator (blocksize $blocksize, $nclasses classes)"
                )
# sample predictions and targets
rng = StableRNG(8339)
dist = Dirichlet(nclasses, 1)
predictions = [rand(rng, dist) for _ in 1:10]
targets_consistent = [
rand(rng, Categorical(prediction)) for prediction in predictions
]
targets_onlyone = ones(Int, length(predictions))
# define consistency resampling tests
test_consistent = ConsistencyTest(ce, predictions, targets_consistent)
test_onlyone = ConsistencyTest(ce, predictions, targets_onlyone)
                # compute pvalues for both the consistent and the degenerate targets
pvalues = [pvalue(test_consistent) for _ in 1:N]
if blocksize == 2
if nclasses == 2
@test mean(pvalues) ≈ 0.18 atol = 1e-2
elseif nclasses == 5
@test mean(pvalues) ≈ 0.79 atol = 1e-2
elseif nclasses == 10
@test mean(pvalues) ≈ 0.32 atol = 1e-2
end
else
if nclasses == 2
@test mean(pvalues) ≈ 0.41 atol = 1e-2
elseif nclasses == 5
@test mean(pvalues) ≈ 0.35 atol = 1e-2
elseif nclasses == 10
@test mean(pvalues) ≈ 0.11 atol = 1e-2
end
end
pvalues = [pvalue(test_onlyone) for _ in 1:N]
@test mean(pvalues) < 1e-2
end
end
end
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 667 | using CalibrationTests
using Aqua
using CalibrationErrors
using Distributions
using StableRNGs
using StatsBase
using Random
using Statistics
using Test
Random.seed!(1234)
@testset "CalibrationTests" begin
@testset "General" begin
include("aqua.jl")
end
@testset "Binary trend" begin
include("binary_trend.jl")
end
@testset "Consistency test" begin
include("consistency.jl")
end
@testset "SKCE" begin
include("skce/asymptotic.jl")
include("skce/asymptotic_block.jl")
include("skce/distribution_free.jl")
end
@testset "Asymptotic CME" begin
include("cme.jl")
end
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 3146 | @testset "asymptotic.jl" begin
@testset "estimate, statistic, and kernel matrix" begin
kernel = (ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel()
unbiasedskce = SKCE(kernel)
biasedskce = SKCE(kernel; unbiased=false)
for nclasses in (2, 10, 100), nsamples in (10, 50, 100)
# sample predictions and targets
dist = Dirichlet(nclasses, 1)
predictions = [rand(dist) for _ in 1:nsamples]
targets_consistent = [
rand(Categorical(prediction)) for prediction in predictions
]
targets_onlyone = ones(Int, length(predictions))
# compute calibration error estimate and test statistic
for targets in (targets_consistent, targets_onlyone)
estimate, statistic, kernelmatrix = CalibrationTests.estimate_statistic_kernelmatrix(
kernel, predictions, targets
)
@test estimate ≈ unbiasedskce(predictions, targets)
@test statistic ≈
nsamples / (nsamples - 1) * unbiasedskce(predictions, targets) -
biasedskce(predictions, targets)
@test kernelmatrix ≈
CalibrationErrors.unsafe_skce_eval.(
(kernel,),
predictions,
targets,
permutedims(predictions),
permutedims(targets),
)
end
end
end
@testset "consistency" begin
kernel1 = ExponentialKernel() ∘ ScaleTransform(0.1)
kernel2 = WhiteKernel()
kernel = kernel1 ⊗ kernel2
αs = 0.05:0.1:0.95
nsamples = 100
pvalues_consistent = Vector{Float64}(undef, 100)
pvalues_onlyone = similar(pvalues_consistent)
for nclasses in (2, 10)
rng = StableRNG(1523)
dist = Dirichlet(nclasses, 1)
predictions = [Vector{Float64}(undef, nclasses) for _ in 1:nsamples]
targets_consistent = Vector{Int}(undef, nsamples)
targets_onlyone = ones(Int, nsamples)
for i in eachindex(pvalues_consistent)
# sample predictions and targets
for j in 1:nsamples
rand!(rng, dist, predictions[j])
targets_consistent[j] = rand(rng, Categorical(predictions[j]))
end
# define test
test_consistent = AsymptoticSKCETest(
kernel, predictions, targets_consistent
)
test_onlyone = AsymptoticSKCETest(kernel, predictions, targets_onlyone)
# estimate pvalues
pvalues_consistent[i] = pvalue(test_consistent; bootstrap_iters=500)
pvalues_onlyone[i] = pvalue(test_onlyone; bootstrap_iters=500)
end
# compute empirical test errors
ecdf_consistent = ecdf(pvalues_consistent)
@test maximum(ecdf_consistent.(αs) .- αs) < 0.15
@test maximum(pvalues_onlyone) < 0.01
end
end
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 3899 | @testset "asymptotic_block.jl" begin
@testset "estimate, stderr, and z" begin
kernel = (ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel()
for nclasses in (2, 10, 100), nsamples in (10, 50, 100)
for blocksize in (2, 5, 10, 50)
                # blocksize may not be greater than the number of samples
blocksize < nsamples || continue
# sample predictions and targets
dist = Dirichlet(nclasses, 1)
predictions = [rand(dist) for _ in 1:nsamples]
targets_consistent = [
rand(Categorical(prediction)) for prediction in predictions
]
targets_onlyone = ones(Int, length(predictions))
skce = SKCE(kernel; blocksize=blocksize)
# for both sets of targets
for targets in (targets_consistent, targets_onlyone)
test = AsymptoticBlockSKCETest(kernel, blocksize, predictions, targets)
@test test.blocksize == blocksize
@test test.nblocks == nsamples ÷ blocksize
@test test.estimate ≈ skce(predictions, targets)
@test test.z == test.estimate / test.stderr
@test pvalue(test) ==
pvalue(test; tail=:right) ==
ccdf(Normal(), test.z)
@test_throws ArgumentError pvalue(test; tail=:left)
@test_throws ArgumentError pvalue(test; tail=:both)
for α in 0.55:0.05:0.95
q = quantile(Normal(), α)
@test confint(test; level=α) ==
confint(test; level=α, tail=:right) ==
(max(0, test.estimate - q * test.stderr), Inf)
@test_throws ArgumentError confint(test; level=α, tail=:left)
@test_throws ArgumentError confint(test; level=α, tail=:both)
end
end
end
end
end
@testset "consistency" begin
kernel1 = ExponentialKernel() ∘ ScaleTransform(0.1)
kernel2 = WhiteKernel()
kernel = kernel1 ⊗ kernel2
αs = 0.05:0.1:0.95
nsamples = 100
pvalues_consistent = Vector{Float64}(undef, 100)
pvalues_onlyone = similar(pvalues_consistent)
for blocksize in (2, 5, 10)
for nclasses in (2, 10)
rng = StableRNG(6144)
dist = Dirichlet(nclasses, 1)
predictions = [Vector{Float64}(undef, nclasses) for _ in 1:nsamples]
targets_consistent = Vector{Int}(undef, nsamples)
targets_onlyone = ones(Int, nsamples)
for i in eachindex(pvalues_consistent)
# sample predictions and targets
for j in 1:nsamples
rand!(rng, dist, predictions[j])
targets_consistent[j] = rand(rng, Categorical(predictions[j]))
end
# define test
test_consistent = AsymptoticBlockSKCETest(
kernel, blocksize, predictions, targets_consistent
)
test_onlyone = AsymptoticBlockSKCETest(
kernel, blocksize, predictions, targets_onlyone
)
# estimate pvalues
pvalues_consistent[i] = pvalue(test_consistent)
pvalues_onlyone[i] = pvalue(test_onlyone)
end
# compute empirical test errors
ecdf_consistent = ecdf(pvalues_consistent)
@test maximum(ecdf_consistent.(αs) .- αs) < 0.1
@test maximum(pvalues_onlyone) < 0.01
end
end
end
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | code | 3469 | @testset "distribution_free.jl" begin
@testset "bounds" begin
        # default bounds for base kernels
        @test CalibrationTests.uniformbound(ExponentialKernel()) == 1
        @test CalibrationTests.uniformbound(SqExponentialKernel()) == 1
        @test CalibrationTests.uniformbound(WhiteKernel()) == 1
        # default bounds for kernels with input transformations
        @test CalibrationTests.uniformbound(SqExponentialKernel() ∘ ScaleTransform(rand())) == 1
        @test CalibrationTests.uniformbound(ExponentialKernel() ∘ ScaleTransform(rand(10))) == 1
        # default bounds for scaled kernels
        @test CalibrationTests.uniformbound(42 * ExponentialKernel()) == 42
        # default bounds for tensor product kernels
        kernel = (3.2 * SqExponentialKernel()) ⊗ (2.7 * WhiteKernel())
        @test CalibrationTests.uniformbound(kernel) ≈ 3.2 * 2.7
        # default bounds for kernel terms
        @test CalibrationTests.uniformbound(SKCE(kernel; blocksize=2)) ≈ 2 * 3.2 * 2.7
end
@testset "estimator and estimates" begin
kernel = (ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel()
for skce in (SKCE(kernel), SKCE(kernel; unbiased=false), SKCE(kernel; blocksize=2))
for nclasses in (2, 10, 100), nsamples in (10, 50, 100)
# sample predictions and targets
dist = Dirichlet(nclasses, 1)
predictions = [rand(dist) for _ in 1:nsamples]
targets_consistent = [
rand(Categorical(prediction)) for prediction in predictions
]
targets_onlyone = ones(Int, length(predictions))
# for both sets of targets
for targets in (targets_consistent, targets_onlyone)
test = DistributionFreeSKCETest(skce, predictions, targets)
@test test.estimator == skce
@test test.n == nsamples
@test test.estimate ≈ skce(predictions, targets)
@test test.bound == CalibrationTests.uniformbound(skce)
end
end
end
end
@testset "consistency" begin
kernel = (ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel()
αs = 0.05:0.1:0.95
nsamples = 100
pvalues_consistent = Vector{Float64}(undef, 100)
for skce in (SKCE(kernel), SKCE(kernel; unbiased=false), SKCE(kernel; blocksize=2))
for nclasses in (2, 10)
rng = StableRNG(5921)
dist = Dirichlet(nclasses, 1)
predictions = [Vector{Float64}(undef, nclasses) for _ in 1:nsamples]
targets_consistent = Vector{Int}(undef, nsamples)
for i in eachindex(pvalues_consistent)
# sample predictions and targets
for j in 1:nsamples
rand!(rng, dist, predictions[j])
targets_consistent[j] = rand(rng, Categorical(predictions[j]))
end
# define test
test_consistent = DistributionFreeSKCETest(
skce, predictions, targets_consistent
)
# estimate pvalue
pvalues_consistent[i] = pvalue(test_consistent)
end
# compute empirical test errors
@test all(ecdf(pvalues_consistent).(αs) .< αs)
end
end
end
end
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.6.3 | 32b9b56182494346dffcdf3e29e2b69195cd0d07 | docs | 2954 | # CalibrationTests.jl
Hypothesis tests of calibration.
[](https://github.com/devmotion/CalibrationTests.jl/actions?query=workflow%3ACI+branch%3Amain)
[](https://zenodo.org/badge/latestdoi/215970266)
[](https://codecov.io/gh/devmotion/CalibrationTests.jl)
[](https://coveralls.io/github/devmotion/CalibrationTests.jl?branch=main)
[](https://github.com/invenia/BlueStyle)
[](https://github.com/JuliaTesting/Aqua.jl)
**There are also [Python](https://github.com/devmotion/pycalibration) and [R](https://github.com/devmotion/rcalibration) interfaces for this package**
## Overview
This package implements different hypothesis tests for calibration of
probabilistic models in the Julia language.
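A minimal usage sketch, mirroring this package's test suite (the predictions below are toy placeholders, not a real model):

```julia
using CalibrationTests, CalibrationErrors, Distributions

# toy predictions of a 3-class probabilistic classifier, with targets
# drawn consistently from the predicted distributions
predictions = [rand(Dirichlet(3, 1)) for _ in 1:100]
targets = [rand(Categorical(p)) for p in predictions]

# SKCE-based calibration test with a tensor product kernel
kernel = (ExponentialKernel() ∘ ScaleTransform(0.1)) ⊗ WhiteKernel()
test = AsymptoticSKCETest(kernel, predictions, targets)
pvalue(test)  # large p-values indicate no evidence against calibration
```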
## Related packages
The statistical tests in this package are based on the calibration error estimators
in the package [CalibrationErrors.jl](https://github.com/devmotion/CalibrationErrors.jl).
[pycalibration](https://github.com/devmotion/pycalibration) is a Python interface for CalibrationErrors.jl and CalibrationTests.jl.
[rcalibration](https://github.com/devmotion/rcalibration) is an R interface for CalibrationErrors.jl and CalibrationTests.jl.
## Citing
If you use CalibrationTests.jl as part of your research, teaching, or other activities, please consider citing the following publications:
Widmann, D., Lindsten, F., & Zachariah, D. (2019). [Calibration tests in multi-class
classification: A unifying framework](https://proceedings.neurips.cc/paper/2019/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html). In
*Advances in Neural Information Processing Systems 32 (NeurIPS 2019)* (pp. 12257–12267).
Widmann, D., Lindsten, F., & Zachariah, D. (2021).
[Calibration tests beyond classification](https://openreview.net/forum?id=-bxf89v3Nx).
To be presented at *ICLR 2021*.
## Acknowledgements
This work was financially supported by the Swedish Research Council via the projects *Learning of Large-Scale Probabilistic Dynamical Models* (contract number: 2016-04278), *Counterfactual Prediction Methods for Heterogeneous Populations* (contract number: 2018-05040), and *Handling Uncertainty in Machine Learning Systems* (contract number: 2020-04122), by the Swedish Foundation for Strategic Research via the project *Probabilistic Modeling and Inference for Machine Learning* (contract number: ICA16-0015), by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by ELLIIT.
| CalibrationTests | https://github.com/devmotion/CalibrationTests.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | code | 283 | using FactoredValueMCTS
using Documenter
makedocs(;
sitename="FactoredValueMCTS.jl",
authors="Stanford Intelligent Systems Laboratory",
modules=[FactoredValueMCTS],
format=Documenter.HTML()
)
deploydocs(;
repo="github.com/JuliaPOMDP/FactoredValueMCTS.jl"
)
| FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | code | 1998 | module FactoredValueMCTS
using Random
using LinearAlgebra
using POMDPs
using POMDPTools
using MultiAgentPOMDPs
using POMDPLinter: @req, @subreq, @POMDP_require
using MCTS
using Graphs
using MCTS: convert_estimator
# Patch simulate to support vector of rewards
function POMDPs.simulate(sim::RolloutSimulator, mdp::JointMDP, policy::Policy, initialstate::S) where {S}
if sim.eps === nothing
eps = 0.0
else
eps = sim.eps
end
if sim.max_steps === nothing
max_steps = typemax(Int)
else
max_steps = sim.max_steps
end
s = initialstate
    # Probe the reward once, only to determine its type (scalar vs per-agent vector)
    # TODO: doesn't this add unnecessary action search?
    r = @gen(:r)(mdp, s, action(policy, s), sim.rng)
if r isa AbstractVector
r_total = zeros(n_agents(mdp))
else
r_total = 0.0
end
    r_total = sim_helper!(r_total, sim, mdp, policy, s, max_steps, eps)
    return r_total
end
function sim_helper!(r_total::AbstractVector{F}, sim, mdp, policy, s, max_steps, eps) where {F}
step = 1
disc = 1.0
while disc > eps && !isterminal(mdp, s) && step <= max_steps
a = action(policy, s)
sp, r = @gen(:sp, :r)(mdp, s, a, sim.rng)
r_total .+= disc.*r
s = sp
disc *= discount(mdp)
step += 1
    end
    return r_total
end
# Floats are immutable in Julia, so this variant accumulates locally and returns
# the total; the caller rebinds its `r_total` to the return value.
function sim_helper!(r_total::AbstractFloat, sim, mdp, policy, s, max_steps, eps)
    step = 1
    disc = 1.0
    while disc > eps && !isterminal(mdp, s) && step <= max_steps
        a = action(policy, s)
        sp, r = @gen(:sp, :r)(mdp, s, a, sim.rng)
        r_total += disc * r
        s = sp
        disc *= discount(mdp)
        step += 1
    end
    return r_total
end
###
# Factored Value MCTS
#
abstract type CoordinationStatistics end
include(joinpath("fvmcts", "factoredpolicy.jl"))
include(joinpath("fvmcts", "fv_mcts_vanilla.jl"))
include(joinpath("fvmcts", "action_coordination", "varel.jl"))
include(joinpath("fvmcts", "action_coordination", "maxplus.jl"))
export
FVMCTSSolver,
MaxPlus,
VarEl
end
| FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | code | 616 |
"""
Random Policy factored for each agent. Avoids exploding action space.
"""
struct FactoredRandomPolicy{RNG<:AbstractRNG,P<:JointMDP, U<:Updater} <: Policy
rng::RNG
problem::P
updater::U
end
FactoredRandomPolicy(problem::JointMDP; rng=Random.GLOBAL_RNG, updater=NothingUpdater()) = FactoredRandomPolicy(rng, problem, updater)
function POMDPs.action(policy::FactoredRandomPolicy, s)
return [rand(policy.rng, agent_actions(policy.problem, i, si)) for (i, si) in enumerate(s)]
end
POMDPs.solve(solver::RandomSolver, problem::JointMDP) = FactoredRandomPolicy(solver.rng, problem, NothingUpdater()) | FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | code | 14824 | using StaticArrays
using Base.Threads: @spawn
abstract type AbstractCoordinationStrategy end
struct VarEl <: AbstractCoordinationStrategy
end
Base.@kwdef struct MaxPlus <:AbstractCoordinationStrategy
message_iters::Int64 = 10
message_norm::Bool = true
use_agent_utils::Bool = false
node_exploration::Bool = true
edge_exploration::Bool = true
end
"""
Factored Value Monte Carlo Tree Search solver datastructure
Fields:
n_iterations::Int64
Number of iterations during each action() call.
default: 100
max_time::Float64
Maximum CPU time to spend computing an action.
default::Inf
depth::Int64
        Maximum depth of the search tree (rollout horizon).
        default: 10
exploration_constant::Float64:
        Specifies how much the solver should explore. In the UCB equation, Q + c*sqrt(log(t)/N), c is the exploration constant.
The exploration terms for FV-MCTS-Var-El and FV-MCTS-Max-Plus are different but the role of c is the same.
default: 1.0
rng::AbstractRNG:
Random number generator
estimate_value::Any (rollout policy)
Function, object, or number used to estimate the value at the leaf nodes.
If this is a function `f`, `f(mdp, s, depth)` will be called to estimate the value.
If this is an object `o`, `estimate_value(o, mdp, s, depth)` will be called.
If this is a number, the value will be set to that number
default: RolloutEstimator(RandomSolver(rng))
init_Q::Any
Function, object, or number used to set the initial Q(s,a) value at a new node.
If this is a function `f`, `f(mdp, s, a)` will be called to set the value.
If this is an object `o`, `init_Q(o, mdp, s, a)` will be called.
If this is a number, Q will be set to that number
default: 0.0
init_N::Any
Function, object, or number used to set the initial N(s,a) value at a new node.
If this is a function `f`, `f(mdp, s, a)` will be called to set the value.
If this is an object `o`, `init_N(o, mdp, s, a)` will be called.
If this is a number, N will be set to that number
default: 0
reuse_tree::Bool
If this is true, the tree information is re-used for calculating the next plan.
Of course, clear_tree! can always be called to override this.
default: false
coordination_strategy::AbstractCoordinationStrategy
The specific strategy with which to compute the best joint action from the current MCTS statistics.
default: VarEl()
"""
Base.@kwdef mutable struct FVMCTSSolver <: AbstractMCTSSolver
n_iterations::Int64 = 100
max_time::Float64 = Inf
depth::Int64 = 10
exploration_constant::Float64 = 1.0
rng::AbstractRNG = Random.GLOBAL_RNG
estimate_value::Any = RolloutEstimator(RandomSolver(rng))
init_Q::Any = 0.0
init_N::Any = 0
reuse_tree::Bool = false
coordination_strategy::AbstractCoordinationStrategy = VarEl()
end
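# Example (hypothetical usage sketch; `mdp` stands for any model implementing the
# MultiAgentPOMDPs.jl JointMDP interface, and the keyword values are illustrative):
#
#     solver = FVMCTSSolver(n_iterations=200, depth=15, exploration_constant=5.0,
#                           coordination_strategy=MaxPlus())
#     planner = solve(solver, mdp)
#     a = action(planner, rand(initialstate(mdp)))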
mutable struct FVMCTSTree{S,A,CS<:CoordinationStatistics}
# To map the multi-agent state vector to the ID of the node in the tree
state_map::Dict{S,Int64}
# The next two vectors have one for each node ID in the tree
total_n::Vector{Int} # The number of times the node has been tried
s_labels::Vector{S} # The state corresponding to the node ID
# List of all individual actions of each agent for coordination purposes.
all_agent_actions::Vector{A}
coordination_stats::CS
lock::ReentrantLock
end
function FVMCTSTree(all_agent_actions::Vector{A},
coordination_stats::CS,
init_state::S,
lock::ReentrantLock,
sz::Int64=10000) where {S, A, CS <: CoordinationStatistics}
return FVMCTSTree{S,A,CS}(Dict{S,Int64}(),
sizehint!(Int[], sz),
sizehint!(S[], sz),
all_agent_actions,
coordination_stats,
lock
)
end # function
Base.isempty(t::FVMCTSTree) = isempty(t.state_map)
state_nodes(t::FVMCTSTree) = (FVStateNode(t, id) for id in 1:length(t.total_n))
struct FVStateNode{S}
tree::FVMCTSTree{S}
id::Int64
end
# Accessors for state nodes
@inline state(n::FVStateNode) = n.tree.s_labels[n.id]
@inline total_n(n::FVStateNode) = n.tree.total_n[n.id]
## No need for `children` or ActionNode just yet
mutable struct FVMCTSPlanner{S, A, SE, CS <: CoordinationStatistics, RNG <: AbstractRNG} <: AbstractMCTSPlanner{JointMDP{S,A}}
solver::FVMCTSSolver
mdp::JointMDP{S,A}
tree::FVMCTSTree{S,A,CS}
solved_estimate::SE
rng::RNG
end
"""
Called internally in solve() to create the FVMCTSPlanner where Var-El is the specific action coordination strategy.
Creates VarElStatistics internally with the CG components and the minimum degree ordering heuristic.
"""
function varel_joint_mcts_planner(solver::FVMCTSSolver,
mdp::JointMDP{S,A},
init_state::S,
) where {S,A}
# Get coordination graph components from maximal cliques
#adjmat = coord_graph_adj_mat(mdp)
#@assert size(adjmat)[1] == n_agents(mdp) "Adjacency Matrix does not match number of agents!"
#adjmatgraph = SimpleGraph(adjmat)
adjmatgraph = coordination_graph(mdp)
coord_graph_components = maximal_cliques(adjmatgraph)
min_degree_ordering = sortperm(degree(adjmatgraph))
# Initialize full agent actions
all_agent_actions = Vector{(actiontype(mdp))}(undef, n_agents(mdp))
for i = 1:n_agents(mdp)
all_agent_actions[i] = agent_actions(mdp, i)
end
ve_stats = VarElStatistics{S}(coord_graph_components, min_degree_ordering,
Dict{typeof(init_state),Vector{Vector{Int64}}}(),
Dict{typeof(init_state),Vector{Vector{Int64}}}(),
)
# Create tree from the current state
tree = FVMCTSTree(all_agent_actions, ve_stats,
init_state, ReentrantLock(), solver.n_iterations)
se = convert_estimator(solver.estimate_value, solver, mdp)
return FVMCTSPlanner(solver, mdp, tree, se, solver.rng)
end # end function
"""
Called internally in solve() to create the FVMCTSPlanner where Max-Plus is the specific action coordination strategy.
Creates MaxPlusStatistics and assumes the various MP flags are sent down from the CoordinationStrategy object given to the solver.
"""
function maxplus_joint_mcts_planner(solver::FVMCTSSolver,
mdp::JointMDP{S,A},
init_state::S,
message_iters::Int64,
message_norm::Bool,
use_agent_utils::Bool,
node_exploration::Bool,
edge_exploration::Bool,
) where {S,A}
@assert (node_exploration || edge_exploration) "At least one of nodes or edges should explore!"
#= adjmat = coord_graph_adj_mat(mdp)
@assert size(adjmat)[1] == n_agents(mdp) "Adjacency Mat does not match number of agents!" =#
#adjmatgraph = SimpleGraph(adjmat)
adjmatgraph = coordination_graph(mdp)
@assert size(adjacency_matrix(adjmatgraph))[1] == n_agents(mdp)
# Initialize full agent actions
# TODO(jkg): this is incorrect? Or we need to override actiontype to refer to agent actions?
all_agent_actions = Vector{(actiontype(mdp))}(undef, n_agents(mdp))
for i = 1:n_agents(mdp)
all_agent_actions[i] = agent_actions(mdp, i)
end
mp_stats = MaxPlusStatistics{S}(adjmatgraph,
message_iters,
message_norm,
use_agent_utils,
node_exploration,
edge_exploration,
Dict{S,PerStateMPStats}())
# Create tree from the current state
tree = FVMCTSTree(all_agent_actions, mp_stats,
init_state, ReentrantLock(), solver.n_iterations)
se = convert_estimator(solver.estimate_value, solver, mdp)
return FVMCTSPlanner(solver, mdp, tree, se, solver.rng)
end
# Reset tree.
function clear_tree!(planner::FVMCTSPlanner)
# Clear out state map dict entirely
empty!(planner.tree.state_map)
    # Empty per-node vectors with size hints; total_n must be cleared too, since
    # insert_node! pushes to it and node ids are defined by length(s_labels)
    sz = min(planner.solver.n_iterations, 100_000)
    empty!(planner.tree.s_labels)
    sizehint!(planner.tree.s_labels, sz)
    empty!(planner.tree.total_n)
    sizehint!(planner.tree.total_n, sz)
# Don't touch all_agent_actions and coord graph component
# Just clear comp stats dict
clear_statistics!(planner.tree.coordination_stats)
end
MCTS.init_Q(n::Number, mdp::JointMDP, s, c, a) = convert(Float64, n)
MCTS.init_N(n::Number, mdp::JointMDP, s, c, a) = convert(Int, n)
# No computation is done in solve; the solver is just given the mdp model that it will work with
# and in case of MaxPlus, the various flags for the MaxPlus behavior
function POMDPs.solve(solver::FVMCTSSolver, mdp::JointMDP)
    if solver.coordination_strategy isa VarEl
        return varel_joint_mcts_planner(solver, mdp, rand(solver.rng, initialstate(mdp)))
    elseif solver.coordination_strategy isa MaxPlus
return maxplus_joint_mcts_planner(solver, mdp, rand(solver.rng, initialstate(mdp)),
solver.coordination_strategy.message_iters,
solver.coordination_strategy.message_norm,
solver.coordination_strategy.use_agent_utils,
solver.coordination_strategy.node_exploration,
solver.coordination_strategy.edge_exploration)
else
        error("Unsupported coordination strategy: $(typeof(solver.coordination_strategy))")
end
end
# IMP: Overriding action for FVMCTSPlanner here
# NOTE: Hardcoding no tree reuse for now
function POMDPs.action(planner::FVMCTSPlanner, s)
clear_tree!(planner) # Always call this at the top
plan!(planner, s)
action = coordinate_action(planner.mdp, planner.tree, s)
return action
end
function POMDPTools.action_info(planner::FVMCTSPlanner, s)
clear_tree!(planner) # Always call this at the top
plan!(planner, s)
action = coordinate_action(planner.mdp, planner.tree, s)
return action, nothing
end
function plan!(planner::FVMCTSPlanner, s)
planner.tree = build_tree(planner, s)
end
# build_tree can be called on the assumption that no reuse AND tree is reinitialized
function build_tree(planner::FVMCTSPlanner, s::S) where S
n_iterations = planner.solver.n_iterations
depth = planner.solver.depth
root = insert_node!(planner.tree, planner, s)
# Simulate can be multi-threaded
@sync for n = 1:n_iterations
@spawn simulate(planner, root, depth)
end
return planner.tree
end
function simulate(planner::FVMCTSPlanner, node::FVStateNode, depth::Int64)
mdp = planner.mdp
rng = planner.rng
s = state(node)
tree = node.tree
# once depth is zero return
if isterminal(planner.mdp, s)
return 0.0
elseif depth == 0
return estimate_value(planner.solved_estimate, planner.mdp, s, depth)
end
# Choose best UCB action (NOT an action node as in vanilla MCTS)
ucb_action = coordinate_action(mdp, planner.tree, s, planner.solver.exploration_constant, node.id)
# Monte Carlo Transition
sp, r = @gen(:sp, :r)(mdp, s, ucb_action, rng)
spid = lock(tree.lock) do
get(tree.state_map, sp, 0) # may be non-zero even with no tree reuse
end
if spid == 0
spn = insert_node!(tree, planner, sp)
spid = spn.id
q = r .+ discount(mdp) * estimate_value(planner.solved_estimate, planner.mdp, sp, depth - 1)
else
q = r .+ discount(mdp) * simulate(planner, FVStateNode(tree, spid) , depth - 1)
end
# NOTE: Not bothering with tree visualization right now
# Augment N(s)
lock(tree.lock) do
tree.total_n[node.id] += 1
end
# Update component statistics! (non-trivial)
# This is related but distinct from initialization
update_statistics!(mdp, tree, s, ucb_action, q)
return q
end
@POMDP_require simulate(planner::FVMCTSPlanner, s, depth::Int64) begin
mdp = planner.mdp
P = typeof(mdp)
@assert P <: JointMDP
SV = statetype(P)
@req iterate(::SV)
#@assert typeof(SV) <: AbstractVector
AV = actiontype(P)
    @assert AV <: AbstractVector
@req discount(::P)
@req isterminal(::P, ::SV)
@subreq insert_node!(planner.tree, planner, s)
@subreq estimate_value(planner.solved_estimate, mdp, s, depth)
@req gen(::P, ::SV, ::AV, ::typeof(planner.rng)) # XXX this is not exactly right - it could be satisfied with transition
## Requirements from MMDP Model
@req agent_actions(::P, ::Int64)
@req agent_actions(::P, ::Int64, ::eltype(SV))
@req n_agents(::P)
@req coordination_graph(::P)
    # The joint state is used as a dictionary key, so it must support isequal/hash
    @req isequal(::SV, ::SV)
    @req hash(::SV)
end
function insert_node!(tree::FVMCTSTree{S,A,CS}, planner::FVMCTSPlanner,
s::S) where {S,A,CS <: CoordinationStatistics}
lock(tree.lock) do
push!(tree.s_labels, s)
tree.state_map[s] = length(tree.s_labels)
push!(tree.total_n, 1)
# NOTE: Could actually make actions state-dependent if need be
init_statistics!(tree, planner, s)
end
# length(tree.s_labels) is just an alias for the number of state nodes
ls = lock(tree.lock) do
length(tree.s_labels)
end
return FVStateNode(tree, ls)
end
@POMDP_require insert_node!(tree::FVMCTSTree, planner::FVMCTSPlanner, s) begin
P = typeof(planner.mdp)
AV = actiontype(P)
A = eltype(AV)
SV = typeof(s)
#S = eltype(SV)
# TODO: Review IQ and IN
IQ = typeof(planner.solver.init_Q)
if !(IQ <: Number) && !(IQ <: Function)
@req init_Q(::IQ, ::P, ::SV, ::Vector{Int64}, ::AbstractVector{A})
end
IN = typeof(planner.solver.init_N)
if !(IN <: Number) && !(IN <: Function)
        @req init_N(::IN, ::P, ::SV, ::Vector{Int64}, ::AbstractVector{A})
end
    @req isequal(::SV, ::SV)
    @req hash(::SV)
end
| FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | code | 12891 | # NOTE: Matrix implicitly assumes all agents have same number of actions
mutable struct PerStateMPStats
agent_action_n::Matrix{Int64} # N X A
agent_action_q::Matrix{Float64}
edge_action_n::Matrix{Int64} # |E| X A^2
edge_action_q::Matrix{Float64}
end
"""
Tracks the information and statistics needed by Max-Plus to coordinate the joint action
in Factored-Value MCTS. Putting the parameters here is a little ugly, but `coordinate_action` can't take them since VarEl doesn't use those arguments.
Fields:
adjmatgraph::SimpleGraph
The coordination graph as a Graphs SimpleGraph.
message_iters::Int64
Number of rounds of message passing.
message_norm::Bool
Whether to normalize the messages or not after message passing.
use_agent_utils::Bool
Whether to include the per-agent utilities while computing the best agent action (see our paper for details)
node_exploration::Bool
Whether to use the per-node UCB style bonus while computing the best agent action (see our paper for details)
edge_exploration::Bool
        Whether to use the per-edge UCB style bonus after the message passing rounds (see our paper for details). At least one of this and node_exploration MUST be true for exploration.
    all_states_stats::Dict{S,PerStateMPStats}
Maps each joint state in the tree to the per-state statistics.
"""
mutable struct MaxPlusStatistics{S} <: CoordinationStatistics
adjmatgraph::SimpleGraph
message_iters::Int64
message_norm::Bool
use_agent_utils::Bool
node_exploration::Bool
edge_exploration::Bool # NOTE: One of this or node exploration must be true
all_states_stats::Dict{S,PerStateMPStats}
end
function clear_statistics!(mp_stats::MaxPlusStatistics)
empty!(mp_stats.all_states_stats)
end
function update_statistics!(mdp::JointMDP{S,A}, tree::FVMCTSTree{S,A,MaxPlusStatistics{S}},
s::S, ucb_action::A, q::AbstractFloat) where {S,A}
update_statistics!(mdp, tree, s, ucb_action, ones(typeof(q), n_agents(mdp)) * q)
end
"""
Take the q-value from the MCTS step and distribute the updates across the per-node and per-edge q-stats as per the formula in our paper.
"""
function update_statistics!(mdp::JointMDP{S,A}, tree::FVMCTSTree{S,A,MaxPlusStatistics{S}},
s::S, ucb_action::A, q::AbstractVector{Float64}) where {S,A}
state_stats = tree.coordination_stats.all_states_stats[s]
nagents = n_agents(mdp)
# Update per agent action stats
for i = 1:nagents
ac_idx = agent_actionindex(mdp, i, ucb_action[i])
lock(tree.lock) do
state_stats.agent_action_n[i, ac_idx] += 1
state_stats.agent_action_q[i, ac_idx] +=
(q[i] - state_stats.agent_action_q[i, ac_idx]) / state_stats.agent_action_n[i, ac_idx]
end
end
# Now update per-edge action stats
for (idx, e) in enumerate(edges(tree.coordination_stats.adjmatgraph))
# NOTE: Need to be careful about action ordering
# Being more general to have unequal agent actions
edge_comp = (e.src,e.dst)
edge_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in edge_comp)
edge_ac_idx = LinearIndices(edge_tup)[agent_actionindex(mdp, e.src, ucb_action[e.src]),
agent_actionindex(mdp, e.dst, ucb_action[e.dst])]
q_edge_value = q[e.src] + q[e.dst]
lock(tree.lock) do
state_stats.edge_action_n[idx, edge_ac_idx] += 1
state_stats.edge_action_q[idx, edge_ac_idx] +=
(q_edge_value - state_stats.edge_action_q[idx, edge_ac_idx]) / state_stats.edge_action_n[idx, edge_ac_idx]
end
end
lock(tree.lock) do
tree.coordination_stats.all_states_stats[s] = state_stats
end
end
function init_statistics!(tree::FVMCTSTree{S,A,MaxPlusStatistics{S}}, planner::FVMCTSPlanner,
s::S) where {S,A}
n_agents = length(s)
# NOTE: Assuming all agents have the same actions here
n_all_actions = length(tree.all_agent_actions[1])
agent_action_n = zeros(Int64, n_agents, n_all_actions)
agent_action_q = zeros(Float64, n_agents, n_all_actions)
# Loop over agents and then actions
# TODO: Need to define init_N and init_Q for single agent
for i = 1:n_agents
for (j, ac) in enumerate(tree.all_agent_actions[i])
agent_action_n[i, j] = init_N(planner.solver.init_N, planner.mdp, s, i, ac)
agent_action_q[i, j] = init_Q(planner.solver.init_Q, planner.mdp, s, i, ac)
end
end
n_edges = ne(tree.coordination_stats.adjmatgraph)
edge_action_n = zeros(Int64, n_edges, n_all_actions^2)
edge_action_q = zeros(Float64, n_edges, n_all_actions^2)
# Loop over edges and then action_i \times action_j
for (idx, e) in enumerate(edges(tree.coordination_stats.adjmatgraph))
edge_comp = (e.src, e.dst)
n_edge_actions = prod([length(tree.all_agent_actions[c]) for c in edge_comp])
edge_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in edge_comp)
for edge_ac_idx = 1:n_edge_actions
            ct_idx = CartesianIndices(edge_tup)[edge_ac_idx]
            edge_action = [tree.all_agent_actions[c][ai] for (c, ai) in zip(edge_comp, Tuple(ct_idx))]
edge_action_n[idx, edge_ac_idx] = init_N(planner.solver.init_N, planner.mdp, s, edge_comp, edge_action)
edge_action_q[idx, edge_ac_idx] = init_Q(planner.solver.init_Q, planner.mdp, s, edge_comp, edge_action)
end
end
state_stats = PerStateMPStats(agent_action_n, agent_action_q, edge_action_n, edge_action_q)
lock(tree.lock) do
tree.coordination_stats.all_states_stats[s] = state_stats
end
end
"""
Runs Max-Plus at the current state using the per-state MaxPlusStatistics to compute the best joint action with either or both of node-wise and edge-wise exploration bonus. Rounds of message passing are followed by per-node maximization.
"""
function coordinate_action(mdp::JointMDP{S,A}, tree::FVMCTSTree{S,A,MaxPlusStatistics{S}}, s::S,
exploration_constant::Float64=0.0, node_id::Int64=0) where {S,A}
state_stats = lock(tree.lock) do
tree.coordination_stats.all_states_stats[s]
end
adjgraphmat = lock(tree.lock) do
tree.coordination_stats.adjmatgraph
end
k = tree.coordination_stats.message_iters
message_norm = tree.coordination_stats.message_norm
n_agents = length(s)
state_agent_actions = [agent_actions(mdp, i, si) for (i, si) in enumerate(s)]
n_all_actions = length(tree.all_agent_actions[1])
n_edges = ne(tree.coordination_stats.adjmatgraph)
# Init forward and backward messages and q0
fwd_messages = zeros(Float64, n_edges, n_all_actions)
bwd_messages = zeros(Float64, n_edges, n_all_actions)
if tree.coordination_stats.use_agent_utils
q_values = state_stats.agent_action_q / n_agents
else
q_values = zeros(size(state_stats.agent_action_q))
end
state_total_n = lock(tree.lock) do
(node_id > 0) ? tree.total_n[node_id] : 1
end
# Iterate over passes
for t = 1:k
fnormdiff, bnormdiff = perform_message_passing!(fwd_messages, bwd_messages, mdp, tree.all_agent_actions,
adjgraphmat, state_agent_actions, n_edges, q_values, message_norm,
0, state_stats, state_total_n)
if !tree.coordination_stats.use_agent_utils
q_values = zeros(size(state_stats.agent_action_q))
end
# Update Q value with messages
for i = 1:n_agents
# need indices of all edges that agent is involved in
nbrs = neighbors(tree.coordination_stats.adjmatgraph, i)
edgelist = collect(edges(adjgraphmat))
if tree.coordination_stats.use_agent_utils
@views q_values[i, :] = state_stats.agent_action_q[i, :]/n_agents
end
for n in nbrs
if Edge(i,n) in edgelist # use backward message
q_values[i,:] += bwd_messages[findfirst(isequal(Edge(i,n)), edgelist), :]
elseif Edge(n,i) in edgelist
q_values[i,:] += fwd_messages[findfirst(isequal(Edge(n,i)), edgelist), :]
else
@warn "Neither edge found!"
end
end
end
# If converged, break
if isapprox(fnormdiff, 0.0) && isapprox(bnormdiff, 0.0)
break
end
end # for t = 1:k
# If edge exploration flag enabled, do a final exploration bonus
if tree.coordination_stats.edge_exploration
perform_message_passing!(fwd_messages, bwd_messages, mdp, tree.all_agent_actions,
adjgraphmat, state_agent_actions, n_edges, q_values, message_norm,
exploration_constant, state_stats, state_total_n)
end # if edge_exploration
# Maximize q values for agents
best_action = Vector{eltype(A)}(undef, n_agents)
for i = 1:n_agents
# NOTE: Again can't just iterate over agent actions as it may be a subset
exp_q_values = zeros(length(state_agent_actions[i]))
if tree.coordination_stats.node_exploration
for (idx, ai) in enumerate(state_agent_actions[i])
ai_idx = agent_actionindex(mdp, i, ai)
exp_q_values[idx] = q_values[i, ai_idx] + exploration_constant*sqrt((log(state_total_n + 1.0))/(state_stats.agent_action_n[i, ai_idx] + 1.0))
end
else
for (idx, ai) in enumerate(state_agent_actions[i])
ai_idx = agent_actionindex(mdp, i, ai)
exp_q_values[idx] = q_values[i, ai_idx]
end
end
# NOTE: Can now look up index in exp_q_values and then again look at state_agent_actions
_, idx = findmax(exp_q_values)
best_action[i] = state_agent_actions[i][idx]
end
return best_action
end
function perform_message_passing!(fwd_messages::AbstractArray{F,2}, bwd_messages::AbstractArray{F,2},
mdp, all_agent_actions,
adjgraphmat, state_agent_actions, n_edges::Int, q_values, message_norm,
exploration_constant, state_stats, state_total_n) where {F}
# Iterate over edges
    fwd_messages_old = copy(fwd_messages)
    bwd_messages_old = copy(bwd_messages)
for (e_idx, e) in enumerate(edges(adjgraphmat))
i = e.src
j = e.dst
edge_tup_indices = LinearIndices(Tuple(1:length(all_agent_actions[c]) for c in (i,j)))
# forward: maximize sender
# NOTE: Can't do enumerate as action set might be smaller
# Need to look up global index of agent action and use that
# Need to break up vectorized loop
@inbounds for aj in state_agent_actions[j]
aj_idx = agent_actionindex(mdp, j, aj)
fwd_message_vals = zeros(length(state_agent_actions[i]))
# TODO: Should we use inbounds here again?
@inbounds for (idx, ai) in enumerate(state_agent_actions[i])
ai_idx = agent_actionindex(mdp, i, ai)
fwd_message_vals[idx] = q_values[i, ai_idx] - bwd_messages_old[e_idx, ai_idx] + state_stats.edge_action_q[e_idx, edge_tup_indices[ai_idx, aj_idx]]/n_edges + exploration_constant * sqrt( (log(state_total_n + 1.0)) / (state_stats.edge_action_n[e_idx, edge_tup_indices[ai_idx, aj_idx]] + 1) )
end
fwd_messages[e_idx, aj_idx] = maximum(fwd_message_vals)
end
@inbounds for ai in state_agent_actions[i]
ai_idx = agent_actionindex(mdp, i, ai)
bwd_message_vals = zeros(length(state_agent_actions[j]))
@inbounds for (idx, aj) in enumerate(state_agent_actions[j])
aj_idx = agent_actionindex(mdp, j, aj)
bwd_message_vals[idx] = q_values[j, aj_idx] - fwd_messages_old[e_idx, aj_idx] + state_stats.edge_action_q[e_idx, edge_tup_indices[ai_idx, aj_idx]]/n_edges + exploration_constant * sqrt( (log(state_total_n + 1.0))/ (state_stats.edge_action_n[e_idx, edge_tup_indices[ai_idx, aj_idx]] + 1) )
end
bwd_messages[e_idx, ai_idx] = maximum(bwd_message_vals)
end
# Normalize messages for better convergence
if message_norm
@views fwd_messages[e_idx, :] .-= sum(fwd_messages[e_idx, :])/length(fwd_messages[e_idx, :])
@views bwd_messages[e_idx, :] .-= sum(bwd_messages[e_idx, :])/length(bwd_messages[e_idx, :])
end
end # (idx,edges) in enumerate(edges)
# Return norm of message difference
return norm(fwd_messages - fwd_messages_old), norm(bwd_messages - bwd_messages_old)
end
| FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | code | 15514 | """
Tracks the information and statistics needed by Var-El to coordinate the joint action
in Factored-Value MCTS.
Fields:
coord_graph_components::Vector{Vector{Int64}}
The list of coordination graph components, i.e., cliques, where each element is a list of agent IDs that are in a mutual clique.
min_degree_ordering::Vector{Int64}
Ordering of agent IDs in increasing CG degree. This ordering is the heuristic most typically used for the elimination order in Var-El.
    n_component_stats::Dict{S,Vector{Vector{Int64}}}
        Maps each joint state in the tree (for which we need to compute the UCB action) to the visit counts of each component's various local actions.
    q_component_stats::Dict{S,Vector{Vector{Float64}}}
Maps each joint state in the tree to the accumulated q-value of each component's various local actions.
"""
mutable struct VarElStatistics{S} <: CoordinationStatistics
coord_graph_components::Vector{Vector{Int64}}
min_degree_ordering::Vector{Int64}
n_component_stats::Dict{S,Vector{Vector{Int64}}}
q_component_stats::Dict{S,Vector{Vector{Float64}}}
end
function clear_statistics!(ve_stats::VarElStatistics)
empty!(ve_stats.n_component_stats)
empty!(ve_stats.q_component_stats)
end
"""
Runs variable elimination at the current state using the VarEl Statistics to compute the best joint action with the component-wise exploration bonus.
FYI: Rather complicated.
"""
function coordinate_action(mdp::JointMDP{S,A}, tree::FVMCTSTree{S,A,VarElStatistics{S}}, s::S,
exploration_constant::Float64=0.0, node_id::Int64=0) where {S,A}
n_agents = length(s)
best_action_idxs = MVector{n_agents}([-1 for i in 1:n_agents])
# !Note: Acquire lock so as to avoid race
state_q_stats = lock(tree.lock) do
tree.coordination_stats.q_component_stats[s]
end
state_n_stats = lock(tree.lock) do
tree.coordination_stats.n_component_stats[s]
end
state_total_n = lock(tree.lock) do
(node_id > 0) ? tree.total_n[node_id] : 0
end
# Maintain set of potential functions
# NOTE: Hashing a vector here
potential_fns = Dict{Vector{Int64},Vector{Float64}}()
for (comp, q_stats) in zip(tree.coordination_stats.coord_graph_components, state_q_stats)
potential_fns[comp] = q_stats
end
# Need this for reverse process
# Maps agent to other elements in best response functions and corresponding set of actions
# E.g. Agent 2 -> (3,4) in its best response and corresponding vector of agent 2 best actions
best_response_fns = Dict{Int64,Tuple{Vector{Int64},Vector{Int64}}}()
state_dep_actions = [agent_actions(mdp, i, si) for (i, si) in enumerate(s)]
# Iterate over variable ordering
# Need to maintain intermediate tables
for ag_idx in tree.coordination_stats.min_degree_ordering
# Lookup factors with agent in them and simultaneously construct
# members of new potential function, and delete old factors
agent_factors = Vector{Vector{Int64}}(undef, 0)
new_potential_members = Vector{Int64}(undef, 0)
for k in collect(keys(potential_fns))
if ag_idx in k
# Agent to-be-eliminated is in factor
push!(agent_factors, k)
# Construct key for new potential as union of all others except ag_idx
for ag in k
if ag != ag_idx && ~(ag in new_potential_members)
push!(new_potential_members, ag)
end
end
end
end
        if isempty(new_potential_members)
# No out neighbors..either at beginning or end of ordering
@assert agent_factors == [[ag_idx]] "agent_factors $(agent_factors) is not just [ag_idx] $([ag_idx])!"
best_action_idxs[ag_idx] = _best_actionindex_empty(potential_fns,
state_dep_actions,
tree.all_agent_actions,
ag_idx)
else
# Generate new potential function and the best response vector for eliminated agent
n_comp_actions = prod([length(tree.all_agent_actions[c]) for c in new_potential_members])
# NOTE: Tuples should ALWAYS use tree.all_agent_actions for indexing
comp_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in new_potential_members)
# Initialize q-stats for new potential and best response action vector
# will be inserted into corresponding dictionaries at the end
new_potential_stats = Vector{Float64}(undef, n_comp_actions)
best_response_vect = Vector{Int64}(undef, n_comp_actions)
# Iterate over new potential joint actions and compute new payoff and best response
for comp_ac_idx = 1:n_comp_actions
# Get joint action for other members in potential
ct_idx = CartesianIndices(comp_tup)[comp_ac_idx]
                # For maximizing over agent actions; illegal actions are masked with -Inf below
                ag_ac_values = zeros(length(tree.all_agent_actions[ag_idx]))
# TODO: Agent actions should already be in order
# Only do anything if action legal
for (ag_ac_idx, ag_ac) in enumerate(tree.all_agent_actions[ag_idx])
if ag_ac in state_dep_actions[ag_idx]
# Need to look up corresponding stats from agent_factors
for factor in agent_factors
# NOTE: Need to reconcile the ORDER of ag_idx in factor
factor_action_idxs = MVector{length(factor),Int64}(undef)
for (idx, f) in enumerate(factor)
# if f is ag_idx, set corresponding factor action to ag_ac
if f == ag_idx
factor_action_idxs[idx] = ag_ac_idx
else
# Lookup index for corresp. agent action in ct_idx
new_pot_idx = findfirst(isequal(f), new_potential_members)
factor_action_idxs[idx] = ct_idx[new_pot_idx]
end # f == ag_idx
end
# NOW we can look up the stats of the factor
factor_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in factor)
factor_action_linidx = LinearIndices(factor_tup)[factor_action_idxs...]
ag_ac_values[ag_ac_idx] += potential_fns[factor][factor_action_linidx]
# Additionally add exploration stats if factor in original set
factor_comp_idx = findfirst(isequal(factor), tree.coordination_stats.coord_graph_components)
if state_total_n > 0 && ~(isnothing(factor_comp_idx)) # NOTE: Julia1.1
ag_ac_values[ag_ac_idx] += exploration_constant * sqrt((log(state_total_n+1.0))/(state_n_stats[factor_comp_idx][factor_action_linidx]+1.0))
end
end # factor in agent_factors
else
ag_ac_values[ag_ac_idx] = -Inf
end # ag_ac in state_dep_actions
end # ag_ac_idx = 1:length(tree.all_agent_actions[ag_idx])
# Now we lookup ag_ac_values for the best value to be put in new_potential_stats
# and the best index to be put in best_response_vect
# NOTE: The -Inf mask should ensure only legal idxs chosen
# If all ag_ac_values equal, should we sample randomly?
best_val, best_idx = findmax(ag_ac_values)
new_potential_stats[comp_ac_idx] = best_val
best_response_vect[comp_ac_idx] = best_idx
end # comp_ac_idx in n_comp_actions
# Finally, we enter new stats vector and best response vector back to dicts
potential_fns[new_potential_members] = new_potential_stats
best_response_fns[ag_idx] = (new_potential_members, best_response_vect)
end # isempty(new_potential_members)
# Delete keys in agent_factors from potential fns since variable has been eliminated
for factor in agent_factors
delete!(potential_fns, factor)
end
end # ag_idx in min_deg_ordering
# NOTE: At this point, best_action_idxs has at least one entry...for the last action obtained
@assert !all(isequal(-1), best_action_idxs) "best_action_idxs is still undefined!"
# Message passing in reverse order to recover best action
for ag_idx in Base.Iterators.reverse(tree.coordination_stats.min_degree_ordering)
# Only do something if best action already not obtained
if best_action_idxs[ag_idx] == -1
# Should just be able to lookup best response function
(agents, best_response_vect) = best_response_fns[ag_idx]
# Members of agents should already have their best action defined
agent_ac_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in agents)
best_agents_action_idxs = [best_action_idxs[ag] for ag in agents]
best_response_idx = LinearIndices(agent_ac_tup)[best_agents_action_idxs...]
# Assign best action for ag_idx
best_action_idxs[ag_idx] = best_response_vect[best_response_idx]
end # isdefined
end
# Finally, return best action by iterating over best action indices
# NOTE: best_action should use state-dep actions to reverse index
best_action = [tree.all_agent_actions[ag][idx] for (ag, idx) in enumerate(best_action_idxs)]
return best_action
end
"""
Take the q-value from the MCTS step and distribute the updates across the component q-stats as per the formula in the Amato-Oliehoek paper, i.e. each component's visit count is incremented and its q-stat receives the incremental mean update Q_c += (sum(q[i] for i in c) - Q_c) / N_c.
"""
function update_statistics!(mdp::JointMDP{S,A}, tree::FVMCTSTree{S,A,VarElStatistics{S}},
s::S, ucb_action::A, q::AbstractVector{Float64}) where {S,A}
n_agents = length(s)
for (idx, comp) in enumerate(tree.coordination_stats.coord_graph_components)
# Create cartesian index tuple
comp_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in comp)
# RECOVER local action corresp. to ucb action
# TODO: Review this carefully. Need @req for action index for agent.
local_action = [ucb_action[c] for c in comp]
local_action_idxs = [agent_actionindex(mdp, c, a) for (a, c) in zip(local_action, comp)]
comp_ac_idx = LinearIndices(comp_tup)[local_action_idxs...]
# NOTE: NOW we can update stats. Could generalize incremental update more here
lock(tree.lock) do
tree.coordination_stats.n_component_stats[s][idx][comp_ac_idx] += 1
q_comp_value = sum(q[c] for c in comp)
tree.coordination_stats.q_component_stats[s][idx][comp_ac_idx] +=
(q_comp_value - tree.coordination_stats.q_component_stats[s][idx][comp_ac_idx]) / tree.coordination_stats.n_component_stats[s][idx][comp_ac_idx]
end
end
end
# TODO: is this the correct thing to do?
# My guess is no, but not sure.
function update_statistics!(mdp::JointMDP{S,A}, tree::FVMCTSTree{S,A,VarElStatistics{S}},
s::S, ucb_action::A, q::Float64) where {S, A}
n_agents = length(s)
for (idx, comp) in enumerate(tree.coordination_stats.coord_graph_components)
# Create cartesian index tuple
comp_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in comp)
# RECOVER local action corresp. to ucb action
# TODO: Review this carefully. Need @req for action index for agent.
local_action = [ucb_action[c] for c in comp]
local_action_idxs = [agent_actionindex(mdp, c, a) for (a, c) in zip(local_action, comp)]
comp_ac_idx = LinearIndices(comp_tup)[local_action_idxs...]
# NOTE: NOW we can update stats. Could generalize incremental update more here
lock(tree.lock) do
tree.coordination_stats.n_component_stats[s][idx][comp_ac_idx] += 1
q_comp_value = q * length(comp) # Maintains equivalence with `sum(q[c] for c in comp)`
tree.coordination_stats.q_component_stats[s][idx][comp_ac_idx] +=
(q_comp_value - tree.coordination_stats.q_component_stats[s][idx][comp_ac_idx]) / tree.coordination_stats.n_component_stats[s][idx][comp_ac_idx]
end
end
end
function init_statistics!(tree::FVMCTSTree{S,A,VarElStatistics{S}}, planner::FVMCTSPlanner,
s::S) where {S,A}
n_comps = length(tree.coordination_stats.coord_graph_components)
n_component_stats = Vector{Vector{Int64}}(undef, n_comps)
q_component_stats = Vector{Vector{Float64}}(undef, n_comps)
n_agents = length(s)
# TODO: Could actually make actions state-dependent if need be
for (idx, comp) in enumerate(tree.coordination_stats.coord_graph_components)
n_comp_actions = prod([length(tree.all_agent_actions[c]) for c in comp])
n_component_stats[idx] = Vector{Int64}(undef, n_comp_actions)
q_component_stats[idx] = Vector{Float64}(undef, n_comp_actions)
comp_tup = Tuple(1:length(tree.all_agent_actions[c]) for c in comp)
for comp_ac_idx = 1:n_comp_actions
# Generate action subcomponent and call init_Q and init_N for it
ct_idx = CartesianIndices(comp_tup)[comp_ac_idx] # Tuple corresp to
local_action = [tree.all_agent_actions[c][ai] for (c, ai) in zip(comp, Tuple(ct_idx))]
# NOTE: init_N and init_Q are functions of component AND local action
# TODO(jkg): init_N and init_Q need to be defined
n_component_stats[idx][comp_ac_idx] = init_N(planner.solver.init_N, planner.mdp, s, comp, local_action)
q_component_stats[idx][comp_ac_idx] = init_Q(planner.solver.init_Q, planner.mdp, s, comp, local_action)
end
end
# Update tree member
lock(tree.lock) do
tree.coordination_stats.n_component_stats[s] = n_component_stats
tree.coordination_stats.q_component_stats[s] = q_component_stats
end
end
@inline function _best_actionindex_empty(potential_fns, state_dep_actions, all_agent_actions, ag_idx)
# NOTE: This is inefficient but necessary for state-dep actions?
if length(state_dep_actions[ag_idx]) == length(all_agent_actions[ag_idx])
_, best_ac_idx = findmax(potential_fns[[ag_idx]])
else
# Now we need to choose the best index from among legal actions
# Create an array with illegal actions having -Inf and then fill legal vals
# TODO: More efficient way to do this?
masked_action_vals = fill(-Inf, length(all_agent_actions[ag_idx]))
for (iac, ac) in enumerate(all_agent_actions[ag_idx])
if ac in state_dep_actions[ag_idx]
masked_action_vals[iac] = potential_fns[[ag_idx]][iac]
end
end
_, best_ac_idx = findmax(masked_action_vals)
end
return best_ac_idx
end
| FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | code | 1933 | using POMDPs
using FactoredValueMCTS
using MultiAgentSysAdmin
using MultiUAVDelivery
using Test
@testset "FactoredValueMCTS.jl" begin
@testset "varel" begin
@testset "sysadmin" begin
@testset "local" begin
mdp = BiSysAdmin{false}()
solver = FVMCTSSolver()
planner = solve(solver, mdp)
s = rand(initialstate(mdp))
a = action(planner, s)
@test a isa actiontype(mdp)
end
@testset "global" begin
mdp = BiSysAdmin{true}()
solver = FVMCTSSolver()
planner = solve(solver, mdp)
s = rand(initialstate(mdp))
a = action(planner, s)
@test a isa actiontype(mdp)
end
end
end
@testset "maxplus" begin
@testset "sysadmin" begin
@testset "local" begin
mdp = BiSysAdmin{false}()
solver = FVMCTSSolver(;coordination_strategy=MaxPlus())
planner = solve(solver, mdp)
s = rand(initialstate(mdp))
a = action(planner, s)
@test a isa actiontype(mdp)
end
@testset "global" begin
mdp = BiSysAdmin{true}()
solver = FVMCTSSolver(;coordination_strategy=MaxPlus())
planner = solve(solver, mdp)
s = rand(initialstate(mdp))
a = action(planner, s)
@test a isa actiontype(mdp)
end
end
@testset "uav" begin
mdp = FirstOrderMultiUAVDelivery()
solver = FVMCTSSolver(;coordination_strategy=MaxPlus())
planner = solve(solver, mdp)
s = rand(initialstate(mdp))
a = action(planner, s)
@test a isa actiontype(mdp)
end
end
end
| FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | docs | 2539 | # FactoredValueMCTS
[](https://github.com/JuliaPOMDP/FactoredValueMCTS.jl/actions/workflows/ci.yml)
[](http://codecov.io/github/JuliaPOMDP/FactoredValueMCTS.jl?branch=master)
[](https://juliapomdp.github.io/FactoredValueMCTS.jl/stable)
[](https://juliapomdp.github.io/FactoredValueMCTS.jl/dev)
This package implements the Monte Carlo Tree Search (MCTS) planning algorithm for Multi-Agent MDPs. The algorithm factorizes the true action value function, based on the locality of interactions between agents that is encoded with a Coordination Graph. We implement two schemes for coordinating the actions for the team of agents during the MCTS computations. The first is the iterative message-passing MaxPlus, while the second is the exact Variable Elimination. We thus get two different Factored Value MCTS algorithms, FV-MCTS-MaxPlus and FV-MCTS-VarEl respectively.
The full FV-MCTS-MaxPlus algorithm is described in our AAMAS 2021 paper _Scalable Anytime Planning for Multi-Agent MDPs_ ([Arxiv](https://arxiv.org/abs/2101.04788)). FV-MCTS-VarEl is based on the Factored Statistics algorithm from the AAAI 2015 paper _Scalable Planning and Learning from Multi-Agent POMDPs_ ([Extended Version](https://arxiv.org/abs/1404.1140)), applied to Multi-Agent MDPs rather than POMDPs. We use the latter as a baseline and show how the former outperforms it on two distinct simulated domains.
To use our solver, the domain must implement the interface from [MultiAgentPOMDPs.jl](https://github.com/JuliaPOMDP/MultiAgentPOMDPs.jl). For examples, please see [MultiAgentSysAdmin](https://github.com/JuliaPOMDP/MultiAgentSysAdmin.jl) and [MultiUAVDelivery](https://github.com/JuliaPOMDP/MultiUAVDelivery.jl), which are the two domains from our AAMAS 2021 paper. Experiments from the paper are available at https://github.com/rejuvyesh/FVMCTS_experiments.
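A minimal planning sketch on one of these example domains, mirroring this package's test suite:

```julia
using POMDPs, FactoredValueMCTS, MultiAgentSysAdmin

mdp = BiSysAdmin{false}()                               # SysAdmin benchmark (local-reward variant per the tests)
solver = FVMCTSSolver(coordination_strategy=MaxPlus())  # or VarEl(), the default
planner = solve(solver, mdp)
s = rand(initialstate(mdp))
a = action(planner, s)                                  # coordinated joint action
```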
## Installation
```julia
using Pkg
Pkg.add("FactoredValueMCTS")
```
## Citation
```
@inproceedings{choudhury2021scalable,
title={Scalable Anytime Planning for Multi-Agent {MDP}s},
author={Shushman Choudhury and Jayesh K Gupta and Peter Morales and Mykel J Kochenderfer},
booktitle={International Conference on Autonomous Agents and MultiAgent Systems},
year={2021}
}
``` | FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.2.1 | 91586a373b7d8b7272643ab92805c48a3709d7b7 | docs | 131 | ```@meta
CurrentModule = FactoredValueMCTS
```
# FactoredValueMCTS
```@index
```
```@autodocs
Modules = [FactoredValueMCTS]
```
| FactoredValueMCTS | https://github.com/JuliaPOMDP/FactoredValueMCTS.jl.git |
|
[
"MIT"
] | 0.1.0 | 6ee30f61e9eed2d9799a214de6aaf3b2d8357f7d | code | 200 | using CxxWrap, Libdl
run(`cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=$(CxxWrap.prefix_path()) -DJulia_EXECUTABLE=$(joinpath(Sys.BINDIR,"julia")) .`)
run(`cmake --build . --config Release`)
| DejaVu | https://github.com/laurentbartholdi/DejaVu.jl.git |
|
[
"MIT"
] | 0.1.0 | 6ee30f61e9eed2d9799a214de6aaf3b2d8357f7d | code | 672 | module DejaVu
using CxxWrap, Libdl, Graphs
export StaticGraph, add_vertex!, add_edge!,
Solver, set_print, automorphisms!, group_size,
Orbit, orbit_size, find_orbit, represents_orbit, combine_orbits, are_in_same_orbit, reset
get_path() = joinpath(@__DIR__,"../deps/lib","libdejavu.$(Libdl.dlext)")
@wrapmodule get_path
function __init__()
@initcxx
end
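# Convert a Graphs.jl graph into a dejavu StaticGraph. `color` can be a function
# or an indexable collection assigning a color to each vertex; note that dejavu
# uses 0-based vertex indices on the C++ side.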
function StaticGraph(g::AbstractGraph,color=i->0)
sg = StaticGraph()
initialize_graph!(sg,nv(g),ne(g))
for i=1:nv(g)
add_vertex!(sg,isa(color,Function) ? color(i) : color[i],degree(g,i))
end
for e=edges(g)
add_edge!(sg,minmax(e.src-1,e.dst-1)...)
end
sg
end
end
| DejaVu | https://github.com/laurentbartholdi/DejaVu.jl.git |
|
[
"MIT"
] | 0.1.0 | 6ee30f61e9eed2d9799a214de6aaf3b2d8357f7d | code | 454 | using Test, DejaVu, Graphs
@testset "Polygon" begin
c = cycle_graph(10)
@test nv(c)==ne(c)==10
d = Solver()
set_print(d,false)
g = StaticGraph(c)
automorphisms!(d,g)
@test group_size(d) |> String == "2*10^1"
g = StaticGraph(c,==(1))
automorphisms!(d,g)
@test group_size(d) |> String == "2*10^0"
o = Orbit(10)
automorphisms!(d,g,o)
@test [orbit_size(o,i) for i=0:9] == [1,2,2,2,2,1,2,2,2,2]
end
| DejaVu | https://github.com/laurentbartholdi/DejaVu.jl.git |
|
[
"MIT"
] | 0.1.0 | 6ee30f61e9eed2d9799a214de6aaf3b2d8357f7d | docs | 1307 | # DejaVu
This is a minimalist integration of the graph automorphism solver [DejaVu](https://automorphisms.org).
Sample run:
```julia
julia> using DejaVu, Graphs
julia> c = cycle_graph(10) # from Graphs
{10, 10} undirected simple Int64 graph
julia> g = StaticGraph(c)
DejaVu.StaticGraphAllocated(Ptr{Nothing} @0x0000600000b13840)
julia> d = Solver()
DejaVu.SolverAllocated(Ptr{Nothing} @0x0000600002737f30)
julia> automorphisms!(d,g)
preprocessing
______________________________________________________________
T (ms) delta(ms) proc p1 p2
______________________________________________________________
0.08 0.08 regular 10 20
solving_component 1/1 (n=10)
______________________________________________________________
T (ms) delta(ms) proc p1 p2
______________________________________________________________
0.14 0.03 sel 2 39
0.18 0.04 dfs 2-0 ~2*10^1
0.19 0.01 done 1 2
julia> group_size(d) |> String
"2*10^1"
```
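Vertex orbits can be computed as well (a sketch following the package's test suite; dejavu vertices are 0-indexed):

```julia
o = Orbit(10)           # orbit structure for a 10-vertex graph
automorphisms!(d, g, o) # recompute the automorphisms, accumulating orbits in o
orbit_size(o, 0)        # size of the orbit containing vertex 0
```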
[](https://github.com/laurentbartholdi/DejaVu.jl/actions/workflows/CI.yml?query=branch%3Amain)
| DejaVu | https://github.com/laurentbartholdi/DejaVu.jl.git |
|
[
"MIT"
] | 0.1.0 | 6ee30f61e9eed2d9799a214de6aaf3b2d8357f7d | docs | 2973 | # Documentation
This is the source code documentation for dejavu 2.0. It contains detailed descriptions for all classes and methods.
If you are looking for a guide to get started, please instead refer to our [get started guide](https://automorphisms.org) on our main page.
## Potentially Interesting Pages
Below, we list some potentially interesting pages of this documentation.
* [solver](@ref dejavu::solver): the dejavu solver, used to compute automorphisms of a graph
* [static_graph](@ref dejavu::static_graph): the graph interface
* [refinement](@ref dejavu::ir::refinement): the color refinement algorithm
* [orbit](@ref dejavu::groups::orbit): can be used to compute orbits
* [hooks](@ref dejavu::hooks): hooks are used to interact with the computed symmetries
* [random_schreier](@ref dejavu::groups::random_schreier): an implementation of the random Schreier algorithm
## Bug Reports & Feedback
If you come across any bugs or have any feedback to share, please always feel free to reach out to me at `markus (at) automorphisms.org`.
## How does it work?
An up-to-date description of the algorithms will be available in Markus Anders' PhD thesis.
The underlying algorithms are based on a series of papers by Markus Anders and Pascal Schweitzer, in particular
* Search Problems in Trees with Symmetries, ICALP 2021
* Engineering a Fast Probabilistic Isomorphism Test, ALENEX 2021
* Parallel Computation of Combinatorial Symmetries, ESA 2021
The preprocessing routines are described in
* Engineering a Preprocessor for Symmetry Detection, SEA 2023, additionally co-authored by Julian Stieß
## Copyright & License
The solver is released under the MIT license. For more information, see the [license](LICENSE_source.html).
## Compilation
Using *cmake*, the project should compile without any further dependencies:
```text
cmake .
make
```
Compilation produces a binary *dejavu*. It accepts a DIMACS graph as input, and computes the automorphism group of the graph. For available options and more descriptions, please refer to our [guide](https://automorphisms.org/quick_start/standalone/).
## Use dejavu as a library
dejavu is a header-only library. You can simply add dejavu to your C++ project by including the respective header file:
```cpp
#include "dejavu.h"
```
Note that currently, dejavu must be *compiled with C++ version 14*. For a more thorough description, please refer to our [guide](https://automorphisms.org/quick_start/cpp_api/).
By default, dejavu is compiled without assertions. We recommend activating assertions for debugging purposes (by adding the definition `DEJDEBUG`). Assertions do, however, slow down the code considerably.
## Running the tests
Using *cmake*, a test target `dejavu_test` can be produced by setting the following flag:
```text
cmake . -DCOMPILE_TEST_SUITE=1
```
In order to run all the tests, the [test graphs](https://automorphisms.org/graphs/graphs.zip) are required to be placed into `tests/graphs/`.
| DejaVu | https://github.com/laurentbartholdi/DejaVu.jl.git |
|
[
"MIT"
] | 0.1.0 | 6ee30f61e9eed2d9799a214de6aaf3b2d8357f7d | docs | 1486 | # Get started & Documentation
Please refer to our [get started guide](https://automorphisms.org/). There is also [full documentation](https://automorphisms.org/documentation/), which can be built from the code using [doxygen](https://www.doxygen.nl/).
## Compilation
Using *cmake*, the project should compile without any further dependencies:
```text
cmake .
make
```
Compilation produces a binary *dejavu*. It accepts a DIMACS graph as input, and computes the automorphism group of the graph. For available options and more descriptions, please refer to our [guide](https://automorphisms.org/quick_start/standalone/).
## Use dejavu as a library
dejavu is a header-only library. You can simply add dejavu to your C++ project by including the respective header file:
```cpp
#include "dejavu.h"
```
Note that currently, dejavu must be *compiled with C++ version 14*. For a more thorough description, please refer to our [guide](https://automorphisms.org/quick_start/cpp_api/).
By default, dejavu is compiled without assertions. We recommend activating assertions for debugging purposes (by adding the definition `DEJDEBUG`). Assertions do, however, slow down the code considerably.
## Running the tests
Using *cmake*, a test target `dejavu_test` can be produced by setting the following flag:
```text
cmake . -DCOMPILE_TEST_SUITE=1
```
In order to run all the tests, the [test graphs](https://automorphisms.org/graphs/graphs.zip) are required to be placed into `tests/graphs/`. | DejaVu | https://github.com/laurentbartholdi/DejaVu.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 1048 | using SetProg
using Documenter, Literate
const EXAMPLES_DIR = joinpath(@__DIR__, "src", "examples")
const OUTPUT_DIR = joinpath(@__DIR__, "src/generated")
const EXAMPLES = readdir(EXAMPLES_DIR)
for example in EXAMPLES
example_filepath = joinpath(EXAMPLES_DIR, example)
Literate.markdown(example_filepath, OUTPUT_DIR)
Literate.notebook(example_filepath, OUTPUT_DIR)
Literate.script(example_filepath, OUTPUT_DIR)
end
makedocs(
sitename = "SetProg",
# See https://github.com/JuliaDocs/Documenter.jl/issues/868
format = Documenter.HTML(prettyurls = get(ENV, "CI", nothing) == "true"),
# See https://github.com/jump-dev/JuMP.jl/issues/1576
strict = true,
pages = [
"Index" => "index.md",
"Tutorials" => map(
file -> joinpath("generated", file),
filter(
file -> endswith(file, ".md"),
sort(readdir(OUTPUT_DIR)),
),
),
],
)
deploydocs(
repo = "github.com/blegat/SetProg.jl.git",
push_preview = true,
)
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 6079 | using Test #src
# # Continuous-time Controlled Invariant Set
#
#md # [](@__BINDER_ROOT_URL__/generated/continuous_controlled.ipynb)
#md # [](@__NBVIEWER_ROOT_URL__/generated/continuous_controlled.ipynb)
#
# ## Introduction
#
# This example reproduces the numerical results of Section 4 of [LJ21].
#
# This example considers the continuous-time constrained linear control system:
# ```math
# \begin{aligned}
# \dot{x}_1(t) & = x_2(t)\\
# \dot{x}_2(t) & = u(t)
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^2$ and input constraint $u \in [-1, 1]$.
#
# In order to compute controlled invariant sets for this system, we consider
# the projection onto the first two dimensions of controlled invariant sets of the
# following lifted system:
# ```math
# \begin{aligned}
# \dot{x}_1(t) & = x_2(t)\\
# \dot{x}_2(t) & = x_3(t)\\
# \dot{x}_3(t) & = u(t)
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^3$.
#
# The matricial form of this system is given by $\dot{x}(t) = Ax(t) + Bu(t)$ where `A` and `B` are as defined below.
# As shown in Proposition 5 of [LJ21], a set is controlled invariant for this system if and only if it is invariant for the algebraic system
# ```math
# \begin{aligned}
# \dot{x}_1(t) & = x_2(t)\\
# \dot{x}_2(t) & = x_3(t)
# \end{aligned}
# ```
# The matricial form of this system is given by $E\dot{x}(t) = Cx(t)$, where $E$ and $C$ are as defined below.
#
# [LJ21] B. Legat and R. M. Jungers.
# *Continuous-time controlled invariant sets, a geometric approach*.
# 7th IFAC Conference on Analysis and Design of Hybrid Systems ADHS 2021, **2021**.
A = [0.0 1.0 0.0
0.0 0.0 1.0
0.0 0.0 0.0]
B = reshape([0.0, 0.0, 1.0], 3, 1)
E = [1.0 0.0 0.0
0.0 1.0 0.0]
C = A[1:2, :]
# The invariance of a set $S$ for this system is characterized by the following condition (see Proposition 7 of [LJ21]):
# ```math
# \forall x \in \partial S, \exists y \in T_S(x), Ey = Cx.
# ```
# The search for a set satisfying this condition can be formulated as the following set program; see [L20] for an introduction to set programming.
#
# [L20] Legat, B. (2020). *Set programming : theory and computation*. Ph.D. thesis, UCLouvain.
using SetProg
function maximal_invariant(family, γ = nothing; dirs=dirs)
model = Model(sdp_solver)
@variable(model, S, family)
@constraint(model, S ⊆ □_3)
x = boundary_point(S, :x)
@constraint(model, C * x in E * tangent_cone(S, x))
S_2 = project(S, 1:2)
if γ === nothing
@variable(model, γ)
end
for point in dirs
@constraint(model, γ * point in S_2)
end
@show γ
@objective(model, Max, γ)
@show JuMP.objective_function(model)
JuMP.optimize!(model)
@show solve_time(model)
@show JuMP.termination_status(model)
@show JuMP.objective_value(model)
if JuMP.termination_status(model) == MOI.OPTIMAL
return JuMP.value(S), JuMP.objective_value(model)
else
return
end
end
import GLPK
lp_solver = optimizer_with_attributes(GLPK.Optimizer, MOI.Silent() => true, "presolve" => GLPK.GLP_ON)
import CSDP
sdp_solver = optimizer_with_attributes(CSDP.Optimizer, MOI.Silent() => true)
using Polyhedra
interval = HalfSpace([1.0], 1.0) ∩ HalfSpace([-1.0], 1.0)
lib = Polyhedra.DefaultLibrary{Float64}(lp_solver)
□_2 = polyhedron(interval * interval, lib)
□_3 = □_2 * interval
dirs = [[-1 + √3, -1 + √3], [-1, 1]]
all_dirs = [dirs; (-).(dirs)]
inner = polyhedron(vrep(all_dirs), lib)
outer = polar(inner)
# ## Ellipsoidal template
#
# We start with the ellipsoidal template. We consider two different objectives (see Section 4.2 of [L20]):
# * the volume of the set (which corresponds to $\log(\det(Q))$ or $\sqrt[n]{\det(Q)}$ in the objective function) and
# * the sum of the squares of the lengths of the semi-axes of the polar (which corresponds to the trace of $Q$ in the objective function).
sol_ell, γ_ell = maximal_invariant(Ellipsoid(symmetric=true))
using Plots
function hexcolor(rgb::UInt32)
r = ((0xff0000 & rgb) >> 16) / 255
g = ((0x00ff00 & rgb) >> 8) / 255
b = ((0x0000ff & rgb) ) / 255
Plots.RGBA(r, g, b)
end # Values taken from http://www.toutes-les-couleurs.com/code-couleur-rvb.php
lichen = hexcolor(0x85c17e)
canard = hexcolor(0x048b9a)
aurore = hexcolor(0xffcb60)
frambo = hexcolor(0xc72c48)
cols = [canard, frambo]
x2 = range(0, stop=1, length=20)
x1 = 1 .- x2.^2 / 2
upper = [[[-1, 1]]; [[x1[i], x2[i]] for i in eachindex(x2)]]
mci = polyhedron(vrep([upper; (-).(upper)]), lib)
polar_mci = polar(mci)
SetProg.Sets.print_support_function(project(sol_ell, 1:2))
# We can plot the primal solution as follows:
function primal_plot(set, γ=nothing; npoints=256, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05), args...)
plot(ratio=:equal, tickfont=Plots.font(12); xlim=xlim, ylim=ylim, args...)
plot!(□_2, color=lichen)
plot!(mci, color=aurore)
plot!(set, color=canard, npoints=npoints)
γ === nothing || plot!(γ * inner, color=frambo)
plot!()
end
primal_plot(project(sol_ell, 1:2), γ_ell)
# and the dual plot as follows:
function polar_plot(set, γ; npoints=256, xlim=(-1.5, 1.5), ylim=(-1.5, 1.5), args...)
plot(ratio=:equal, tickfont=Plots.font(12); xlim=xlim, ylim=ylim, args...)
γ === nothing || plot!(inv(γ) * outer, color=frambo)
plot!(polar(set), color=canard, npoints=npoints)
plot!(polar_mci, color=aurore)
plot!(polar(□_2), color=lichen)
end
polar_plot(project(sol_ell, 1:2), γ_ell)
# ## Polyset template
p4, γ4 = maximal_invariant(PolySet(symmetric=true, degree=4, convex=true), 0.91)
γ4
# Below is the primal plot:
primal_plot(project(p4, 1:2), γ4)
# and here is the polar plot:
polar_plot(project(p4, 1:2), γ4)
# ## Piecewise semi-ellipsoidal template
sol_piece_◇, γ_piece_◇ = maximal_invariant(Ellipsoid(symmetric=true, piecewise=polar(□_3)), dirs=all_dirs)
γ_piece_◇
# Below is the primal plot:
primal_plot(project(sol_piece_◇, 1:2), γ_piece_◇)
# and here is the polar plot:
polar_plot(project(sol_piece_◇, 1:2), γ_piece_◇)
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 13265 | using Test #src
# # Discrete-time Controlled Invariant Set
#
#md # [](@__BINDER_ROOT_URL__/generated/discrete_controlled.ipynb)
#md # [](@__NBVIEWER_ROOT_URL__/generated/discrete_controlled.ipynb)
#
# A similar example is available in a [codeocean capsule](https://doi.org/10.24433/CO.6396918.v1).
#
# ## Introduction
#
# This example considers the linear control system already introduced in Example 2 of [LTJ20]; see also Example 2 of [LRJ20]:
# ```math
# \begin{aligned}
# x_{k+1} & = x_k + u_k / 2\\
# u_{k+1} & = u_k'
# \end{aligned}
# ```
# with $(x_k, u_k) \in [-1, 1]^2$.
#
# The system is $x_{k+1} = Ax_k + Bu_k$ where $B = (0, 1)$ and
# ```math
# A = \begin{bmatrix}
# 1 & 1/2\\
# 0 & 0
# \end{bmatrix}.
# ```
# As shown in Example 4 of [LTJ18], a set $S \subseteq [-1, 1]^2$ is controlled invariant for this system if
# ```math
# \begin{bmatrix}
# 1 & 1/2
# \end{bmatrix}
# S \subseteq
# \begin{bmatrix}
# 1 & 0
# \end{bmatrix}
# S
# ```
#
# [LRJ20] Legat, Benoît, Saša V. Raković, and Raphaël M. Jungers.
# *Piecewise semi-ellipsoidal control invariant sets.*
# IEEE Control Systems Letters 5.3 (2020): 755-760.
#
# [LTJ18] B. Legat, P. Tabuada and R. M. Jungers.
# *Computing controlled invariant sets for hybrid systems with applications to model-predictive control*.
# 6th IFAC Conference on Analysis and Design of Hybrid Systems ADHS (2018).
# We need to pick an LP and an SDP solver, see [here](https://jump.dev/JuMP.jl/stable/installation/#Supported-solvers) for a list of available ones.
using SetProg
import GLPK
lp_solver = optimizer_with_attributes(GLPK.Optimizer, MOI.Silent() => true)
import CSDP
sdp_solver = optimizer_with_attributes(CSDP.Optimizer, MOI.Silent() => true)
A = [1.0 0.5]
E = [1.0 0.0]
using Polyhedra
lib = DefaultLibrary{Float64}(lp_solver)
h = HalfSpace([1, 0], 1.0) ∩ HalfSpace([-1, 0], 1) ∩ HalfSpace([0, 1], 1) ∩ HalfSpace([0, -1], 1)
□ = polyhedron(h, lib) # [-1, 1]^2
v = convexhull([1.0, 0], [0, 1], [-1, 0], [0, -1])
◇ = polyhedron(v, lib); # polar of [-1, 1]^2
# ## Polyhedral template
#
# ### Fixed point approach
#
# This section shows that the maximal control invariant set of this simple control system is polyhedral. Moreover, this polyhedron is obtained after only one fixed point iteration of the standard viability
# kernel algorithm. We implement the fixed point iteration with the following function:
function fixed_point_iteration(set::Polyhedron)
new_set = set ∩ (A \ (E * set))
removehredundancy!(new_set)
return new_set
end
# We start with $[-1, 1]^2$ and obtain the polytope `mci` after one iteration.
mci = fixed_point_iteration(□)
# One additional iteration gives the same polytope, showing that this polytope is control invariant.
fixed_point_iteration(mci)
# We plot this polytope `mci` in yellow below along with the safe set $[-1, 1]^2$ in green.
using Plots
function hexcolor(rgb::UInt32)
r = ((0xff0000 & rgb) >> 16) / 255
g = ((0x00ff00 & rgb) >> 8) / 255
b = ((0x0000ff & rgb) ) / 255
Plots.RGBA(r, g, b)
end # Values taken from http://www.toutes-les-couleurs.com/code-couleur-rvb.php
lichen = hexcolor(0x85c17e)
canard = hexcolor(0x048b9a)
aurore = hexcolor(0xffcb60)
plot(ratio=:equal, tickfont=Plots.font(12))
plot!(□, color=lichen)
plot!(mci, color=aurore)
# We also plot their respective polars. Note that the inclusion is reversed in the polar space.
polar_mci = polar(mci)
plot(ratio=:equal, tickfont=Plots.font(12))
plot!(polar_mci, color=aurore)
plot!(◇, color=lichen)
function maximal_invariant(template, heuristic::Function)
solver = if template isa Polytope
lp_solver
else
sdp_solver
end
model = Model(solver)
@variable(model, S, template)
@constraint(model, S ⊆ □)
@constraint(model, A * S ⊆ E * S)
@objective(model, Max, heuristic(volume(S)))
optimize!(model)
@show solve_time(model)
@show termination_status(model)
@show objective_value(model)
return value(S)
end
function primal_plot(set; npoints=256, args...)
plot(ratio=:equal, tickfont=Plots.font(12); args...)
plot!(□, color=lichen)
plot!(mci, color=aurore)
plot!(set, color=canard, npoints=npoints)
end
function polar_plot(set; npoints=256, args...)
plot(ratio=:equal, tickfont=Plots.font(12); args...)
plot!(SetProg.Sets.polar(set), color=canard, npoints=npoints)
plot!(polar_mci, color=aurore)
plot!(◇, color=lichen)
end
# ### Linear programming approach
#
# As introduced in [R21], given a fixed polyhedral conic partition of the state-space,
# we can search over all polyhedra for which the gauge or support function
# is linear over each piece of the partition.
# The partition can be defined by a polytope as described in [Eq. (4.6), R21].
# In this example, because the sets are represented with the support function,
# the pieces apply to the support function/polar set, and
# the pieces of the gauge function/primal set depend on the value of
# the support function on each piece.
#
# [R21] Raković, S. V.
# *Control Minkowski–Lyapunov functions*
# Automatica, Elsevier BV, 2021, 128, 109598
# Let's start with setting the partition using the cross polytope `◇`.
sol_polytope_◇ = maximal_invariant(Polytope(symmetric=true, piecewise=◇), L1_heuristic)
SetProg.Sets.print_support_function(sol_polytope_◇)
# We can see that the polar set is unbounded so the primal set has zero volume.
# More precisely, the polar is $[-1, 1] \times [-\infty, \infty]$ and the primal
# set is $[-1, 1] \times \{0\}$.
# Let's now try setting the partition using the square `□`.
sol_polytope_□ = maximal_invariant(Polytope(symmetric=true, piecewise=□), L1_heuristic)
SetProg.Sets.print_support_function(sol_polytope_□)
# The primal plot is below:
primal_plot(sol_polytope_□, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# And the polar plot is below:
polar_plot(sol_polytope_□, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# Let's now use a partition with 8 pieces.
pieces8 = convexhull(□, 1.25 * ◇)
sol_polytope_8 = maximal_invariant(Polytope(symmetric=true, piecewise=pieces8), L1_heuristic)
SetProg.Sets.print_support_function(sol_polytope_8)
# The primal plot is below:
primal_plot(sol_polytope_8, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# And the polar plot is below:
polar_plot(sol_polytope_8, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# We can use the polar of the polytope resulting from the first fixed point iteration to generate a refined conic partition.
# With this partition, the maximal polyhedral control invariant set matches the maximal control invariant set.
sol_polytope_mci = maximal_invariant(Polytope(symmetric=true, piecewise=polar_mci), L1_heuristic)
SetProg.Sets.print_support_function(sol_polytope_mci)
# The primal plot is below:
primal_plot(sol_polytope_mci)
# And the polar plot is below:
polar_plot(sol_polytope_mci)
# ## Ellipsoidal template
#
# We now compute the maximal ellipsoidal control invariant set. Here we can either maximize its volume (which corresponds to $\log(\det(Q))$ or $\sqrt[n]{\det(Q)}$ in the objective function) or minimize the sum of the squares of the lengths of the semi-axes of the polar (which corresponds to the trace of $Q$ in the objective function).
# We can see below that the control invariant set of maximal volume has support function
# $$h^2(S, x) = x_1^2 - \frac{1}{2} x_1x_2 + x_2^2.$$
sol_ell_vol = maximal_invariant(Ellipsoid(symmetric=true, dimension=2), nth_root)
SetProg.Sets.print_support_function(sol_ell_vol)
# We can see below the ellipsoid in blue along with the maximal control invariant set in yellow and the safe set in green.
primal_plot(sol_ell_vol)
# Below are the corresponding sets in the polar space.
polar_plot(sol_ell_vol)
# We can see below that the control invariant set of minimal sum of squares of the lengths of the semi-axes of the polar has support function
# $$h^2(S, x) = x_1^2 - \alpha x_1x_2 + x_2^2.$$
# with $\alpha \approx 1.377456$.
# Note that the sum of the squares of the length of the semi-axes of the polar is equal to the integral of $h^2(S, x)$ over the hypercube $[-1, 1]^2$ hence the use of the `L1_heuristic(vol, ones(2))` as objective which means taking the integral over the hyperrectangle with vertex `ones(2) = [1, 1]`.
sol_ell_L1 = maximal_invariant(Ellipsoid(symmetric=true, dimension=2), vol -> L1_heuristic(vol, ones(2)))
SetProg.Sets.print_support_function(sol_ell_L1)
# The primal plot is below:
primal_plot(sol_ell_L1)
# And the polar plot is below:
polar_plot(sol_ell_L1)
# ## Piecewise semi-ellipsoidal template
#
# We now study the maximal piecewise semi-ellipsoidal control invariant sets of a given conic partition.
# The volume is not directly maximized. Instead, for each cone, we compute the sum $s$ of the normalized rays and consider the polytope obtained by intersecting the cone with the halfspace $s^\top x \le \|s\|_2^2$. We integrate the quadratic form corresponding to this cone, i.e. $h^2(S, x)$ over the polytope. The sum of the integrals over each polytope is the objective function we use. This can be seen as the generalization of the sum of the squares of the semi-axes of the polar of the ellipsoid.
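# For intuition, here is a rough sketch (not executed) of how one cone `piece` is
# cut into a polytope with Polyhedra.jl, mirroring the implementation of
# `L1_heuristic` in SetProg; the helper name `cut_cone` is ours:
# ```julia
# using Polyhedra, LinearAlgebra
# function cut_cone(piece) # `piece` is a polyhedral cone given by its rays
#     s = normalize(sum(normalize ∘ Polyhedra.coord, rays(piece)))
#     return piece ∩ HalfSpace(s, one(eltype(s)))
# end
# ```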
#
# Note that the constraint (29) of Program 1 of [LRJ20] is implemented with Proposition 2 of [LRJ20] for all results of this example.
# The conic partition are obtained by considering the conic hull of each facets of a given polytope.
# We first consider the conic partition corresponding to the polar of the safe set $[-1, 1]^2$. This gives the four quadrants as cones of the conic partition.
# The maximal piecewise semi-ellipsoidal control invariant set with this partition has the following support function:
# ```math
# h^2(S, x) = \begin{cases}
# (x_1 - x_2)^2 & \text{ if }x_1x_2 \le 0,\\
# x_1^2 - x_1x_2/2 + x_2^2 & \text{ if }x_1x_2 \ge 0.
# \end{cases}
# ```
# For the cones $x_1x_2 \le 0$, the semi-ellipsoid matches the safe set and the maximal control invariant set.
# For the cones $x_1x_2 \ge 0$, the semi-ellipsoid matches the maximal volume control invariant ellipsoid.
# This illustrates one key feature of piecewise semi-ellipsoidal sets: they can combine advantages of both polyhedra and ellipsoids. The set can be polyhedral in the directions where the maximal control invariant set is polyhedral and ellipsoidal in the directions where the maximal control invariant set is smooth or requires many halfspaces in its representation.
sol_piece_◇ = maximal_invariant(Ellipsoid(symmetric=true, piecewise=◇), L1_heuristic)
SetProg.Sets.print_support_function(sol_piece_◇)
# The primal plot is below:
primal_plot(sol_piece_◇, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# And the polar plot is below:
polar_plot(sol_piece_◇, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# Let's now try setting the partition with the square `□`.
sol_piece_□ = maximal_invariant(Ellipsoid(symmetric=true, piecewise=□), L1_heuristic)
SetProg.Sets.print_support_function(sol_piece_□)
# The primal plot is below:
primal_plot(sol_piece_□, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# And the polar plot is below:
polar_plot(sol_piece_□, xlim=(-1.6, 1.6), ylim=(-1.6, 1.6))
# Let's now try with the partition of 8 pieces.
sol_piece_8 = maximal_invariant(Ellipsoid(symmetric=true, piecewise=pieces8), L1_heuristic)
SetProg.Sets.print_support_function(sol_piece_8)
# The primal plot is below:
primal_plot(sol_piece_8, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05))
# And the polar plot is below:
polar_plot(sol_piece_8, xlim=(-1.1, 1.1), ylim=(-1.05, 1.05))
# We can use the polar of the polytope resulting from the first fixed point iteration to generate a refined conic partition.
# With this partition, the maximal piecewise semi-ellipsoidal control invariant set matches the maximal control invariant set. The support function is
# ```math
# h^2(S, x) = \begin{cases}
# (x_1 - x_2)^2 & \text{ if }x_1x_2 \le 0,\\
# x_1^2 & \text{ if }x_2(x_1-2x_2) \ge 0,\\
# (x_1/2 + x_2)^2 & \text{ if }x_1(2x_2-x_1) \ge 0.\\
# \end{cases}
# ```
sol_piece_mci = maximal_invariant(Ellipsoid(symmetric=true, piecewise=polar_mci), L1_heuristic)
SetProg.Sets.print_support_function(sol_piece_mci)
# The primal plot is below:
primal_plot(sol_piece_mci)
# And the polar plot is below:
polar_plot(sol_piece_mci)
# ## Polyset template
#
# We now use the polyset templates. As details in [LRJ20], for inclusion
# constraints of the form `A * S ⊆ E * S`, the polar representation is used
# so we need the set to be convex. For this, we set the argument `convex=true`.
# We directly try `degree=4` because `degree=2` would simply give the same as
# `sol_ell_L1`.
sol4 = maximal_invariant(PolySet(symmetric=true, degree=4, convex=true), vol -> L1_heuristic(vol, ones(2)))
SetProg.Sets.print_support_function(sol4)
# The primal plot is below:
primal_plot(sol4)
# And the polar plot is below:
polar_plot(sol4)
# Let's now try degree 6.
sol6 = maximal_invariant(PolySet(symmetric=true, degree=6, convex=true), vol -> L1_heuristic(vol, ones(2)))
SetProg.Sets.print_support_function(sol6)
# The primal plot is below:
primal_plot(sol6)
# And the polar plot is below:
polar_plot(sol6)
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 8336 | using Test #src
# # Discrete-time Switched Invariant Set
#
#md # [](@__BINDER_ROOT_URL__/generated/discrete_switched.ipynb)
#md # [](@__NBVIEWER_ROOT_URL__/generated/discrete_switched.ipynb)
#
# ## Introduction
#
# In this notebook, we compute the maximal invariant set contained in the square with vertices $(\pm 1, \pm 1)$, as well as the minimal invariant set containing that square, for the switched system
# ```math
# x_{k+1} = A_{\sigma_k} x_k, \qquad \sigma_k \in \{1, 2\},
# ```
# where the mode matrices $A_1$ and $A_2$ are defined below.
#
# A set $S$ is invariant for this switched system if
# $$A_1S \subseteq S \quad \text{and} \quad A_2S \subseteq S.$$
# We need to pick an LP and an SDP solver, see [here](https://jump.dev/JuMP.jl/stable/installation/#Supported-solvers) for a list of available ones. The next two cells set up the solvers used below.
using SetProg
import Clp
lp_solver = optimizer_with_attributes(Clp.Optimizer, MOI.Silent() => true)
import CSDP
sdp_solver = optimizer_with_attributes(CSDP.Optimizer, MOI.Silent() => true)
using Polyhedra
lib = DefaultLibrary{Float64}(lp_solver)
h = HalfSpace([1, 0], 1.0) ∩ HalfSpace([-1, 0], 1) ∩ HalfSpace([0, 1], 1) ∩ HalfSpace([0, -1], 1)
□ = polyhedron(h, lib)
◇ = polar(□)
A1 = [-1 -1
-4 0] / 4
A2 = [ 3 3
-2 1] / 4
function maximal_invariant(template, heuristic::Function)
solver = if template isa Polytope
lp_solver
else
sdp_solver
end
model = Model(solver)
@variable(model, S, template)
@constraint(model, S ⊆ □)
@constraint(model, A1 * S ⊆ S)
@constraint(model, A2 * S ⊆ S)
@objective(model, Max, heuristic(volume(S)))
optimize!(model)
@show solve_time(model)
@show termination_status(model)
@show objective_value(model)
return value(S)
end
function minimal_invariant(template, heuristic::Function)
solver = if template isa Polytope
lp_solver
else
sdp_solver
end
model = Model(solver)
@variable(model, S, template)
@constraint(model, □ ⊆ S)
@constraint(model, A1 * S ⊆ S)
@constraint(model, A2 * S ⊆ S)
@objective(model, Min, heuristic(volume(S)))
optimize!(model)
@show solve_time(model)
@show termination_status(model)
@show objective_value(model)
@show JuMP.objective_sense(model)
return value(S)
end
using Plots
function hexcolor(rgb::UInt32)
r = ((0xff0000 & rgb) >> 16) / 255
g = ((0x00ff00 & rgb) >> 8) / 255
b = ((0x0000ff & rgb) ) / 255
Plots.RGBA(r, g, b)
end # Values taken from http://www.toutes-les-couleurs.com/code-couleur-rvb.php
lichen = hexcolor(0x85c17e)
canard = hexcolor(0x048b9a)
aurore = hexcolor(0xffcb60)
frambo = hexcolor(0xc72c48)
function primal_plot(min_sol, max_sol; npoints=256, args...)
plot(ratio=:equal, tickfont=Plots.font(12); args...)
plot!(min_sol, color=canard, npoints=npoints)
plot!(□, color=lichen)
plot!(max_sol, color=aurore, npoints=npoints)
end
function polar_plot(min_sol, max_sol; npoints=256, args...)
plot(ratio=:equal, tickfont=Plots.font(12); args...)
plot!(polar(max_sol), color=aurore, npoints=npoints)
plot!(◇, color=lichen)
plot!(polar(min_sol), color=canard, npoints=npoints)
end
# ## Polyhedral template
#
# ### Fixed point approach
#
# This section shows fixed point approach for computing a polyhedral maximal invariant set.
# We implement the fixed point iteration of the standard viability kernel algorithm with the following function.
# A more sophisticated approach specialized for discrete-time linear switched systems
# is available in [SwitchOnSafety](https://github.com/blegat/SwitchOnSafety.jl).
function backward_fixed_point_iteration(set::Polyhedron)
new_set = set ∩ (A1 \ set) ∩ (A2 \ set)
removehredundancy!(new_set)
return new_set
end
# We start with $[-1, 1]^2$ and obtain the maximal invariant set after one iteration.
max_polytope_1 = backward_fixed_point_iteration(□)
# We can see that it has already converged.
backward_fixed_point_iteration(max_polytope_1)
# We now turn to the minimal invariant set with the following iteration:
function forward_fixed_point_iteration(set::Polyhedron)
new_set = convexhull(set, A1 * set, A2 * set)
removevredundancy!(new_set)
return new_set
end
# We start with $[-1, 1]^2$ and obtain the minimal invariant set after three iterations.
min_polytope_1 = forward_fixed_point_iteration(□)
# We see below that the second iteration was indeed needed.
min_polytope_2 = forward_fixed_point_iteration(min_polytope_1)
# And the third iteration as well.
min_polytope_3 = forward_fixed_point_iteration(min_polytope_2)
# We see below that it has converged.
forward_fixed_point_iteration(min_polytope_3)
# Let's see this visually:
plot(ratio=:equal, tickfont=Plots.font(12))
plot!(min_polytope_3, color=frambo)
plot!(min_polytope_2, color=canard)
plot!(min_polytope_1, color=aurore)
plot!(□, color=lichen)
plot!(max_polytope_1, color=frambo)
# ### Linear programming approach
#
# As introduced in [R21], we can search over all polyhedra with a given face fan.
# The partition can be defined by the face fan of a given polytope as described in [Eq. (4.6), R21].
# For `maximal_invariant`, because the sets are represented with the support function,
# the fan is used as the face fan of the polar set;
# the face fan of the primal set then depends on the value of
# the support function on each piece.
# For `minimal_invariant`, the sets are represented with the gauge function,
# so the fan is used as the face fan of the primal set.
#
# [R21] Raković, S. V.
# *Control Minkowski–Lyapunov functions*
# Automatica, Elsevier BV, 2021, 128, 109598
# We use the face fan of the polytope `pieces8` (which has the same face fan as the octagon `convexhull(□, √2 * ◇)`).
pieces8 = convexhull(□, 1.25 * ◇)
max_polytope = maximal_invariant(Polytope(symmetric=true, piecewise=pieces8), L1_heuristic)
min_polytope = minimal_invariant(Polytope(symmetric=true, piecewise=polar(pieces8)), L1_heuristic)
primal_plot(min_polytope, max_polytope)
# The polar plot is as follows:
polar_plot(min_polytope, max_polytope)
# ## Ellipsoidal template
# We now consider the ellipsoidal template.
# The ellipsoids obtained by optimizing the volume (maximized for the invariant set contained in the square, minimized for the one containing it) are given as follows:
max_ell_vol = maximal_invariant(Ellipsoid(symmetric=true), nth_root)
min_ell_vol = minimal_invariant(Ellipsoid(symmetric=true), nth_root)
primal_plot(min_ell_vol, max_ell_vol)
# All ellipsoids are convex so we can also obtain the polar plot:
polar_plot(min_ell_vol, max_ell_vol)
# The ellipsoids obtained by optimizing the sum of the squares of the lengths of the semi-axes of the polar are given as follows:
max_ell_L1 = maximal_invariant(Ellipsoid(symmetric=true), vol -> L1_heuristic(vol, ones(2)))
min_ell_L1 = minimal_invariant(Ellipsoid(symmetric=true), vol -> L1_heuristic(vol, ones(2)))
primal_plot(min_ell_L1, max_ell_L1)
# And the polar plot is:
polar_plot(min_ell_L1, max_ell_L1)
# ## Polyset template
# We generalize the previous template to homogeneous polynomials of degree $2d$.
# As objective, we use the integral of the polynomial used to represent
# the set (either in the support function for `maximal_invariant`
# or in the gauge function for `minimal_invariant`) over the square `□`.
# We start with quartic polynomials.
max_4 = maximal_invariant(PolySet(symmetric=true, convex=true, degree=4), vol -> L1_heuristic(vol, ones(2)))
min_4 = minimal_invariant(PolySet(symmetric=true, degree=4), vol -> L1_heuristic(vol, ones(2)))
primal_plot(min_4, max_4)
# We now obtain the results for sextic polynomials, note that we do not
# impose the sets to be convex for `minimal_invariant`. This is why we don't
# show the polar plot for polysets.
max_6 = maximal_invariant(PolySet(symmetric=true, convex=true, degree=6), vol -> L1_heuristic(vol, ones(2)))
min_6 = minimal_invariant(PolySet(symmetric=true, degree=6), vol -> L1_heuristic(vol, ones(2)))
primal_plot(min_6, max_6)
# The nonconvexity is even more apparent for octic polynomials.
max_8 = maximal_invariant(PolySet(symmetric=true, convex=true, degree=8), vol -> L1_heuristic(vol, □))
min_8 = minimal_invariant(PolySet(symmetric=true, degree=8), vol -> L1_heuristic(vol, □))
primal_plot(min_8, max_8, npoints=1024)
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 7069 | # # Hybrid Controlled Invariant Set
#
#md # [](@__BINDER_ROOT_URL__/generated/hybrid_controlled.ipynb)
#md # [](@__NBVIEWER_ROOT_URL__/generated/hybrid_controlled.ipynb)
#
# ## Introduction
#
# This example considers the hybrid constrained linear control system:
# ```math
# \begin{aligned}
# \dot{x}_1(t) & = x_2(t)\\
# \dot{x}_2(t) & = u(t)
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^2$ and input constraint $u \in [-1, 1]$
# and the jump:
# ```math
# \begin{aligned}
# x_1^+ & = -x_1 + u/8\\
# x_2^+ & = x_2 - u/8
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^2$ and input constraint $u \in [-1, 1]$
# that can occur anytime.
#
# In order to compute controlled invariant sets for this system, we consider
# the projection onto the first two dimensions of controlled invariant sets of the
# following lifted system:
# ```math
# \begin{aligned}
# \dot{x}_1(t) & = x_2(t)\\
# \dot{x}_2(t) & = x_3(t)\\
# \dot{x}_3(t) & = u(t)
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^3$;
# with a first jump to a temporary mode:
# ```math
# \begin{aligned}
# x_1^+ & = x_1\\
# x_2^+ & = x_2\\
# x_3^+ & = u
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^3$ and unconstrained input;
# and a second jump back to the original mode:
# ```math
# \begin{aligned}
# x_1^+ & = -x_1 + x_3/8\\
# x_2^+ & = x_2 - x_3/8\\
# x_3^+ & = u.
# \end{aligned}
# ```
# Note that the input `u` chosen in the first jump is the input that will be used for
# the reset map and the input `u` chosen for the second jump is the input that will be used
# for the state `x_3` of the continuous-time system.
#
# The matricial form of this system is given by $\dot{x}(t) = Ax(t) + Bu(t)$ where `A` and `B` are as defined below.
# As shown in Proposition 5 of [LJ21], a set is controlled invariant for this system if and only if it is weakly invariant for the algebraic system
# ```math
# \begin{aligned}
# \dot{x}_1(t) & = x_2(t)\\
# \dot{x}_2(t) & = x_3(t)
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^3$;
# with a first jump to a temporary mode:
# ```math
# \begin{aligned}
# x_1^+ & = x_1\\
# x_2^+ & = x_2
# \end{aligned}
# ```
# with state constraint $x \in [-1, 1]^3$
# and a second jump back to the original mode:
# ```math
# \begin{aligned}
# x_1^+ & = -x_1 + x_3/8\\
# x_2^+ & = x_2 - x_3/8.
# \end{aligned}
# ```
#
# The matricial form of this system is given by $E\dot{x}(t) = Cx(t)$, with a first jump
# $Ex^+ = Ex$ and a second jump $Ex^+ = Ux$, where $E$, $C$ and $U$ are as defined below.
#
# [LJ21] B. Legat and R. M. Jungers.
# *Continuous-time controlled invariant sets, a geometric approach*.
# 7th IFAC Conference on Analysis and Design of Hybrid Systems ADHS 2021, **2021**.
A = [0.0 1.0 0.0
0.0 0.0 1.0
0.0 0.0 0.0]
B = reshape([0.0, 0.0, 1.0], 3, 1)
E = [1.0 0.0 0.0
0.0 1.0 0.0]
C = A[1:2, :]
U = [-1.0 0.0 1/4
0.0 1.0 -1/4]
using SetProg
function maximal_invariant(family, γ = nothing; dirs=dirs)
model = Model(sdp_solver)
@variable(model, S, family)
@constraint(model, S ⊆ □_3)
@variable(model, T, family)
@constraint(model, T ⊆ □_3)
x = boundary_point(S, :x)
@constraint(model, C * x in E * tangent_cone(S, x))
@constraint(model, E * S ⊆ E * T)
@constraint(model, U * T ⊆ E * S)
S_2 = project(S, 1:2)
if γ === nothing
@variable(model, γ)
end
@constraint(model, [point in dirs], γ * point in S_2)
@objective(model, Max, γ)
JuMP.optimize!(model)
@show solve_time(model)
@show JuMP.termination_status(model)
@show JuMP.objective_value(model)
if JuMP.termination_status(model) == MOI.OPTIMAL
return JuMP.value(S), JuMP.objective_value(model)
else
return
end
end
import GLPK
lp_solver = optimizer_with_attributes(GLPK.Optimizer, MOI.Silent() => true, "presolve" => GLPK.GLP_ON)
import CSDP
sdp_solver = optimizer_with_attributes(CSDP.Optimizer, MOI.Silent() => true)
using Polyhedra
interval = HalfSpace([1.0], 1.0) ∩ HalfSpace([-1.0], 1.0)
lib = Polyhedra.DefaultLibrary{Float64}(lp_solver)
□_2 = polyhedron(interval * interval, lib)
□_3 = □_2 * interval
u_max = 1/8
x2l0 = √(2u_max) - u_max
dirs = [[-1 + √3, -1 + √3], [-1/2, 1.0], [-1.0, x2l0]]
all_dirs = [dirs; (-).(dirs)]
inner = polyhedron(vrep(all_dirs), lib)
outer = polar(inner)
# ## Ellipsoidal template
#
# We start with the ellipsoidal template. The objective consider is to maximize `γ`
# such that `γ * inner` is included in the set.
sol_ell, γ_ell = maximal_invariant(Ellipsoid(symmetric=true))
using Plots
function hexcolor(rgb::UInt32)
r = ((0xff0000 & rgb) >> 16) / 255
g = ((0x00ff00 & rgb) >> 8) / 255
b = ((0x0000ff & rgb) ) / 255
Plots.RGBA(r, g, b)
end # Values taken from http://www.toutes-les-couleurs.com/code-couleur-rvb.php
lichen = hexcolor(0x85c17e)
canard = hexcolor(0x048b9a)
aurore = hexcolor(0xffcb60)
frambo = hexcolor(0xc72c48)
cols = [canard, frambo]
x2 = range(0, stop=1, length=20)
x1 = 1 .- x2.^2 / 2
x2l = range(x2l0, stop=1.0 - u_max, length=20)
x1l = -(1 .- (x2l .+ u_max).^2 / 2 .+ u_max)
upper = [[[-1/2, 1.0]]; [[x1l[i], x2l[i]] for i in eachindex(x2l)]; [[x1[i], x2[i]] for i in eachindex(x2)]]
mci = polyhedron(vrep([upper; (-).(upper)]), lib)
polar_mci = polar(mci)
SetProg.Sets.print_support_function(project(sol_ell, 1:2))
# We can plot the primal solution as follows:
function primal_plot(set, γ=nothing; npoints=256, xlim=(-1.05, 1.05), ylim=(-1.05, 1.05), args...)
plot(ratio=:equal, tickfont=Plots.font(12); xlim=xlim, ylim=ylim, args...)
plot!(□_2, color=lichen)
plot!(mci, color=aurore)
plot!(set, color=canard, npoints=npoints)
γ === nothing || plot!(γ * inner, color=frambo)
plot!()
end
primal_plot(project(sol_ell, 1:2), γ_ell)
# and the dual plot as follows:
function polar_plot(set, γ; npoints=256, xlim=(-1.5, 1.5), ylim=(-1.5, 1.5), args...)
plot(ratio=:equal, tickfont=Plots.font(12); xlim=xlim, ylim=ylim, args...)
γ === nothing || plot!(inv(γ) * outer, color=frambo)
plot!(polar(set), color=canard, npoints=npoints)
plot!(polar_mci, color=aurore)
plot!(polar(□_2), color=lichen)
end
polar_plot(project(sol_ell, 1:2), γ_ell)
# ## Polyset template
# We start with quartic polynomials:
p4, γ4 = maximal_invariant(PolySet(symmetric=true, degree=4, convex=true), 0.896)
γ4
# Below is the primal plot:
primal_plot(project(p4, 1:2), γ4)
# and here is the polar plot:
polar_plot(project(p4, 1:2), γ4)
# We now try it with sextic polynomials:
p6, γ6 = maximal_invariant(PolySet(symmetric=true, degree=6, convex=true), 0.93)
γ6
# Below is the primal plot:
primal_plot(project(p6, 1:2), γ6)
# and here is the polar plot:
polar_plot(project(p6, 1:2), γ6)
# We now try it with octic polynomials:
p8, γ8 = maximal_invariant(PolySet(symmetric=true, degree=8, convex=true), 0.96)
γ8
# Below is the primal plot:
primal_plot(project(p8, 1:2), γ8)
# and here is the polar plot:
polar_plot(project(p8, 1:2), γ8)
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 871 | function hexcolor(rgb::UInt32)
r = ((0xff0000 & rgb) >> 16) / 255
g = ((0x00ff00 & rgb) >> 8) / 255
b = ((0x0000ff & rgb) ) / 255
Plots.RGBA(r, g, b)
end
# Values taken from http://www.toutes-les-couleurs.com/code-couleur-rvb.php
troyes = hexcolor(0xfefdf0)
#troyes = Plots.RGBA(0.9961, 0.9922, 0.9412)
frambo = hexcolor(0xc72c48)
#frambo = Plots.RGBA(0.7804, 0.1725, 0.2824)
lichen = hexcolor(0x85c17e)
#lichen = Plots.RGBA(0.5216, 0.7569, 0.4941)
canard = hexcolor(0x048b9a)
#canard = Plots.RGBA(0.0157, 0.5451, 0.6039)
aurore = hexcolor(0xffcb60)
# colors taken from https://previews.123rf.com/images/capacitorphoto/capacitorphoto1410/capacitorphoto141000191/32438941-Graph-Icon-color-set-illustration--Stock-Vector.jpg
red = hexcolor(0xf59297)
gre = hexcolor(0xcbdf80)
blu = hexcolor(0x2394ce)
ora = hexcolor(0xfacb95)
yel = hexcolor(0xf2f08b)
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 10882 | struct L1Heuristic{S} <: AbstractScalarFunction
set::S
support # support of L1 integral
end
Base.copy(l::L1Heuristic) = l
L1_heuristic(volume::Volume, support=nothing) = L1Heuristic(volume.set, support)
Base.show(io::IO, l::L1Heuristic) = print(io, "L1-heuristic(", l.set, ")")
set_space(space::Space, ::L1Heuristic, ::JuMP.Model) = space
function power_integrate(exponent, bound)
exp = exponent + 1
@assert isodd(exp)
return (2/exp) * bound^exp
end
function rectangle_integrate(m::MP.AbstractMonomialLike,
vertex)
exp = MP.exponents(m)
if any(isodd, exp)
return zero(power_integrate(2, vertex[1]))
else
@assert length(exp) == length(vertex)
return prod(power_integrate.(exp, vertex))
end
end
function rectangle_integrate(t::MP.AbstractTermLike,
vertex)
return MP.coefficient(t) * rectangle_integrate(MP.monomial(t), vertex)
end
function rectangle_integrate(p::MP.AbstractPolynomialLike,
vertex)
return sum(rectangle_integrate(t, vertex) for t in MP.terms(p))
end
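# Example (a sketch, assuming `DynamicPolynomials.@polyvar x y`):
# `rectangle_integrate(x^2 * y^2, [1.0, 2.0])` integrates over the box
# [-1, 1] × [-2, 2] and returns (2/3 * 1^3) * (2/3 * 2^3) = 32/9; any monomial
# with an odd exponent integrates to zero by symmetry of the box.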
struct PowerOfLinearForm
coefficients::Vector{Int}
power::Int
end
struct Decomposition
coefficients::Vector{Float64}
forms::Vector{PowerOfLinearForm}
end
using Combinatorics
using Polyhedra
using LinearAlgebra
# See (6) of [BBDKV11]
function _linear_simplex(l::PowerOfLinearForm, s)
ls = [l.coefficients'si for si in points(s)]
sum(prod(i -> ls[i]^k[i], eachindex(ls)) for k in multiexponents(npoints(s), l.power))
end
function _integrate(l::PowerOfLinearForm, s)
current = Polyhedra.unscaled_volume_simplex(s) * _linear_simplex(l, s)
    # We should multiply by `factorial(l.power) / factorial(l.power + fulldim(s))`
    # but we avoid computing `factorial` as it needs big integers for arguments above 20.
for i in (l.power + 1):(l.power + fulldim(s))
current /= i
end
return current
end
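# Sanity check (a sketch): for the unit simplex `s` with points (0, 0), (1, 0)
# and (0, 1), `_integrate(PowerOfLinearForm([1, 0], 1), s)` returns 1/6,
# i.e. the integral of x over that simplex.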
function integrate(l::PowerOfLinearForm, s, cache::Dict{PowerOfLinearForm, Float64})
if !haskey(cache, l)
cache[l] = _integrate(l, s)
end
return cache[l]
end
function integrate_simplex!(ints::Vector{Float64}, decs::Vector{Decomposition}, simplex,
cache = Dict{PowerOfLinearForm, Float64}())
empty!(cache)
for (i, dec) in enumerate(decs)
ints[i] += sum(dec.coefficients[j] * integrate(dec.forms[j], simplex, cache)
for j in eachindex(dec.forms))
end
end
function integrate_decompositions(decs::Vector{Decomposition}, polytope::Polyhedron,
cache = Dict{PowerOfLinearForm, Float64}())
integrals = zeros(length(decs))
for Δ in Polyhedra.triangulation(polytope)
integrate_simplex!(integrals, decs, Δ, cache)
end
return integrals
end
function integrate_monomials(monos::AbstractVector{<:MP.AbstractMonomial}, polytope::Polyhedron)
return integrate_decompositions(decompose.(monos), polytope)
end
function integrate(p::MP.AbstractPolynomial, polytope::Polyhedron)
return MP.coefficients(p)'integrate_monomials(MP.monomials(p), polytope)
end
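# Example (a sketch, assuming `DynamicPolynomials.@polyvar x y` and a polytope
# built with Polyhedra.jl):
# `integrate(x^2 + y^2, polyhedron(vrep([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])))`
# returns 1/6, the integral of x^2 + y^2 over the unit simplex.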
function integrate_gauge_like(set, polytope, decs, val, cache)
integrals = integrate_decompositions(decs, polytope, cache)
return evaluate_monomials(exps -> integrals[val[exps]], set)
end
function l1_integral(set, polytope::Polyhedra.Polyhedron)
decs = Decomposition[]
val = Dict(exps => length(push!(decs, decompose(exps)))
for exps in all_exponents(set))
cache = Dict{PowerOfLinearForm, Float64}()
return integrate_gauge_like(set, polytope, decs, val, cache)
end
"""
As studied in [DPW96].
[DPW96] C. Durieu, B. T. Polyak and E. Walter.
*Trace versus determinant in ellipsoidal outer-bounding, with application to
state estimation*.
IFAC Proceedings Volumes, **1996**.
"""
function l1_integral(ell::Sets.Ellipsoid, vertex::AbstractVector)
n = Sets.dimension(ell)
@assert n == length(vertex)
    # The integral of the off-diagonal entries xy is zero because the rectangle
    # is symmetric.
    # The integral of x^2 over [-a, a] is (a^3 - (-a)^3) / 3 = 2/3 * a^3.
c = 2/3
return sum(i -> ell.Q[i, i] * c * vertex[i]^2, 1:n)
end
"""
As developed in [HL12].
[HL12] D. Henrion and C. Louembet.
*Convex inner approximations of nonconvex semialgebraic sets applied to
fixed-order controller design*.
International Journal of Control, **2012**.
"""
function l1_integral(set::Union{Sets.PolySet,
Sets.ConvexPolySet},
vertex::AbstractVector)
return rectangle_integrate(set.p, vertex)
end
# See (13) of [BBDKV11]
#[BBDKV11] Baldoni, V., Berline, N., De Loera, J., Köppe, M., & Vergne, M. (2011).
# How to integrate a polynomial over a simplex. Mathematics of Computation, 80(273), 297-325.
function decompose(exps::Vector{Int})
f(i) = 0:i
d = sum(exps)
# `factorial` is limited to `d <= 20`.
deno = prod(Float64, 1:d)
coefficients = Float64[]
forms = PowerOfLinearForm[]
for p in Iterators.product(f.(exps)...)
dp = sum(p)
iszero(dp) && continue
coef = prod(i -> binomial(exps[i], p[i]), eachindex(p))
if isodd(d - dp)
coef = -coef
end
push!(coefficients, coef / deno)
push!(forms, PowerOfLinearForm(collect(p), d))
end
return Decomposition(coefficients, forms)
end
decompose(mono::MP.AbstractMonomial) = decompose(MP.exponents(mono))
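# Example: `decompose([1, 1])` expresses the monomial x*y as a combination of
# powers of linear forms: x*y = -x^2/2 - y^2/2 + (x + y)^2/2, i.e. the
# coefficients [-0.5, -0.5, 0.5] for the forms x^2, y^2 and (x + y)^2.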
function all_exponents(set::Union{Sets.PolySet, Sets.ConvexPolySet})
return MP.exponents.(MP.monomials(Sets.space_variables(set), set.degree))
end
function ell_exponents(i, j, n)
exps = zeros(Int, n)
exps[i] += 1
exps[j] += 1
return exps
end
function all_exponents(set::Union{Sets.Ellipsoid,Sets.Piecewise{<:Any, <:Sets.Ellipsoid}})
return [ell_exponents(i, j, Sets.dimension(set)) for j in 1:Sets.dimension(set) for i in 1:j]
end
function all_exponents(set::Sets.Piecewise)
# TODO check that all polysets have same exponents
return all_exponents(set.sets[1])
end
function lin_exponents(i, n)
exps = zeros(Int, n)
exps[i] += 1
return exps
end
function all_exponents(set::Union{Sets.PolarPoint,Sets.Piecewise{<:Any,<:Sets.PolarPoint}})
return [lin_exponents(i, Sets.dimension(set)) for i in 1:Sets.dimension(set)]
end
function evaluate_monomials(monomial_value::Function,
set::Union{Sets.PolySet{T}, Sets.ConvexPolySet{T}}) where T
U = MA.promote_operation(*, Float64, T)
total = zero(MA.promote_operation(+, U, U))
for t in MP.terms(set.p)
total = MA.add_mul!!(total, monomial_value(MP.exponents(MP.monomial(t))), MP.coefficient(t))
end
return total
end
function evaluate_monomials(monomial_value::Function, set::Sets.Ellipsoid{T}) where T
U = MA.promote_operation(*, Float64, T)
total = zero(MA.promote_operation(+, U, U))
for j in 1:Sets.dimension(set)
for i in 1:Sets.dimension(set)
total = MA.add_mul!!(total, monomial_value(ell_exponents(i, j, Sets.dimension(set))), set.Q[i, j])
end
end
return total
end
function evaluate_monomials(monomial_value::Function, set::Sets.PolarPoint{T}) where T
U = MA.promote_operation(*, Float64, T)
total = zero(MA.promote_operation(+, U, U))
for i in 1:Sets.dimension(set)
total = MA.add_mul!!(total, monomial_value(lin_exponents(i, Sets.dimension(set))), set.a[i])
end
return total
end
function l1_integral(set::Sets.Piecewise{T, <:Union{Sets.PolarPoint{T}, Sets.Ellipsoid{T}, Sets.PolySet{T}}},
::Nothing) where T
decs = Decomposition[]
val = Dict(exps => length(push!(decs, decompose(exps)))
for exps in all_exponents(set))
cache = Dict{PowerOfLinearForm, Float64}()
U = MA.promote_operation(*, Float64, T)
total = zero(MA.promote_operation(+, U, U))
for (set, piece) in zip(set.sets, set.pieces)
# Some pieces might be empty cones, e.g., if a projection of a polar set is given.
hasallrays(piece) || continue
@assert !haslines(piece)
# `piece` is a cone, let's cut it with a halfspace
# We normalize as the norm of each ray is irrelevant
cut = normalize(sum(normalize ∘ Polyhedra.coord, rays(piece))) # Just a heuristic, open to better ideas
polytope = piece ∩ HalfSpace(cut, one(eltype(cut)))
total = MA.add!!(total, integrate_gauge_like(set, polytope, decs, val, cache))
end
return total
end
_polar(::Nothing) = nothing
function _polar(vertex::Vector)
◇ = typeof(vertex)[]
for i in eachindex(vertex)
v = zeros(eltype(vertex), length(vertex))
v[i] = 1 / vertex[i]
push!(◇, v)
v = zeros(eltype(vertex), length(vertex))
v[i] = -1 / vertex[i]
push!(◇, v)
end
return Polyhedra.polyhedron(Polyhedra.vrep(◇))
end
# The polar of the rectangle with vertices (-v, v) is not a rectangle but the
# smaller rectangle contained in it has vertices (-1 ./ v, 1 ./ v).
# The set is not necessarily inside this rectangle, we just know that it is
# outside the polar of the rectangle with vertices (-v, v). However, since it
# is homogeneous, only the ratios between the dimensions are important.
function l1_integral(set::Sets.Polar, p::Polyhedra.Polyhedron)
return l1_integral(Polyhedra.polar(set), Polyhedra.polar(p))
end
function l1_integral(set::Sets.Polar, vertex)
return l1_integral(Polyhedra.polar(set), _polar(vertex))
end
function l1_integral(set::Sets.HouseDualOf{<:Sets.AbstractEllipsoid},
vertex)
return l1_integral(Polyhedra.polar(Sets.Ellipsoid(set.set.set.Q)),
vertex)
end
function l1_integral(set::Sets.HouseDualOf{<:Sets.ConvexPolynomialSet},
vertex)
return rectangle_integrate(MP.subs(set.set.set.q, set.set.set.z => 0),
1 ./ vertex)
end
function invert_objective_sense(::Union{Sets.Polar,
Sets.PerspectiveDual})
return false
end
function invert_objective_sense(::Union{Sets.Ellipsoid,
Sets.PolySet,
Sets.ConvexPolySet,
Sets.Piecewise})
return true
end
function objective_sense(model::JuMP.Model, l::L1Heuristic)
sense = data(model).objective_sense
if invert_objective_sense(variablify(l.set))
if sense == MOI.MAX_SENSE
return MOI.MIN_SENSE
elseif sense == MOI.MIN_SENSE
return MOI.MAX_SENSE
else
error("Unsupported objective sense $sense for `L1_heuristic`.")
end
else
return sense
end
end
function objective_function(::JuMP.Model, l::L1Heuristic)
return l1_integral(variablify(l.set), l.support)
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 830 | module SetProg
using LinearAlgebra
include("Sets/Sets.jl")
import MutableArithmetics
const MA = MutableArithmetics
import Reexport
Reexport.@reexport using MultivariateBases
Reexport.@reexport using JuMP
const MOI = JuMP.MOI
Reexport.@reexport using Polyhedra
using SumOfSquares
using MultivariateMoments
import DynamicPolynomials
import MultivariatePolynomials as MP
const SpaceVariable = Sets.SpaceVariable
export Polytope, Ellipsoid, PolySet
export nth_root, L1_heuristic
@enum Space Undecided PrimalSpace DualSpace
include("utilities.jl")
include("spaces.jl")
include("macros.jl") # need to be before `variables.jl` and `constraints.jl` as they use `⊆` in `@constraint`
include("variables.jl")
include("map.jl")
include("constraints.jl")
include("objective.jl")
include("data.jl")
include("optimize.jl")
end # module
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 1096 | struct AffineTerm{T, F}
coefficient::T
func::F
end
struct AffineExpression{T, F <: AbstractScalarFunction} <: AbstractScalarFunction
terms::Vector{AffineTerm{T, F}}
constant::T
end
function Base.:(+)(f::F, g::F) where F <: AbstractScalarFunction
return AffineExpression([AffineTerm(1.0, f), AffineTerm(1.0, g)], 0.0)
end
function Base.:(+)(a::AffineExpression{T, F}, f::F) where {T, F <: AbstractScalarFunction}
return AffineExpression([a.terms; AffineTerm(1.0, f)], a.constant)
end
function Base.:(+)(f::F, a::AffineExpression{T, F}) where {T, F <: AbstractScalarFunction}
return a + f
end
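# Example (a sketch): adding two scalar functions of the same concrete type,
# e.g. `L1_heuristic(volume(S)) + L1_heuristic(volume(T))`, builds an
# `AffineExpression` whose terms are summed in the objective.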
function objective_sense(model::JuMP.Model, f::AffineExpression)
sense = objective_sense(model, f.terms[1])
for i in 2:length(f.terms)
@assert sense == objective_sense(model, f.terms[i])
end
return sense
end
function objective_function(model::JuMP.Model, aff::AffineExpression)
obj = convert(JuMP.AffExpr, aff.constant)
for t in aff.terms
        obj = MA.add_mul!!(obj, t.coefficient, objective_function(model, t.func))
end
return obj
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 1913 | abstract type SetConstraint <: JuMP.AbstractConstraint end
## Loading ##
function need_variablify(set::Union{Polyhedra.Rep, Polyhedra.RepElement,
Sets.AbstractSet})
return false
end
function variablify(set::Union{Polyhedra.Rep, Polyhedra.RepElement,
Sets.AbstractSet})
return set
end
need_variablify(v::SetVariableRef) = true
variablify(v::SetVariableRef) = v.variable
function load(model::JuMP.Model, constraint::SetConstraint)
@assert need_variablify(constraint)
return JuMP.add_constraint(model, variablify(constraint))
end
## Adding ##
struct SetShape <: JuMP.AbstractShape end
struct ConstraintIndex
value::Int
end
function JuMP.add_constraint(model::JuMP.Model, constraint::SetConstraint,
name::String="")
d = data(model)
if d.state != Modeling
error("Constraint of type $(typeof(constraint)) not supported yet")
end
@assert need_variablify(constraint)
index = ConstraintIndex(d.last_index += 1)
d.constraints[index] = constraint
d.names[index] = name
return JuMP.ConstraintRef(model, index, SetShape())
end
### SetConstraintRef ###
const SetConstraintRef{M} = JuMP.ConstraintRef{M, ConstraintIndex, SetShape}
function JuMP.name(cref::SetConstraintRef)
return data(cref.model).names[cref.index]
end
function JuMP.constraint_object(cref::SetConstraintRef)
return data(cref.model).constraints[cref.index]
end
function transformed_constraint(cref::SetConstraintRef)
return data(cref.model).transformed_constraints[cref.index]
end
function JuMP.dual(cref::SetConstraintRef)
return JuMP.dual(transformed_constraint(cref))
end
function SumOfSquares.moment_matrix(cref::SetConstraintRef)
return SumOfSquares.moment_matrix(transformed_constraint(cref))
end
include("membership.jl")
include("inclusion.jl")
include("tangent.jl")
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 3882 | # Avoid variables, constraints and objective being added when loading
@enum State Loading Modeling
mutable struct Data
variables::Set{SetVariableRef}
constraints::Dict{ConstraintIndex, SetConstraint}
transformed_constraints::Dict{ConstraintIndex, JuMP.ConstraintRef}
names::Dict{ConstraintIndex, String}
last_index::Int
objective_sense::MOI.OptimizationSense
objective::Union{Nothing, AbstractScalarFunction}
objective_variable::Union{Nothing, JuMP.VariableRef}
perspective_polyvar::Union{Nothing, SpaceVariable}
space::Space
state::State
spaces::Union{Nothing, Spaces}
end
function JuMP.copy_extension_data(::SetProg.Data, ::JuMP.Model, ::JuMP.Model)
end
function lift_space_variables(data::Data,
x::Vector{SpaceVariable})
@assert data.perspective_polyvar > x[1]
SpaceVariable[data.perspective_polyvar; x]
end
function set_space(cur::Space, space::Space)
if space == Undecided || cur == Undecided || cur == space
return space
else
error("Incompatible constraints/objective, some require to do the modeling in the primal space and some in the dual space.")
end
end
function set_space(d::Data, model::JuMP.Model)
space = Undecided
for (index, constraint) in d.constraints
space = set_space(space, constraint)
end
if d.objective !== nothing
space = set_space(space, d.objective, model)
end
if space == Undecided
space = PrimalSpace
end
d.space = space
end
# Remove spaces created in an earlier `optimize!`
function clear_spaces(d::Data)
for vref in d.variables
clear_spaces(vref)
end
for (index, constraint) in d.constraints
clear_spaces(constraint)
end
end
# No space index stored in constants
function clear_spaces(::Union{Polyhedra.Rep, Sets.AbstractSet}) end
# No perspective variable
function Sets.perspective_variable(::Polyhedra.Rep) end
synchronize_perspective(::Nothing, ::Nothing) = nothing
synchronize_perspective(::Nothing, z::SpaceVariable) = z
synchronize_perspective(z::SpaceVariable, ::Nothing) = z
function synchronize_perspective(z1::SpaceVariable, z2::SpaceVariable)
if z1 !== z2
throw(ArgumentError("Perspective variables do not match"))
end
return z1
end
function synchronize_perspective(d::Data)
z = nothing
for (index, constraint) in d.constraints
z = synchronize_perspective(z, Sets.perspective_variable(constraint))
end
if z === nothing
DynamicPolynomials.@polyvar z
end
d.perspective_polyvar = z
end
function create_spaces(set::Polyhedra.Rep, spaces::Spaces)
return new_space(spaces, Polyhedra.fulldim(set))
end
function create_spaces(set::Union{Sets.AbstractSet, Sets.Projection}, spaces::Spaces)
polyvars = Sets.space_variables(set)
if polyvars === nothing
return new_space(spaces, Sets.dimension(set))
else
return new_space(spaces, polyvars)
end
end
# Create spaces and identify objects lying in the same space
function create_spaces(d::Data)
spaces = Spaces()
for vref in d.variables
create_spaces(vref, spaces)
end
for (index, constraint) in d.constraints
create_spaces(constraint, spaces)
end
d.spaces = spaces
end
function data(model::JuMP.Model)
if !haskey(model.ext, :SetProg)
model.ext[:SetProg] = Data(Set{SetVariableRef}(),
Dict{ConstraintIndex, SetConstraint}(),
Dict{ConstraintIndex, JuMP.ConstraintRef}(),
Dict{ConstraintIndex, String}(), 0,
MOI.FEASIBILITY_SENSE, nothing, nothing,
nothing, Undecided, Modeling, nothing)
model.optimize_hook = optimize_hook
end
return model.ext[:SetProg]
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 1105 | function poly_eval(p::MP.AbstractPolynomial{JuMP.AffExpr},
a::AbstractVector{Float64})
vars = MP.variables(p)
aff = zero(JuMP.AffExpr)
for term in MP.terms(p)
mono = MP.monomial(term)
aff = MA.add_mul!!(aff, mono(vars => a), MP.coefficient(term))
end
return aff
end
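# Worked instance (sketch), assuming DynamicPolynomials variables `x, y` and
# a JuMP variable `α`:
#
#     p = α * x^2 + 2.0 * x * y
#     poly_eval(p, [1.0, 2.0])   # returns the affine expression `α + 4`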
function sublevel_eval(ell::Sets.Ellipsoid,
a::AbstractVector)
return quad_form(ell.Q, a)
end
function sublevel_eval(set::Union{Sets.PolySet,
Sets.ConvexPolySet},
a::AbstractVector)
return poly_eval(MP.polynomial(set.p), a)
end
function sublevel_eval(set::Sets.Householder, a::AbstractVector, β)
x = Sets.space_variables(set)
z = Sets.perspective_variable(set)
    # Rescaling heuristic: with high-degree polynomials, large values of `a`
    # or `β` might cause numerical issues; the heuristic is currently disabled.
if false
scale_a = norm(a)
scale_β = abs(β)
scaling = max(scale_a, scale_β) / sqrt(scale_a * scale_β)
else
scaling = 1.0
end
return Sets.perspective_gauge0(set)(z => β / scaling, x => a / scaling)
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 18552 | ### InclusionConstraint ###
struct InclusionConstraint{SubSetType, SupSetType, KWT} <: SetConstraint
subset::SubSetType
supset::SupSetType
kws::KWT
end
function need_variablify(c::InclusionConstraint)
return need_variablify(c.subset) || need_variablify(c.supset)
end
function variablify(c::InclusionConstraint)
return JuMP.build_constraint(error, variablify(c.subset),
PowerSet(variablify(c.supset)); c.kws...)
end
function clear_spaces(c::InclusionConstraint)
clear_spaces(c.subset)
clear_spaces(c.supset)
end
function Sets.perspective_variable(c::InclusionConstraint)
return synchronize_perspective(Sets.perspective_variable(c.subset),
Sets.perspective_variable(c.supset))
end
function create_spaces(c::InclusionConstraint, spaces::Spaces)
sub = create_spaces(c.subset, spaces)
sup = create_spaces(c.supset, spaces)
return merge_spaces(spaces, sub, sup)
end
JuMP.function_string(print_mode, c::InclusionConstraint) = string(c.subset)
function JuMP.in_set_string(print_mode, c::InclusionConstraint)
string(print_mode == MIME("text/latex") ? "\\subseteq" : "⊆", " ", c.supset)
end
struct PowerSet{S}
set::S
end
# Fallback, might be because `subset` or `sup_powerset` is a `SetVariableRef` or
# a `Polyhedron` (which is handled by `JuMP.add_constraint`).
function JuMP.build_constraint(_error::Function, subset, sup_powerset::PowerSet;
kws...)
InclusionConstraint(subset, sup_powerset.set, kws)
end
### InclusionConstraint for sets ###
## Set in Set ##
# See [LTJ18] B. Legat, P. Tabuada and R. M. Jungers.
# *Computing controlled invariant sets for hybrid systems with applications to model-predictive control*.
# 6th IFAC Conference on Analysis and Design of Hybrid Systems ADHS 2018, **2018**.
function set_space(space::Space, ::InclusionConstraint{<:Sets.LinearImage,
<:Sets.LinearImage})
return set_space(space, DualSpace)
end
# AS ⊆ S <=> S ⊆ A^{-1}S so PrimalSpace works
# AS ⊆ S <=> A^{-T}S∘ ⊆ S∘ so DualSpace works
function set_space(space::Space, ::InclusionConstraint{<:Sets.LinearImage,
<:SetVariableRef})
return space
end
# We can always transform an ellipsoid to primal or dual space so we can handle
# any space
function set_space(space::Space,
::InclusionConstraint{<:SetVariableRef,
<:Sets.AbstractEllipsoid{T}}) where T<:Number
return space
end
# S-procedure: Q ⊆ P <=> xQx ≤ 1 => xPx ≤ 1 <=> xPx ≤ xQx <=> Q - P is PSD
function JuMP.build_constraint(_error::Function,
subset::Sets.Ellipsoid,
sup_powerset::PowerSet{<:Sets.Ellipsoid};
S_procedure_scaling = nothing)
@assert S_procedure_scaling === nothing || isone(S_procedure_scaling)
Q = subset.Q
P = sup_powerset.set.Q
return psd_constraint(Symmetric(Q - P))
end
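# Worked instance of the S-procedure above (sketch): with Q = 2I and P = I,
# {x : 2x'x ≤ 1} ⊆ {x : x'x ≤ 1} and indeed Q - P = I is PSD, so the
# constraint built by `psd_constraint(Symmetric(Q - P))` is feasible.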
# S-procedure: Q ⊆ P <=> q(x) ≤ 1 => p(x) ≤ 1 <=> p(x) ≤ q(x) <= q - p is SOS
function JuMP.build_constraint(_error::Function,
subset::Union{Sets.PolySet,
Sets.ConvexPolySet},
sup_powerset::PowerSet{<:Union{Sets.PolySet,
Sets.ConvexPolySet}};
S_procedure_scaling = nothing)
@assert S_procedure_scaling === nothing || isone(S_procedure_scaling)
q = subset.p
p = sup_powerset.set.p
JuMP.build_constraint(_error, q - p, SOSCone())
end
# S-procedure: Q ⊆ P <=> q - p is SOS
function s_procedure(model, subset, supset; S_procedure_scaling = nothing)
q = subset.p
p = supset.p
if S_procedure_scaling === nothing
S_procedure_scaling = @variable(model)
# We want to avoid creating a non-convex problem. If one of `p` and `q`
# is a polynomial with constant coefficients, we multiply the variable
# by this one
if MP.coefficient_type(q) <: Number
s = S_procedure_scaling * q - p
else
s = q - S_procedure_scaling * p
end
else
s = q - S_procedure_scaling * p
end
return JuMP.build_constraint(error, s, SOSCone())
end
# S-procedure: Q ⊆ P <=> q - p is SOS
function JuMP.add_constraint(model::JuMP.Model,
constraint::InclusionConstraint{<:Sets.Householder,
<:Sets.Householder},
name::String = "")
return JuMP.add_constraint(model,
s_procedure(model, constraint.subset,
constraint.supset;
constraint.kws...), name)
end
_add_constraint_or_not(model, ::Nothing) = nothing
_add_constraint_or_not(model, con) = JuMP.add_constraint(model, con)
function _preprocess_domain(domain)
# TODO `detecthlinearity!` fails to ignore `[1e-17, 0]` coming from
# `zero_eliminate` of `[1e-17, 0, 1]` so we remove duplicates to drop it
domain = polyhedron(
removeduplicates(hrep(domain), Polyhedra.default_solver(domain)),
library(domain)
)
detecthlinearity!(domain)
return domain
end
function _linear_part(model, domain)
Λ = [zero(JuMP.AffExpr) for i in 1:fulldim(domain)]
for (i, h) in enumerate(halfspaces(domain))
iszero(h.β) || error("only cones are supported")
λ = @variable(model, lower_bound = 0.0, base_name = "λ[$i]")
Λ = MA.broadcast!(MA.add_mul, Λ, λ, h.a)
end
return Λ
end
function lin_in_domain(model, h::Vector, domain::Polyhedra.HRep)
domain = _preprocess_domain(domain)
dim(domain) <= 0 && return
a = h + _linear_part(model, domain)
if hashyperplanes(domain)
# If we are in lower dimension,
# we can reduce the size of `a` so that the linear constraint has smaller size.
V = _linspace(domain)
a = V' * a
end
return JuMP.build_constraint(error, a, MOI.Zeros(length(a)))
end
function add_constraint_inclusion_domain(
model::JuMP.Model,
subset::Sets.PolarPoint,
supset::Sets.PolarPoint,
domain::Polyhedra.Polyhedron)
return _add_constraint_or_not(model, lin_in_domain(model, (subset.a - supset.a), domain))
end
function _quadratic_part(model, domain)
Λ = [zero(JuMP.AffExpr) for i in 1:fulldim(domain), j in 1:fulldim(domain)]
for (i, hi) in enumerate(halfspaces(domain))
for (j, hj) in enumerate(halfspaces(domain))
            # Only pairs with j < i are needed: a diagonal product
            # `hi.a * hi.a'` is itself PSD, so adding it never yields a
            # certificate that the PSD constraint could not already provide.
            i <= j && break
(iszero(hi.β) && iszero(hj.β)) || error("only cones are supported")
A = hi.a * hj.a' + hj.a * hi.a'
λ = @variable(model, lower_bound = 0.0, base_name = "λ[$i,$j]")
Λ = MA.broadcast!(MA.add_mul, Λ, λ, A)
end
end
return Λ
end
function _linspace(domain)
L = Matrix{Polyhedra.coefficient_type(domain)}(undef, nhyperplanes(domain), fulldim(domain))
for (i, h) in enumerate(hyperplanes(domain))
iszero(h.β) || error("only cones are supported")
L[i, :] = h.a
end
return LinearAlgebra.nullspace(L)
end
function psd_in_domain(model, Q::Symmetric, domain::Polyhedra.HRep)
domain = _preprocess_domain(domain)
dim(domain) <= 0 && return
A = Q - _quadratic_part(model, domain)
if hashyperplanes(domain)
# If we are in lower dimension,
# we can reduce the size of `A` so that the PSD constraint has smaller size.
V = _linspace(domain)
A = V' * A * V
end
return psd_constraint(Symmetric(A))
end
function lifted_psd_in_domain(model, Q::Symmetric, domain)
# TODO `detecthlinearity!` fails to ignore `[1e-17, 0]` coming from
# `zero_eliminate` of `[1e-17, 0, 1]` so we remove duplicates to drop it
domain = polyhedron(
removeduplicates(hrep(domain), Polyhedra.default_solver(domain)),
library(domain)
)
detecthlinearity!(domain)
dim(domain) <= 0 && return
Λ = [zero(JuMP.AffExpr) for i in 1:fulldim(domain)]
for (i, hi) in enumerate(halfspaces(domain))
λ = @variable(model, lower_bound = 0.0, base_name = "λ[$i]")
Λ = MA.broadcast!(MA.add_mul, Λ, λ, hi.a)
end
off = Q[2:end, 1] - Λ
A = [Q[1, 1] off'
off Q[2:end, 2:end] - _quadratic_part(model, domain)]
if hashyperplanes(domain)
V = _linspace(domain)
V = [1 zeros(1, size(V, 2))
zeros(size(V, 1), 1) V]
A = V' * A * V
end
return psd_constraint(Symmetric(A))
end
function add_constraint_inclusion_domain(
model::JuMP.Model,
subset::Sets.Ellipsoid,
supset::Sets.Ellipsoid,
domain::Polyhedra.Polyhedron)
return _add_constraint_or_not(model, psd_in_domain(model, Symmetric(subset.Q - supset.Q), domain))
end
function JuMP.add_constraint(model::JuMP.Model,
constraint::InclusionConstraint{<:Sets.Piecewise,
<:Sets.Piecewise},
name::String = "")
subset = constraint.subset
supset = constraint.supset
for (i, si) in enumerate(subset.sets)
for (j, sj) in enumerate(supset.sets)
add_constraint_inclusion_domain(model, si, sj, subset.pieces[i] ∩ supset.pieces[j])
end
end
end
# S ⊆ T <=> T* ⊇ S*
function JuMP.build_constraint(_error::Function,
subset::Sets.HouseDualOf,
sup_powerset::PowerSet{<:Sets.HouseDualOf};
kws...)
S = subset
T = sup_powerset.set
JuMP.build_constraint(_error, Sets.perspective_dual(T), PowerSet(Sets.perspective_dual(S)); kws...)
end
# S ⊆ T <=> polar(T) ⊆ polar(S)
function JuMP.build_constraint(_error::Function,
subset::Sets.Polar,
sup_powerset::PowerSet{<:Sets.Polar}; kws...)
S = subset
T = sup_powerset.set
JuMP.build_constraint(_error, Polyhedra.polar(T), PowerSet(Polyhedra.polar(S));
kws...)
end
# See [LTJ18]
function JuMP.build_constraint(_error::Function,
subset::Sets.LinearImage{<:Union{Sets.Polar, Sets.PerspectiveDual}},
sup_powerset::PowerSet{<:Sets.LinearImage{<:Union{Sets.Polar, Sets.PerspectiveDual}}};
kws...)
dim = Sets.dimension(subset)
DynamicPolynomials.@polyvar x[1:dim]
JuMP.build_constraint(_error, apply_map(subset, x),
PowerSet(apply_map(sup_powerset.set, x)); kws...)
end
function JuMP.build_constraint(_error::Function,
subset::Sets.AbstractSet,
sup_powerset::PowerSet{<:Sets.LinearPreImage{<:Sets.AbstractSet}};
kws...)
x = Sets.space_variables(subset)
JuMP.build_constraint(_error, subset,
PowerSet(apply_map(sup_powerset.set, x)); kws...)
end
function JuMP.build_constraint(_error::Function,
subset::Sets.LinearImage{<:Sets.AbstractSet},
sup_powerset::PowerSet{<:Sets.AbstractSet};
kws...)
JuMP.build_constraint(
_error, subset.set,
PowerSet(Sets.LinearPreImage(sup_powerset.set, subset.A)); kws...)
end
function JuMP.build_constraint(_error::Function,
subset::Sets.LinearImage{<:Union{Sets.Polar, Sets.PerspectiveDual}},
sup_powerset::PowerSet{<:Sets.AbstractSet};
kws...)
x = Sets.space_variables(sup_powerset.set)
JuMP.build_constraint(_error, apply_map(subset, x), sup_powerset; kws...)
end
## Set in Polyhedron ##
function set_space(space::Space, ::InclusionConstraint{<:SetVariableRef,
<:Polyhedra.Rep})
return set_space(space, DualSpace)
end
# Ellipsoid #
function JuMP.add_constraint(model::JuMP.Model,
constraint::InclusionConstraint{<:Sets.AbstractSet,
<:Polyhedra.Rep},
name::String = "")
◯ = constraint.subset
□ = constraint.supset
for hp in hyperplanes(□)
@constraint(model, ◯ ⊆ hp)
end
for hs in halfspaces(□)
@constraint(model, ◯ ⊆ hs)
end
end
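# Hypothetical usage sketch: including a set variable in a fixed polytope
# decomposes into one constraint per hyperplane/halfspace of its H-rep:
#
#     □ = polyhedron(hrep([HalfSpace([1.0, 0.0], 1.0),
#                          HalfSpace([-1.0, 0.0], 1.0)]))
#     @variable(model, S, Ellipsoid(symmetric = true, dimension = 2))
#     @constraint(model, S ⊆ □)   # one constraint per halfspace of □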
# S∘ ⊆ [⟨a, x⟩ ≤ β]
# S∘ ⊆ [⟨a/β, x⟩ ≤ 1]
# a/β ∈ S
function JuMP.build_constraint(_error::Function, subset::Sets.Polar,
sup_powerset::PowerSet{<:Polyhedra.HyperPlane})
@assert iszero(sup_powerset.set.β) # Otherwise it is not symmetric around the origin
JuMP.build_constraint(_error, Line(sup_powerset.set.a),
Polyhedra.polar(subset))
end
function JuMP.build_constraint(_error::Function, subset::Sets.Polar,
sup_powerset::PowerSet{<:Polyhedra.HalfSpace})
JuMP.build_constraint(_error, ScaledPoint(sup_powerset.set.a, sup_powerset.set.β), Polyhedra.polar(subset))
end
# τ^{-1}(τ(S)*) ⊆ [⟨a, x⟩ ≤ β]
# τ(S)* ⊆ [⟨(β, -a), (z, x)⟩ ≥ 0]
# (β, -a) ∈ τ(S)
function JuMP.build_constraint(_error::Function, subset::Sets.PerspectiveDual,
sup_powerset::PowerSet{<:Polyhedra.HyperPlane})
    JuMP.build_constraint(_error, SymScaledPoint(-sup_powerset.set.a, sup_powerset.set.β), Sets.perspective_dual(subset))
end
function JuMP.build_constraint(_error::Function, subset::Sets.PerspectiveDual,
sup_powerset::PowerSet{<:Polyhedra.HalfSpace})
JuMP.build_constraint(_error, ScaledPoint(-sup_powerset.set.a, sup_powerset.set.β), Sets.perspective_dual(subset))
end
## Polyhedron in Set ##
function set_space(space::Space, ::InclusionConstraint{<:Polyhedra.Rep,
<:SetVariableRef})
return set_space(space, PrimalSpace)
end
function JuMP.add_constraint(
model::JuMP.Model,
constraint::InclusionConstraint{
<:Polyhedra.VRep,
<:Union{Sets.AbstractSet{<:JuMP.AbstractJuMPScalar},
Polyhedra.HRepElement{<:JuMP.AbstractJuMPScalar}}},
::String = ""
)
□ = constraint.subset
◯ = constraint.supset
for line in lines(□)
@constraint(model, line in ◯)
end
for ray in rays(□)
@constraint(model, ray in ◯)
end
for point in points(□)
@constraint(model, point in ◯)
end
end
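# Hypothetical usage sketch: a fixed polytope given by its vertices is
# included in a set variable by one membership constraint per vertex
# (plus one per ray/line, if any):
#
#     □ = polyhedron(vrep([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]))
#     @constraint(model, □ ⊆ S)   # four point-membership constraints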
# Approach 1:
# [x'Qx ≤ 1] ⊆ [⟨a, x⟩ ≤ β]
# <=> [x'Qx ≤ 1] ⊆ [⟨a/β, x⟩ ≤ 1]
# <=> [x'Qx ≤ 1] ⊆ [(⟨a/β, x⟩)^2 ≤ 1]
# <=> [x'Qx ≤ 1] ⊆ [x'(aa' / β^2)x ≤ 1]
# <=> Q ⪰ aa' / β^2
# Schur's Lemma
# [β^2 a']
# [a Q ]
# Approach 2:
# [x'Qx ≤ 1] ⊆ [⟨a, x⟩ ≤ β]
# <=> [x'Qx ≤ 1] ⊆ [⟨-a, x⟩ ≤ β]
# <=> [x'Qx - z^2 ≤ 0] ⊆ [⟨-a, x⟩z - βz^2 ≤ 0]
# <=> λx'Qx - λz^2 ≥ ⟨-a, x⟩z - βz^2
# <=> λx'Qx - λz^2 ≥ ⟨-a, x⟩z - βz^2
# <=> λx'Qx + ⟨a, x⟩z + (β - λ) z^2 ≥ 0
# [β-λ a'/2]
# [a/2 λQ ]
# TODO how to prove that they are equivalent ?
# Approach 1 is better if β is constant
# Approach 2 is better if Q is constant
_unstable_constantify(x) = x
function _unstable_constantify(x::JuMP.GenericAffExpr)
return isempty(JuMP.linear_terms(x)) ? x.constant : x
end
function _psd_matrix(subset::Sets.Ellipsoid, hs::HalfSpace)
return Symmetric([
_unstable_constantify(hs.β)^2 hs.a'
hs.a subset.Q
])
end
function JuMP.build_constraint(
::Function,
subset::Sets.Ellipsoid,
sup_powerset::PowerSet{<:HalfSpace}
)
return psd_constraint(Symmetric(_psd_matrix(subset, sup_powerset.set)))
end
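# Worked instance of Approach 1 (sketch): the unit disk {x : x'x ≤ 1} touches
# the halfspace [x₁ ≤ 1]; with Q = I, a = [1, 0] and β = 1 the matrix
#     [β² a']   [1 1 0]
#     [a  Q ] = [1 1 0]
#               [0 0 1]
# has eigenvalues 2, 1, 0, hence is PSD, certifying the (tight) inclusion.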
function add_constraint_inclusion_domain(
model::JuMP.Model,
subset::Sets.Ellipsoid,
supset::HalfSpace,
domain
)
return _add_constraint_or_not(
model,
lifted_psd_in_domain(model, _psd_matrix(subset, supset), domain)
)
end
function JuMP.add_constraint(
model::JuMP.Model,
constraint::InclusionConstraint{
<:Sets.Piecewise,
<:HalfSpace},
name::String = ""
)
subset = constraint.subset
supset = constraint.supset
for (piece, set) in zip(subset.pieces, subset.sets)
        add_constraint_inclusion_domain(model, set, supset, piece)
end
end
# [p(x) ≤ 1] ⊆ [⟨a, x⟩ ≤ β]
# <= λ(1 - p(x)) ≤ β - ⟨a, x⟩ (necessary if p is SOS-convex)
# 0 ≤ λ(p(x) - 1) - ⟨a, x⟩ + β
# Set `x = 0`: 0 ≤ λ(p(0) - 1) + β
# hence if p(0) ≤ 1 (i.e. 0 ∈ S), then λ ≤ β / (1 - p(0))
# Homogeneous case: λ ≤ β
# Use build_constraint when SumOfSquares#66 if λ = β (e.g. homogeneous)
function JuMP.add_constraint(model::JuMP.Model,
constraint::InclusionConstraint{<:Union{Sets.ConvexPolySet,
Sets.ConvexPolynomialSet},
<:HalfSpace},
name::String = "")
p = Sets.gauge1(constraint.subset)
h = constraint.supset
x = Sets.space_variables(constraint.subset)
hs = dot(h.a, x) - h.β
if MP.coefficient_type(p) <: Number
λ = @variable(model, lower_bound=0.0, base_name = "λ")
cref = @constraint(model, λ * (p - 1) - hs in SOSCone())
elseif Polyhedra.coefficient_type(h) <: Number
λ = @variable(model, lower_bound=0.0, base_name = "λ")
cref = @constraint(model, (p - 1) - λ * hs in SOSCone())
else
β = _unstable_constantify(h.β)
if β isa Number
# TODO what is a good value for β ???
λ = β/2
else
λ = @variable(model, lower_bound=0.0, base_name = "λ")
end
cref = @constraint(model, λ * (p - 1) - hs in SOSCone())
end
return cref
end
function JuMP.build_constraint(_error::Function,
subset::Sets.Householder,
sup_powerset::PowerSet{<:HalfSpace})
# 0 ≤ βz + ⟨-a, x⟩
x = [sup_powerset.set.β; -sup_powerset.set.a]
H = Sets._householder(subset.h)
# The householder transformation is symmetric and orthogonal so no need to
# worry about whether we should invert or transpose H
y = H * x
JuMP.build_constraint(_error, subset.set,
PowerSet(HalfSpace(y[2:end], -y[1])))
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 1038 | function JuMP.parse_constraint_call(_error::Function, vectorized::Bool,
::Val{:⊂}, lhs, rhs)
_error("Unrecognized symbol ⊂ you mean ⊆ ?")
end
function JuMP.parse_constraint_call(_error::Function, vectorized::Bool,
::Val{:⊆}, lhs, rhs)
parse_code = :()
if vectorized
build_call = :(JuMP.build_constraint.($_error, $(esc(lhs)), $(esc(:(SetProg.PowerSet.($rhs))))))
else
build_call = :(JuMP.build_constraint($_error, $(esc(lhs)), $(esc(:(SetProg.PowerSet($rhs))))))
end
return parse_code, build_call
end
function JuMP.parse_constraint_call(_error::Function, vectorized::Bool,
::Val{:⊃}, lhs, rhs)
_error("Unrecognized symbol ⊃, did you mean ⊇ ?")
end
function JuMP.parse_constraint_call(_error::Function, vectorized::Bool,
::Val{:⊇}, lhs, rhs)
    return JuMP.parse_constraint_call(_error, vectorized, Val(:⊆), rhs, lhs)
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 3161 | need_variablify(p::Sets.Projection) = need_variablify(p.set)
Sets.perspective_variable(p::Sets.Projection) = Sets.perspective_variable(p.set)
clear_spaces(p::Sets.Projection) = clear_spaces(p.set)
need_variablify(lm::Sets.LinearImage) = need_variablify(lm.set)
function variablify(lm::Sets.LinearImage)
return Sets.LinearImage(variablify(lm.set), lm.A)
end
"""
apply_map(li::Sets.LinearImage{<:Sets.Polar}, new_vars)
The set ``(AS)^\\circ``, the polar of the set ``AS``, is ``A^{-\\top}S^\\circ``.
"""
function apply_map(li::Sets.LinearImage{<:Sets.Polar}, new_vars)
return Sets.polar(apply_map(Sets.LinearPreImage(Sets.polar(li.set), li.A'), new_vars))
end
function apply_map(li::Sets.LinearPreImage{<:Sets.PolarPoint}, new_vars)
return Sets.PolarPoint(li.A' * li.set.a)
end
"""
apply_map(li::Sets.LinearPreImage{<:Sets.Ellipsoid}, new_vars)
Given ``S = \\{\\, x \\mid x^\\top Q x \\le 1\\,\\}``, we have
``A^{-1}S = \\{\\, x \\mid x^\\top A^\\top Q A x \\le 1\\,\\}``.
"""
function apply_map(li::Sets.LinearPreImage{<:Sets.Ellipsoid}, new_vars)
return Sets.Ellipsoid(Symmetric(li.A' * li.set.Q * li.A))
end
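# Worked instance (sketch): with A = [2 0; 0 1] and the unit disk (Q = I),
# A⁻¹S = {x : x'(A'A)x ≤ 1} = {x : 4x₁² + x₂² ≤ 1}, i.e. the returned set is
# `Sets.Ellipsoid(Symmetric([4 0; 0 1]))`.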
function apply_map(li::Sets.LinearPreImage{<:Sets.Piecewise}, new_vars)
return Sets.Piecewise(
[apply_map(Sets.LinearPreImage(set, li.A), new_vars) for set in li.set.sets],
li.A \ li.set.polytope,
[li.A \ piece for piece in li.set.pieces],
li.set.graph # /!\ FIXME The nij are now incorrect
)
end
"""
apply_map(li::Sets.LinearPreImage{<:Sets.PolySet}, new_vars)
Given ``S = \\{\\, x \\mid p(x) \\le 1\\,\\}``, we have
``A^{-1}S = \\{\\, x \\mid p(Ax) \\le 1\\,\\}``.
"""
function apply_map(li::Sets.LinearPreImage{<:Sets.PolySet}, new_vars)
deg = li.set.degree
@assert iseven(deg)
q = apply_matrix(li.set.p, li.A, new_vars, div(deg, 2))
return Sets.PolySet(deg, q)
end
function apply_map(li::Sets.LinearPreImage{<:Sets.ConvexPolySet}, new_vars)
deg = li.set.degree
@assert iseven(deg)
q = apply_matrix(li.set.p, li.A, new_vars, div(deg, 2))
return Sets.ConvexPolySet(deg, q, nothing)
end
"""
apply_map(li::Sets.LinearImage{<:Sets.PerspectiveDualOf{<:Union{Sets.PerspectiveEllipsoid,
Sets.PerspectiveConvexPolynomialSet}}}, new_vars)
The set ``(AS)^\\circ``, the polar of the set ``AS``, is ``A^{-\\top}S^\\circ``
and given ..., we have
...
"""
function apply_map(li::Sets.LinearImage{<:Sets.PerspectiveDualOf{<:Sets.Householder{T}}}, new_vars) where T
old_vars = Sets.space_variables(li.set)
p = MP.subs(Sets.perspective_gauge0(li.set.set), old_vars => li.A' * new_vars)
dual = Sets.Householder(Sets.UnknownSet{T}(), p, li.A * li.set.set.h,
Sets.perspective_variable(li.set), new_vars)
return Sets.perspective_dual(dual)
end
abstract type SymbolicVariable end
# FIXME, for Sets.AbstractSet, we should apply it directly
function Base.:(*)(A::AbstractMatrix, set::Union{SetVariableRef, Sets.AbstractSet, SymbolicVariable})
return Sets.LinearImage(set, A)
end
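# Hypothetical usage sketch (see [LTJ18] in inclusion.jl): the overload above
# turns `A * S` into a `Sets.LinearImage`, so invariance of a set variable
# under a linear map can be written directly:
#
#     A = [0.0 1.0; -1.0 0.0]
#     @variable(model, S, Ellipsoid(symmetric = true, dimension = 2))
#     @constraint(model, A * S ⊆ S)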
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 9812 | include("eval.jl")
### MembershipConstraint ###
struct ScaledPoint{T, S, P<:AbstractVector{T}}
coord::P
scaling::S
end
Base.copy(p::ScaledPoint) = ScaledPoint(p.coord, p.scaling)
struct SymScaledPoint{T, S, P<:AbstractVector{T}}
coord::P
scaling::S
end
Base.copy(p::SymScaledPoint) = SymScaledPoint(p.coord, p.scaling)
const Point{T} = Union{AbstractVector{T}, ScaledPoint{T}}
Polyhedra.coord(p::Union{ScaledPoint, SymScaledPoint}) = p.coord
scaling(::AbstractVector) = 1.0
scaling(p::Union{ScaledPoint, SymScaledPoint}) = p.scaling
Sets.perspective_variable(::Point) = nothing
clear_spaces(::Point) = nothing
create_spaces(point::Point, spaces::Spaces) = new_space(spaces, length(point))
"""
struct MembershipConstraint{P, S} <: SetConstraint
member::P
set::S
end
Constrain `member in set`.
"""
struct MembershipConstraint{P, S} <: SetConstraint
member::P
set::S
end
need_variablify(c::MembershipConstraint) = need_variablify(c.set)
function variablify(c::MembershipConstraint)
return JuMP.build_constraint(error, variablify(c.member), variablify(c.set))
end
JuMP.function_string(print_mode, c::MembershipConstraint) = string(c.member)
JuMP.in_set_string(print_mode, c::MembershipConstraint) = string(JuMP.math_symbol(print_mode, :in), " ", c.set)
function JuMP.build_constraint(_error::Function, member,
set::Union{Sets.AbstractSet, Sets.Projection, SetVariableRef})
MembershipConstraint(member, set)
end
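# Hypothetical usage sketch: membership uses the standard JuMP `in` syntax;
# a `ScaledPoint(a, β)` expresses that `a/β` belongs to the set:
#
#     @constraint(model, [0.5, 0.5] in S)
#     @constraint(model, ScaledPoint([1.0, 1.0], 2.0) in S)   # i.e. [0.5, 0.5]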
# TODO
set_space(space::Space, ::MembershipConstraint) = space
# a/β ∈ S∘
# S ⊆ [⟨a/β, x⟩ ≤ 1]
# S ⊆ [⟨a, x⟩ ≤ β]
function JuMP.build_constraint(_error::Function, member::Point,
set::Sets.Polar)
if set.set isa Sets.AbstractEllipsoid{<:Number}
# The `else` will produce an SDP which is less efficiently solved than
# a SOC
return JuMP.build_constraint(_error, member, Sets.ellipsoid(set))
else
return JuMP.build_constraint(_error, Polyhedra.polar(set),
PowerSet(HalfSpace(coord(member),
scaling(member))))
end
end
# a/β ∈ τ^{-1}(τ(S)*)
# (β, a) ∈ τ(S)*
# τ(S) ⊆ [⟨(β, a), (z, x)⟩ ≥ 0]
# S ⊆ [⟨-a, x⟩ ≤ β]
function JuMP.build_constraint(_error::Function, member::Point,
set::Sets.PerspectiveDual)
if set.set isa Union{Sets.AbstractEllipsoid{<:Number},
Sets.Householder{<:Number, <:Sets.AbstractEllipsoid{<:Number}}}
# The `else` will produce an SDP which is less efficiently solved than
# a SOC
return JuMP.build_constraint(_error, member, Sets.ellipsoid(set))
else
return JuMP.build_constraint(_error, Sets.perspective_dual(set),
PowerSet(HalfSpace(-coord(member),
scaling(member))))
end
end
function JuMP.build_constraint(_error::Function,
member::Point{<:Number},
set::Sets.Piecewise)
# We take `coord` and ignore the scaling as `piece` is a cone.
piece = findfirst(piece -> Polyhedra.coord(member) in piece, set.pieces)
JuMP.build_constraint(_error, member, set.sets[piece])
end
function JuMP.build_constraint(_error::Function,
member::Point{<:JuMP.AbstractJuMPScalar},
img::Sets.HyperSphere)
JuMP.build_constraint(_error, [scaling(member); coord(member)],
SecondOrderCone())
end
function JuMP.build_constraint(_error::Function,
member::Point,
img::Sets.LinearPreImage)
JuMP.build_constraint(_error, img.A * member, img.set)
end
function JuMP.build_constraint(_error::Function,
member::Point,
t::Sets.Translation)
JuMP.build_constraint(_error, member - t.c, t.set)
end
function JuMP.build_constraint(_error::Function,
member::Point{<:JuMP.AbstractJuMPScalar},
ell::Sets.Ellipsoid{<:Number})
# The eltype of Point is an expression of JuMP variables so we cannot
# compute (x-c)' * Q * (x-c) <= 1, we need to transform it to
# ||L * (x - c)||_2 <= 1
U, S, V = svd(ell.Q.data)
L = diagm(0 => sqrt.(S)) * V'
sphere = Sets.LinearPreImage(Sets.HyperSphere(size(L, 1)), L)
JuMP.build_constraint(_error, member, sphere)
end
function JuMP.build_constraint(_error::Function,
member::Point{<:JuMP.AbstractJuMPScalar},
set::Sets.AbstractEllipsoid{<:Number})
JuMP.build_constraint(_error, member, Sets.ellipsoid(set))
end
function JuMP.build_constraint(_error::Function,
member::Point{<:JuMP.AbstractJuMPScalar},
set::Sets.Ellipsoid{<:JuMP.AbstractJuMPScalar})
# The eltype of both `member` and `set` is an expression of JuMP variables
# so we cannot use the linear constraint `x' * Q * x <= 1` nor transform it
# to SOC. We need to use the SDP constraint:
# [ 1 x' ]
# [ x Q^{-1} ] ⪰ 0 so we switch to the polar representation
JuMP.build_constraint(_error, member, Polyhedra.polar_representation(set))
end
function JuMP.build_constraint(_error::Function,
member::Point{<:Number},
set::Union{Sets.Ellipsoid,
Sets.PolySet,
Sets.ConvexPolySet})
JuMP.build_constraint(_error, sublevel_eval(set, AbstractVector{Float64}(coord(member))),
MOI.LessThan(Float64(scaling(member)^2)))
end
function JuMP.build_constraint(_error::Function,
member::Polyhedra.Line,
set::Union{Sets.Ellipsoid,
Sets.PolySet,
Sets.ConvexPolySet})
# We must have (λl)^T Q (λl) ≤ 1 for all λ hence we must have l^T Q l ≤ 0
# As Q is positive definite, it means l^T Q l = 0
l = Polyhedra.coord(member)
JuMP.build_constraint(_error, sublevel_eval(set, l), MOI.EqualTo(0.0))
end
function JuMP.build_constraint(_error::Function,
member::Polyhedra.Ray,
set::Union{Sets.Ellipsoid,
Sets.PolySet,
Sets.ConvexPolySet})
# We must have (λl)^T Q (λl) ≤ 1 for all λ > 0 hence we must have l^T Q l ≤ 0
# As Q is positive definite, it means l^T Q l = 0
r = Polyhedra.coord(member)
JuMP.build_constraint(_error, sublevel_eval(set, r), MOI.EqualTo(0.0))
end
function JuMP.build_constraint(_error::Function,
member::Point{<:Number},
set::Sets.Householder)
p = member
val = sublevel_eval(set, coord(p), scaling(p))
JuMP.build_constraint(_error, val, MOI.LessThan(0.0))
end
function JuMP.build_constraint(_error::Function,
member::SymScaledPoint{<:Number},
set::Sets.Householder)
p = member
val = sublevel_eval(set, coord(p), scaling(p))
    JuMP.build_constraint(_error, val, MOI.EqualTo(0.0))
end
# TODO simplify when https://github.com/JuliaOpt/SumOfSquares.jl/issues/3 is done
function _moment_matrix(f, monos)
# Takes into account that some monomials are the same so they should
# have the same value
d = Dict{typeof(exponents(first(monos))), typeof(f(1,1))}()
function element(i, j)
mono = monos[i] * monos[j]
exps = exponents(mono)
if !haskey(d, exps)
d[exps] = f(i, j)
end
return d[exps]
end
return MomentMatrix(element, monos)
end
function JuMP.add_constraint(
model::JuMP.Model,
constraint::SetProg.MembershipConstraint{
Vector{T}, SetProg.Sets.ConvexPolySet{Float64}},
name::String = "") where T
set = constraint.set
@assert iseven(set.degree)
monos = monomials(SetProg.Sets.space_variables(set), 0:div(set.degree, 2))
U = promote_type(T, Int, JuMP.VariableRef)
function variable(i, j)
if i == length(monos) && j == length(monos)
return one(U)
elseif i == length(monos) || j == length(monos)
return convert(U, constraint.member[length(monos) - min(i, j)])
else
return convert(U, JuMP.VariableRef(model))
end
end
ν = _moment_matrix(variable, monos)
@constraint(model, ν.Q.Q in MOI.PositiveSemidefiniteConeTriangle(length(monos)))
p = SetProg.Sets.gauge1(set)
scalar_product = dot(measure(ν), polynomial(p))
@constraint(model, scalar_product in MOI.LessThan(1.0))
end
function JuMP.build_constraint(_error::Function,
member::Point,
h::Sets.PolarPoint)
r = Polyhedra.coord(member)
JuMP.build_constraint(_error, r'h.a - scaling(member), MOI.LessThan(0.0))
end
function JuMP.build_constraint(_error::Function,
member::Point,
h::Polyhedra.HalfSpace{AffExpr})
r = Polyhedra.coord(member)
JuMP.build_constraint(_error, r'h.a - scaling(member) * h.β, MOI.LessThan(0.0))
end
function JuMP.build_constraint(_error::Function,
member::Polyhedra.Ray,
h::Polyhedra.HalfSpace)
r = Polyhedra.coord(member)
JuMP.build_constraint(_error, r'h.a, MOI.LessThan(0.0))
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 2588 | struct RootVolume{S} <: AbstractScalarFunction
set::S
end
Base.copy(rv::RootVolume) = rv
nth_root(volume::Volume) = RootVolume(volume.set)
Base.show(io::IO, rv::RootVolume) = print(io, "volume^(1/n)(", rv.set, ")")
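# Hypothetical usage sketch: `nth_root(volume(S))` is the objective used for
# volume optimization; as explained below, minimizing it forces the primal
# space while maximizing it forces the dual space:
#
#     @objective(model, Max, nth_root(volume(S)))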
# Primal:
# set : x^T Q x ≤ 1
# volume proportional to 1/det(Q)
# t ≤ det(Q)^(1/n) <=> 1/t^n ≥ 1/det(Q)
# volume proportional to 1/t^n
# For t ≤ det(Q)^(1/n) to be tight we need to maximize `t`
# hence we need to minimize the volume
# Dual:
# set : x^T Q^{-1} x ≤ 1
# volume proportional to det(Q)
# t ≤ det(Q)^(1/n) <=> t^n ≥ det(Q)
# volume proportional to t^n
# For t ≤ det(Q)^(1/n) to be tight we need to maximize `t`
# hence we need to maximize the volume
function set_space(space::Space, rv::RootVolume, model::JuMP.Model)
if rv.set isa Ellipsoid
rv.set.guaranteed_psd = true
end
sense = data(model).objective_sense
if sense == MOI.MIN_SENSE
return set_space(space, PrimalSpace)
else
# The sense cannot be FEASIBILITY_SENSE since the objective function is
# not nothing
@assert sense == MOI.MAX_SENSE
return set_space(space, DualSpace)
end
end
function ellipsoid_root_volume(model::JuMP.Model, Q::AbstractMatrix)
n = LinearAlgebra.checksquare(Q)
t = @variable(model, base_name="t")
upper_tri = [Q[i, j] for j in 1:n for i in 1:j]
@constraint(model, [t; upper_tri] in MOI.RootDetConeTriangle(n))
return t
end
function root_volume(model::JuMP.Model, ell::Union{Sets.PolarOrNot{<:Sets.Ellipsoid},
Sets.HouseDualOf{<:Sets.AbstractEllipsoid}})
return ellipsoid_root_volume(model, Sets.convexity_proof(ell))
end
"""
root_volume(model::JuMP.Model,
set::Sets.PolarOrNot{<:Sets.ConvexPolySet})
Section IV.A of [MLB05].
[MLB05] A. Magnani, S. Lall and S. Boyd.
*Tractable fitting with convex polynomials via sum-of-squares*.
Proceedings of the 44th IEEE Conference on Decision and Control, and European Control Conference 2005,
**2005**.
"""
function root_volume(model::JuMP.Model,
set::Sets.PolarOrNot{<:Sets.ConvexPolySet})
if Sets.convexity_proof(set) === nothing
error("Cannot optimize volume of non-convex polynomial sublevel set.",
" Use PolySet(convex=true, ...)")
end
return ellipsoid_root_volume(model, Sets.convexity_proof(set))
end
objective_sense(::JuMP.Model, ::RootVolume) = MOI.MAX_SENSE
function objective_function(model::JuMP.Model, rv::RootVolume)
return root_volume(model, variablify(rv.set))
end
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 818 | abstract type AbstractScalarFunction <: JuMP.AbstractJuMPScalar end
struct Volume{S} <: AbstractScalarFunction
set::S
end
Polyhedra.volume(variable::Union{SetVariableRef, Sets.Projection{<:SetVariableRef}, Sets.AbstractSet{<:JuMP.AbstractJuMPScalar}}) = Volume(variable)
function JuMP.set_objective(model::JuMP.Model,
sense::MOI.OptimizationSense,
func::AbstractScalarFunction)
d = data(model)
@assert d.state == Modeling
d.objective_sense = sense
d.objective = func
end
function load(model::JuMP.Model, f::AbstractScalarFunction)
JuMP.set_objective_sense(model, objective_sense(model, f))
JuMP.set_objective_function(model, objective_function(model, f))
end
include("affine.jl")
include("nth_root.jl")
include("L1_heuristic.jl")
| SetProg | https://github.com/blegat/SetProg.jl.git |
|
[
"MIT"
] | 0.4.0 | 7fe53b00c86f4bd820ea7d8d97dcb3cf3287e0eb | code | 941 | function load(model::JuMP.Model, d::Data)
d.state = Loading
for variable in d.variables
load(model, variable)
end
for (index, constraint) in d.constraints
cref = load(model, constraint)
if cref !== nothing
d.transformed_constraints[index] = cref
end
end
if d.objective !== nothing
load(model, d.objective)
end
d.state = Modeling
end
# Set as the `optimize_hook` of the JuMP model (see `data`) so that it is
# called by `JuMP.optimize!` once at least one set variable has been created.
function optimize_hook(model::JuMP.AbstractModel)
d = data(model)
set_space(d, model)
# In case `optimize!` is called then the problem is modified and then it is
# called again we need to clear first the space that might be wrong
synchronize_perspective(d)
clear_spaces(d)
create_spaces(d)
load(model, d)
JuMP.optimize!(model, ignore_optimize_hook = true)
end
| SetProg | https://github.com/blegat/SetProg.jl.git |